Re: Status and maturity of riak-cs

2017-07-20 Thread Alex De la rosa
I never had the time to implement CS, but my idea was to use it as
photo/video storage for the social network I was building on Riak-KV. But
seeing Basho's situation, it's better to look for a CS alternative.

LeoFS looks good, and it seems Rakuten is one of its sponsors... I might
well test it.

Thanks,
Alex

On Thu, Jul 20, 2017 at 10:35 AM, Russell Brown <russell.br...@icloud.com>
wrote:

> I was really only thinking of LeoFS, Ceph, and Joyent Manta.
>
> There must be others.
>
> Or stay on CS. What do you need to stay on it? Basho were certainly not
> doing anything.
>
> On 20 Jul 2017, at 06:57, Alex De la rosa <alex.rosa@gmail.com> wrote:
>
> > "However there are alternatives under active development."... can you
> > please list them? I'm also interested in a CS alternative.
> >
> > Thanks,
> > Alex
> >
> > On Wed, Jul 19, 2017 at 9:27 PM, Russell Brown <russell.br...@icloud.com>
> wrote:
> > Hi Stefan
> > On the one hand, even before the demise of Basho, they’d stopped
> > supporting Riak CS. On the other, there is an organisation based in
> > Japan, but with an international remote team, that supports other CS
> > customers, so it may well be an option for support.
> >
> > The CS code base has not had a huge amount of recent attention, but
> > there are plenty of people running it in industry, at reasonable scale.
> >
> > There’s a genuine market of providers of ex-Basho products, and a
> community of CS users. However there are alternatives under active
> development.
> >
> > Regards
> >
> > Russell
> >
> > On 19 Jul 2017, at 17:27, Stefan Funk <stefan.f...@gmail.com> wrote:
> >
> > > Hi everybody,
> > >
> > > I'm new to Riak-CS and just joined the group.
> > >
> > We've been exploring Riak-CS for a couple of days now and consider it
> > a potential in-house alternative to external S3-based storage providers.
> > >
> > > Given the last commit was in January 2016, the question arose as to how
> > > well the project is supported and how mature the solution is.
> > >
> > > I'd be very thankful for any comments from the community on this.
> > >
> > > Best regards
> > > Stefan
> > >
> > >
> > > ___
> > > riak-users mailing list
> > > riak-users@lists.basho.com
> > > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Status and maturity of riak-cs

2017-07-19 Thread Alex De la rosa
"However there are alternatives under active development."... can you
please list them? I'm also interested in a CS alternative.

Thanks,
Alex

On Wed, Jul 19, 2017 at 9:27 PM, Russell Brown 
wrote:

> Hi Stefan
> On the one hand, even before the demise of Basho, they’d stopped
> supporting Riak CS. On the other, there is an organisation based in Japan,
> but with an international remote team, that supports other CS customers,
> so it may well be an option for support.
>
> The CS code base has not had a huge amount of recent attention, but there
> are plenty of people running it in industry, at reasonable scale.
>
> There’s a genuine market of providers of ex-Basho products, and a
> community of CS users. However there are alternatives under active
> development.
>
> Regards
>
> Russell
>
> On 19 Jul 2017, at 17:27, Stefan Funk  wrote:
>
> > Hi everybody,
> >
> > I'm new to Riak-CS and just joined the group.
> >
> > We've been exploring Riak-CS for a couple of days now and consider it a
> > potential in-house alternative to external S3-based storage providers.
> >
> > Given the last commit was in January 2016, the question arose as to how
> > well the project is supported and how mature the solution is.
> >
> > I'd be very thankful for any comments from the community on this.
> >
> > Best regards
> > Stefan
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Alternative to Riak?

2017-07-15 Thread Alex De la rosa
Hi there,

As Riak's future is uncertain, and despite loving it because it covers
everything I need, I'm on the lookout for another NoSQL DB to replace it in
my project. What are your suggestions? Some of my requirements (Riak
features) would be:

1. Master-less (all nodes can read/write)
2. Schema-less (except if I need a SOLR-style custom index for a certain
bucket)
3. Replication
4. Counters, Sets, Maps...
5. Python client instead of REST interface
6. Indexing (SOLR-like) for background data analysis
7. Self-hosted (for example, DynamoDB adds latency to my communications)
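
For comparison purposes, points 4 and 5 together look roughly like this with
the current riak Python client: a minimal sketch, assuming a local node and
'counters'/'sets' bucket types (hypothetical names) already created and
activated; any replacement would need an equivalent:

import riak

client = riak.RiakClient(protocol='pbc', pb_port=8087)

# CRDT counter: assumes bucket type 'counters' was created with
# datatype = counter.
likes = client.bucket_type('counters').bucket('post_likes').new('post_1')
likes.increment(1)
likes.store()

# CRDT set: assumes bucket type 'sets' was created with datatype = set.
tags = client.bucket_type('sets').bucket('post_tags').new('post_1')
tags.add('holidays')
tags.store()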

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: app.config missing?

2016-09-19 Thread Alex De la rosa
OK, the documentation was confusing; I thought I had to add the data in both
riak.conf and app.config.

Thanks,
Alex

On Mon, Sep 19, 2016 at 11:42 AM, Magnus Kessler <mkess...@basho.com> wrote:

> On 18 September 2016 at 07:51, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
>> Hi there,
>>
>> I'm trying to locate the app.config file in Riak 2.1.4-1 to add the
>> following:
>>
>> { kernel, [
>> {inet_dist_listen_min, 6000},
>> {inet_dist_listen_max, 7999}
>>   ]},
>>
>> as explained at http://docs.basho.com/riak/kv/2.1.4/using/security but I
>> can't find it.
>>
>> Thanks,
>> Alex
>>
>
>
> Hi Alex,
>
> With Riak 2.x we recommend using the new configuration mechanism (a.k.a.
> cuttlefish). Please use the instructions for using riak.conf on the page
> you quoted.
>
> erlang.distribution.port_range.minimum = 6000
> erlang.distribution.port_range.maximum = 7999
>
> For more information about Riak's configuration system, please see the
> configuration reference documentation [0].
>
> Kind Regards,
>
> Magnus
>
> [0]: http://docs.basho.com/riak/kv/2.1.4/configuring/reference/
>
>  --
> Magnus Kessler
> Client Services Engineer
> Basho Technologies Limited
>
> Registered Office - 8 Lincoln’s Inn Fields London WC2A 3BP Reg 07970431
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


app.config missing?

2016-09-18 Thread Alex De la rosa
Hi there,

I'm trying to locate the app.config file in Riak 2.1.4-1 to add the
following:

{ kernel, [
{inet_dist_listen_min, 6000},
{inet_dist_listen_max, 7999}
  ]},

as explained at http://docs.basho.com/riak/kv/2.1.4/using/security but I
can't find it.

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak cluster protected by firewall

2016-09-18 Thread Alex De la rosa
So mainly the ports are:

epmd listener: TCP:4369
handoff_port listener: TCP:8099
http: TCP:8098
protocol buffers: TCP:8087
solr: TCP:8093
solr jmx: TCP:8985
erlang range: TCP:6000~7999 (if configured in riak.conf)

Is that right? Am I missing any? Or are there any that don't need to be
added to the firewall?
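
If it helps, a small Python sketch that prints the matching ufw rules for
each peer node (the peer IPs are placeholders; the port list is the one
above):

ports = {
    4369: "epmd",
    8099: "handoff",
    8098: "http",
    8087: "protocol buffers",
    8093: "solr",
    8985: "solr jmx",
}
peers = ["10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"]

for ip in peers:
    for port, name in sorted(ports.items()):
        print("ufw allow from %s to any port %d proto tcp  # %s" % (ip, port, name))
    # Erlang distribution range, only if configured in riak.conf:
    print("ufw allow from %s to any port 6000:7999 proto tcp" % ip)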

Thanks,
Alex

On Sun, Sep 18, 2016 at 5:57 AM, John Daily <jda...@basho.com> wrote:

> You should find most of what you need here:
> http://docs.basho.com/riak/kv/2.1.4/using/security/
>
> Sent from my iPhone
>
> On Sep 17, 2016, at 1:26 PM, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
> Hi all,
>
> I have a cluster of 5 nodes connected to each other, and now I want to use
> UFW to deny any external incoming traffic to them while still allowing the
> nodes to access each other. Which ports should I open
> (pb_port, http_port, solr, ...)? I connect via PBC, but I guess I may need
> more ports open.
>
> A configuration like this (assuming this is node_1):
>
> ufw default deny incoming
> ufw default allow outgoing
> ufw allow 22 --> SSH (private keys)
> ufw allow from <node_IP> to any port 443 --> HTTPS (API that talks
> with Riak locally via Python client)
>
> ufw allow from <node_2_IP> to any port <port>
> ufw allow from <node_3_IP> to any port <port>
> ufw allow from <node_4_IP> to any port <port>
> ufw allow from <node_5_IP> to any port <port>
>
> Thanks!
> Alex
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak cluster protected by firewall

2016-09-17 Thread Alex De la rosa
Hi all,

I have a cluster of 5 nodes connected to each other, and now I want to use
UFW to deny any external incoming traffic to them while still allowing the
nodes to access each other. Which ports should I open
(pb_port, http_port, solr, ...)? I connect via PBC, but I guess I may need
more ports open.

A configuration like this (assuming this is node_1):

ufw default deny incoming
ufw default allow outgoing
ufw allow 22 --> SSH (private keys)
ufw allow from <node_IP> to any port 443 --> HTTPS (API that talks
with Riak locally via Python client)

ufw allow from <node_2_IP> to any port <port>
ufw allow from <node_3_IP> to any port <port>
ufw allow from <node_4_IP> to any port <port>
ufw allow from <node_5_IP> to any port <port>

Thanks!
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Changing ring size on 1.4 cluster

2016-06-01 Thread Alex De la rosa
Can the ring size be changed easily in Riak 2.X?

Imagine I have 5 servers originally with a ring_size = 64... If later on I
add 5 more servers (10 in total) and I also want to double the number of
partitions, can I just edit the ring_size to 128?

What would the process be? Will it rebalance properly, with no issues?
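
For what it's worth, Riak 2.x added dynamic ring resizing, so a sketch of
the flow (assuming the riak-admin commands as documented for 2.0; always
review the plan before committing) would be:

riak-admin cluster resize-ring 128
riak-admin cluster plan
riak-admin cluster commit

The resize runs via handoff in the background, so some extra transfer load
while it rebalances would be expected.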

Thanks,
Alex

On Wed, Jun 1, 2016 at 10:46 PM, Luke Bakken  wrote:

> Hi Johnny,
>
> Yes, the latter two are your main options. For a 1.4 series Riak
> installation, your only option is to bring up a new cluster with the
> desired ring size and replicate data.
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Fri, May 27, 2016 at 12:11 PM, Johnny Tan  wrote:
> > The docs
> http://docs.basho.com/riak/kv/2.1.4/configuring/basic/#ring-size
> > seem to imply that there's no easy, non-destructive way to change a
> > cluster's ring size live for Riak-1.4x.
> >
> > I thought about replacing one node at a time, but you can't join a new
> node
> > or replace an existing one with a node that has a different ring size.
> >
> > I was also thinking of bringing up a completely new cluster with the new
> > ring size, then replicating the data from the original cluster, and
> > taking a quick maintenance window to fail over to the new cluster.
> >
> > One other alternative seems to be to upgrade to 2.0, and then use 2.x's
> > ability to resize the ring.
> >
> > Are these latter two my main options?
> >
> > johnny
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Changing ring size on 1.4 cluster

2016-06-01 Thread Alex De la rosa
Any answers about this?

Thanks,
Alex

On Sun, May 29, 2016 at 2:31 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> Hi there,
>
> I'm interested in knowing how to increase the cluster's ring too... I'm
> using Riak 2.X though... Any documentation around on how to play with the
> number of servers/partitions?
>
> Thanks,
> Alex
>
> On Fri, May 27, 2016 at 11:11 PM, Johnny Tan <johnnyd...@gmail.com> wrote:
>
>> The docs http://docs.basho.com/riak/kv/2.1.4/configuring/basic/#ring-size
>> seem to imply that there's no easy, non-destructive way to change a
>> cluster's ring size live for Riak-1.4x.
>>
>> I thought about replacing one node at a time, but you can't join a new
>> node or replace an existing one with a node that has a different ring size.
>>
>> I was also thinking of bringing up a completely new cluster with the new
>> ring size, then replicating the data from the original cluster, and
>> taking a quick maintenance window to fail over to the new cluster.
>>
>> One other alternative seems to be to upgrade to 2.0, and then use 2.x's
>> ability to resize the ring.
>>
>> Are these latter two my main options?
>>
>> johnny
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Changing ring size on 1.4 cluster

2016-05-29 Thread Alex De la rosa
Hi there,

I'm interested in knowing how to increase the cluster's ring too... I'm
using Riak 2.X though... Any documentation around on how to play with the
number of servers/partitions?

Thanks,
Alex

On Fri, May 27, 2016 at 11:11 PM, Johnny Tan  wrote:

> The docs http://docs.basho.com/riak/kv/2.1.4/configuring/basic/#ring-size
> seem to imply that there's no easy, non-destructive way to change a
> cluster's ring size live for Riak-1.4x.
>
> I thought about replacing one node at a time, but you can't join a new
> node or replace an existing one with a node that has a different ring size.
>
> I was also thinking of bringing up a completely new cluster with the new
> ring size, then replicating the data from the original cluster, and taking
> a quick maintenance window to fail over to the new cluster.
>
> One other alternative seems to be to upgrade to 2.0, and then use 2.x's
> ability to resize the ring.
>
> Are these latter two my main options?
>
> johnny
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Ranking search results by relevancy

2016-05-25 Thread Alex De la rosa
Hi Luke,

That was not the question... I know that I can use ORs, etc... I wanted to
know how to sort them by relevancy, i.e. by highest match score.

Thanks,
Alex

On Wed, May 25, 2016 at 8:08 PM, Luke Bakken <lbak...@basho.com> wrote:

> Hi Alex,
>
> You can use the HTTP search endpoint to see what information Riak
> returns for Solr queries as well as to try out queries:
> https://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying
>
> Since you're indexing first and last name, I'm not sure what indexing
> a full name buys you on top of that.
>
> It should be possible to combine your queries using OR.
>
> More info about Solr ranking can be found online (such as
> https://wiki.apache.org/solr/SolrRelevancyFAQ).
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Wed, May 18, 2016 at 10:07 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > Hi all,
> >
> > I would like to perform a search on Riak/Solr of people given an input
> > containing their full name (or part of it), like when searching for
> > members in Facebook's search bar.
> >
> > search input [ alex garcia ]
> >
> > results = client.fulltext_search('people', 'firstname_register:*alex* OR
> > lastname_register:*garcia*')
> >
> > this would give me members like:
> >
> > alex garcia
> > alexis garcia
> > alex fernandez
> > jose garcia
> >
> > Is there any way to get these results ranked/ordered by the most precise
> > search? "alex garcia" would be the most relevant because it matches the
> > search input exactly... "alexis garcia" may come second as, even though
> > not an exact match, it is a very similar pattern; the other two would
> > come after as they match only 1 of the 2 search parameters.
> >
> > Would it be convenient to also index fullname_register:alex garcia in
> > order to find exact matches too?
> >
> > Can it be done all at once in just 1 search query? Or should I compile
> > results from 3 queries?
> >
> > result_1 = client.fulltext_search('people', 'fullname_register:alex
> garcia')
> > result_2 = client.fulltext_search('people', 'firstname_register:*alex*
> AND
> > lastname_register:*garcia*')
> > result_3 = client.fulltext_search('people', 'firstname_register:*alex* OR
> > lastname_register:*garcia*')
> >
> > Thanks and Best Regards,
> > Alex
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Ranking search results by relevancy

2016-05-25 Thread Alex De la rosa
Hi, does anybody have an answer to this?

Thanks,
Alex

On Wed, May 18, 2016 at 9:07 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> Hi all,
>
> I would like to perform a search on Riak/Solr of people given an input
> containing their full name (or part of it), like when searching for members
> in Facebook's search bar.
>
> search input [ alex garcia ]
>
> results = client.fulltext_search('people', 'firstname_register:*alex* OR
> lastname_register:*garcia*')
>
> this would give me members like:
>
> alex garcia
> alexis garcia
> alex fernandez
> jose garcia
>
> Is there any way to get these results ranked/ordered by the most precise
> search? "alex garcia" would be the most relevant because it matches the
> search input exactly... "alexis garcia" may come second as, even though not
> an exact match, it is a very similar pattern; the other two would come
> after as they match only 1 of the 2 search parameters.
>
> Would it be convenient to also index *fullname_register:alex garcia* in
> order to find exact matches too?
>
> Can it be done all at once in just 1 search query? Or should I compile
> results from 3 queries?
>
> result_1 = client.fulltext_search('people', 'fullname_register:alex
> garcia')
> result_2 = client.fulltext_search('people', 'firstname_register:*alex* AND
> lastname_register:*garcia*')
> result_3 = client.fulltext_search('people', 'firstname_register:*alex* OR
> lastname_register:*garcia*')
>
> Thanks and Best Regards,
> Alex
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Questions about installing Stanchion for Riak CS

2016-05-23 Thread Alex De la rosa
Hi Luke,

Cool, understood :) Then it's not as troublesome as I thought.

Thanks,
Alex

On Mon, May 23, 2016 at 10:37 PM, Luke Bakken <lbak...@basho.com> wrote:

> On Mon, May 23, 2016 at 11:07 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > So if the node with Stanchion fatally crashes and cannot be recovered, I
> > can install Stanchion on another node and this node will get the "master"
> > role?
>
> Yes. There is no concept of "master" or "slave" with Stanchion, since
> only one Stanchion process should ever be running at a time and
> servicing requests.
>
> > Also, you said that if Stanchion is down it cannot create users and
> > buckets... but can it still create keys inside the existing buckets? And
> > also read data from the nodes?
>
> Yes, since these operations do not involve Stanchion.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Questions about installing Stanchion for Riak CS

2016-05-23 Thread Alex De la rosa
So if the node with Stanchion fatally crashes and cannot be recovered, I
can install Stanchion on another node and this node will get the "master"
role?

Also, you said that if Stanchion is down it cannot create users and
buckets... but can it still create keys inside the existing buckets? And
also read data from the nodes?
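
For what it's worth, a sketch of what that promotion would involve, assuming
the cuttlefish-style riak-cs.conf key from CS 2.x and a hypothetical IP for
the node taking over: install and start Stanchion there, then repoint every
Riak CS node at it and restart them:

stanchion_host = 10.0.0.2:8085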

Thanks,
Alex

On Mon, May 23, 2016 at 9:15 PM, Luke Bakken <lbak...@basho.com> wrote:

> Alex -
>
> You won't be able to create new users or buckets while Stanchion is
> offline. You would follow normal procedures to rebuild Riak KV on the
> crashed node, and in the meantime could bring up Stanchion on an
> existing node.
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Mon, May 23, 2016 at 9:29 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > Hi Luke,
> >
> > OK, understood. What if I don't have a load balancer and the node running
> > Stanchion crashes? What will happen to the cluster, and how do I rebuild
> > it?
> >
> > Thanks,
> > Alex
> >
> > On Mon, May 23, 2016 at 8:09 PM, Luke Bakken <lbak...@basho.com> wrote:
> >>
> >> Hi Alex,
> >>
> >> You should only have one active Stanchion process running in your
> >> cluster, since its purpose is to ensure consistent, ordered operations
> >> with regard to users and buckets. You can have a hot-backup if you
> >> configure a load balancer to proxy requests from the Riak CS
> >> processes.
> >> --
> >> Luke Bakken
> >> Engineer
> >> lbak...@basho.com
> >>
> >>
> >> On Sat, May 21, 2016 at 11:41 AM, Alex De la rosa
> >> <alex.rosa@gmail.com> wrote:
> >> > Hi there,
> >> >
> >> > I'm creating a Riak CS cluster and I got some questions about the
> >> > following
> >> > sentence from the documentation:
> >> >
> >> > Riak KV and Riak CS must be installed on each node in your cluster.
> >> > Stanchion, however, needs to be installed on only one node.
> >> >
> >> > Is this statement saying that only 1 node can have Stanchion? Or can
> >> > it be placed on more servers? Like, Riak KV and Riak CS must be on 5
> >> > out of 5 nodes, but Stanchion can be on 1 to 5 out of 5 nodes?
> >> >
> >> > If it means that ONLY 1 out of 5 nodes can have Stanchion and the
> >> > other 4 nodes are not allowed to have it installed, what happens if
> >> > the "master" node that has Stanchion crashes?
> >> >
> >> > Thanks and Best Regards,
> >> > Alex
> >> >
> >> > ___
> >> > riak-users mailing list
> >> > riak-users@lists.basho.com
> >> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >> >
> >
> >
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Questions about installing Stanchion for Riak CS

2016-05-23 Thread Alex De la rosa
Hi Luke,

OK, understood. What if I don't have a load balancer and the node running
Stanchion crashes? What will happen to the cluster, and how do I rebuild it?

Thanks,
Alex

On Mon, May 23, 2016 at 8:09 PM, Luke Bakken <lbak...@basho.com> wrote:

> Hi Alex,
>
> You should only have one active Stanchion process running in your
> cluster, since its purpose is to ensure consistent, ordered operations
> with regard to users and buckets. You can have a hot-backup if you
> configure a load balancer to proxy requests from the Riak CS
> processes.
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Sat, May 21, 2016 at 11:41 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > Hi there,
> >
> > I'm creating a Riak CS cluster and I got some questions about the
> following
> > sentence from the documentation:
> >
> > Riak KV and Riak CS must be installed on each node in your cluster.
> > Stanchion, however, needs to be installed on only one node.
> >
> > Is this statement saying that only 1 node can have Stanchion? Or can it
> > be placed on more servers? Like, Riak KV and Riak CS must be on 5 out of
> > 5 nodes, but Stanchion can be on 1 to 5 out of 5 nodes?
> >
> > If it means that ONLY 1 out of 5 nodes can have Stanchion and the other
> > 4 nodes are not allowed to have it installed, what happens if the
> > "master" node that has Stanchion crashes?
> >
> > Thanks and Best Regards,
> > Alex
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Questions about installing Stanchion for Riak CS

2016-05-21 Thread Alex De la rosa
Hi there,

I'm creating a Riak CS cluster and I got some questions about the following
sentence from the documentation:

*Riak KV and Riak CS must be installed on each node in your cluster.
Stanchion, however, needs to be installed on only one node.*

Is this statement saying that only 1 node can have Stanchion? Or can it be
placed on more servers? Like, Riak KV and Riak CS *must* be on 5 out of
5 nodes, but Stanchion *can* be on 1 to 5 out of 5 nodes?

If it means that *ONLY* 1 out of 5 nodes *can* have Stanchion and the
other 4 nodes are not allowed to have it installed, what happens if the
"master" node that has Stanchion crashes?

Thanks and Best Regards,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Ranking search results by relevancy

2016-05-18 Thread Alex De la rosa
Hi all,

I would like to perform a search on Riak/Solr of people given an input
containing their full name (or part of it), like when searching for members
in Facebook's search bar.

search input [ alex garcia ]

results = client.fulltext_search('people', 'firstname_register:*alex* OR
lastname_register:*garcia*')

this would give me members like:

alex garcia
alexis garcia
alex fernandez
jose garcia

Is there any way to get these results ranked/ordered by the most precise
search? "alex garcia" would be the most relevant because it matches the
search input exactly... "alexis garcia" may come second as, even though not
an exact match, it is a very similar pattern; the other two would come after
as they match only 1 of the 2 search parameters.

Would it be convenient to also index *fullname_register:alex garcia* in
order to find exact matches too?

Can it be done all at once in just 1 search query? Or should I compile
results from 3 queries?

result_1 = client.fulltext_search('people', 'fullname_register:alex garcia')
result_2 = client.fulltext_search('people', 'firstname_register:*alex* AND
lastname_register:*garcia*')
result_3 = client.fulltext_search('people', 'firstname_register:*alex* OR
lastname_register:*garcia*')

Thanks and Best Regards,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Querying SOLR outside of Riak

2016-05-16 Thread Alex De la rosa
Hi Luke,

Yes, I think I will go with /search instead; it should be enough.

Thanks,
Alex

On Mon, May 16, 2016 at 6:55 PM, Luke Bakken <lbak...@basho.com> wrote:

> Hi Alex,
>
> Benchmarking is the only sure way to know if you need to add this
> additional complexity to your system for your own use-case or if
> search in Riak 2.0 will suffice. I suspect the latter will be true.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Mon, May 16, 2016 at 6:08 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > Hi Fred,
> >
> > Yeah, I realised that on my testing node with an n_val of 3 I was getting
> > three times the results in the count... that is not ideal.
> >
> > I was just concerned about how much extra work it would be for Riak to
> > talk to SOLR and compile the data, versus hitting SOLR directly... From
> > my tests these days, the /search interface seems pretty fast and it may
> > not be a real problem for Riak... but I still have my fears from Riak
> > 0.14 and Riak 1.4.
> >
> > Thanks,
> > Alex
> >
> > On Mon, May 16, 2016 at 4:49 PM, Fred Dushin <fdus...@basho.com> wrote:
> >>
> >> Hi Alex,
> >>
> >> Other people have chimed in, but let me repeat that while the
> >> internal_solr interface is accessible via HTTP (and needs to be, at
> least
> >> from Riak processes), you cannot use that interface to query Solr and
> expect
> >> a correct result set (unless you are using a single node cluster with an
> >> n_val of 1).
> >>
> >> When you run your queries through Riak, Yokozuna, the component that
> >> interfaces with Solr, will use a riak_core coverage plan to generate a
> >> distributed Solr filter query across the entire cluster that guarantees
> that
> >> for any document stored on all Solr nodes in the cluster, the query will
> >> select one (and only one) replica.  If you were to run your query
> locally
> >> using the internal_solr interface, your query would not span the cluster
> >> (likely missing documents on other nodes) and may have duplicates
> (e.g., in
> >> degenerate cases where you have more than one replica on the same node).
> >>
> >> I hope that helps explain why using the internal_solr interface is not
> >> only not recommended, it's also not going to give you the results you
> >> expect.
> >>
> >> -Fred
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Querying SOLR outside of Riak

2016-05-16 Thread Alex De la rosa
Hi Fred,

Yeah, I realised that on my testing node with an n_val of 3 I was getting
three times the results in the count... that is not ideal.

I was just concerned about how much extra work it would be for Riak to talk
to SOLR and compile the data, versus hitting SOLR directly... From my tests
these days, the /search interface seems pretty fast and it may not be a real
problem for Riak... but I still have my fears from Riak 0.14 and Riak 1.4.

Thanks,
Alex

On Mon, May 16, 2016 at 4:49 PM, Fred Dushin <fdus...@basho.com> wrote:

> Hi Alex,
>
> Other people have chimed in, but let me repeat that while the
> internal_solr interface is accessible via HTTP (and needs to be, at least
> from Riak processes), you cannot use that interface to query Solr and
> expect a correct result set (unless you are using a single node cluster
> with an n_val of 1).
>
> When you run your queries through Riak, Yokozuna, the component that
> interfaces with Solr, will use a riak_core coverage plan to generate a
> distributed Solr filter query across the entire cluster that guarantees
> that for any document stored on all Solr nodes in the cluster, the query
> will select one (and only one) replica.  If you were to run your query
> locally using the internal_solr interface, your query would not span the
> cluster (likely missing documents on other nodes) and may have duplicates
> (e.g., in degenerate cases where you have more than one replica on the same
> node).
>
> I hope that helps explain why using the internal_solr interface is not
> only not recommended, it's also not going to give you the results you
> expect.
>
> -Fred
>
> On May 15, 2016, at 4:18 AM, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
> Hi Vitaly,
>
> I know that you can access search via HTTP through Riak like this:
>
> http://localhost:8098/search/query/famous?wt=json&q=leader:true AND
> age_i:[25 TO *]
>
> I didn't find documentation about this, but according to your words I
> could access SOLR directly like this?
>
> http://localhost:8093/internal_solr/famous/select?wt=json&q=leader:true
> AND age_i:[25 TO *]
>
> If I go through "8098/search" would it be adding extra stress into the
> Riak cluster? Or is recommended to go through "8098/search" instead of
> "8093/internal_solr"??
>
> I just want to see if I can make use of SOLR with an external mapreduce
> platform (Disco) without giving extra stress to Riak.
>
> Thanks,
> Rohman
>
> On Sun, May 15, 2016 at 12:07 PM, Vitaly <13vitam...@gmail.com> wrote:
>
>> There is, you can *query* Solr directly via HTTP, at least as of Riak
>> 2.0.x
>>
>> Have a look at http://<host>:8093/internal_solr/#/ and
>> http://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying
>>
>> Vitaly
>>
>>
>> On Sun, May 15, 2016 at 10:49 AM, Alex De la rosa <
>> alex.rosa@gmail.com> wrote:
>>
>>> Does nobody know of a way to access SOLR directly, without going
>>> through Riak's interface?
>>>
>>> Thanks,
>>> Alex
>>>
>>> On Fri, May 13, 2016 at 11:07 PM, Alex De la rosa <
>>> alex.rosa@gmail.com> wrote:
>>>
>>>> Hi all,
>>>>
>>>> If I want to create a Disco cluster [ http://discoproject.org ] to
>>>> build statistics and compile data by hitting Riak's SOLR directly without
>>>> using Riak, how can I do it?
>>>>
>>>> In this way, I would leave Riak mainly for data IO (post/get) and leave
>>>> the heavy duty of searching and compiling data to Disco; so Riak's
>>>> performance shouldn't be affected for searching as mainly it will store and
>>>> retrieve data only.
>>>>
>>>> Thanks,
>>>> Alex
>>>>
>>>
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search with case-insensitive searches

2016-05-15 Thread Alex De la rosa
Yeah... I realised it... either way, my use case doesn't really require
case-sensitive searches, so it's good for me ;)

Thanks,
Alex

On Sun, May 15, 2016 at 2:06 PM, Vitaly <13vitam...@gmail.com> wrote:

> If you take a closer look, you'll notice that the idea is to convert the
> field to lower case on insertion/indexing, and then look for a lowercase
> match during search. Which means you won't be able to use the same field
> for case-sensitive search. Unfortunately, I'm not aware of other ways of
> implementing case-insensitive search in Solr.
>
> Regards,
> Vitaly
>
> On Sun, May 15, 2016 at 12:51 PM, Alex De la rosa <alex.rosa@gmail.com
> > wrote:
>
>> I see, cool :) thanks for the help
>>
>> Alex
>>
>> On Sun, May 15, 2016 at 1:49 PM, Vitaly <13vitam...@gmail.com> wrote:
>>
>>> You can use a case-insensitive field type for this, for example
>>>
>>> <fieldType name="..." class="solr.TextField"
>>> sortMissingLast="true" omitNorms="true">
>>>   <analyzer>
>>>     <tokenizer class="solr.KeywordTokenizerFactory"/>
>>>     <filter class="solr.LowerCaseFilterFactory"/>
>>>   </analyzer>
>>> </fieldType>
>>>
>>> Of course, you'll have to adjust your datatype/schema to make use of the
>>> new type.
>>>
>>> Regards,
>>> Vitaly
>>>
>>> On Sun, May 15, 2016 at 12:41 PM, Alex De la rosa <
>>> alex.rosa@gmail.com> wrote:
>>>
>>>> Hi there,
>>>>
>>>> Is there a way to make case-insensitive search queries? The following
>>>> query works fine, but it only finds the entry when sending "Alex", not
>>>> finding anything with "ALEX" or "alex" or "AlEx", etc...
>>>>
>>>>
>>>> http://xxx.xxx.xxx.xxx:8098/search/query/customers?wt=json&q=firstname_register:Alex
>>>>
>>>> Thanks,
>>>> Alex
>>>>
>>>> ___
>>>> riak-users mailing list
>>>> riak-users@lists.basho.com
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>>
>>>>
>>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search with case-insensitive searches

2016-05-15 Thread Alex De la rosa
I see, cool :) thanks for the help

Alex

On Sun, May 15, 2016 at 1:49 PM, Vitaly <13vitam...@gmail.com> wrote:

> You can use a case-insensitive field type for this, for example
>
> <fieldType name="..." class="solr.TextField"
> sortMissingLast="true" omitNorms="true">
>   <analyzer>
>     <tokenizer class="solr.KeywordTokenizerFactory"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
>
> Of course, you'll have to adjust your datatype/schema to make use of the
> new type.
>
> Regards,
> Vitaly
>
> On Sun, May 15, 2016 at 12:41 PM, Alex De la rosa <alex.rosa@gmail.com
> > wrote:
>
>> Hi there,
>>
>> Is there a way to make case-insensitive search queries? The following
>> query works fine, but it only finds the entry when sending "Alex", not
>> finding anything with "ALEX" or "alex" or "AlEx", etc...
>>
>>
>> http://xxx.xxx.xxx.xxx:8098/search/query/customers?wt=json&q=firstname_register:Alex
>>
>> Thanks,
>> Alex
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak search with case-insensitive searches

2016-05-15 Thread Alex De la rosa
Hi there,

Is there a way to make case-insensitive search queries? The following query
works fine, but it only finds the entry when sending "Alex", not finding
anything with "ALEX" or "alex" or "AlEx", etc...

http://xxx.xxx.xxx.xxx:8098/search/query/customers?wt=json&q=firstname_register:Alex

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Querying SOLR outside of Riak

2016-05-15 Thread Alex De la rosa
I think it's just my fear of querying Riak... I had bad experiences in the
past with Riak 0.14 and Riak 1.4 when searching with MapReduce, Search or
2-i... nodes crashing, etc... so I try to avoid searching as much as
possible.

Maybe my fear is not justified anymore on Riak 2.0 and its search via
SOLR... but still... I want to avoid issues with searching.

So my thinking was that if I can query the SOLR index itself without Riak
even noticing, then I can leave Riak simply to put/get data and the
searching would be done by Disco hitting SOLR itself... no extra work for
Riak on searching anything.

Thanks,
Alex

On Sun, May 15, 2016 at 12:48 PM, Vitaly <13vitam...@gmail.com> wrote:

> Keep in mind that the /search endpoint returns consolidated results (i.e.
> a query runs over all nodes of a cluster), while /internal_solr is only for
> the node you run it on.
>
> I'm not sure what you mean by "extra stress", as running queries is
> exactly what the /search endpoint is meant for. Having said that, I would
> think that any activity can add stress, you should measure what kind of
> stress your system creates, and plan your cluster accordingly.
>
> Regards,
> Vitaly
>
> On Sun, May 15, 2016 at 11:18 AM, Alex De la rosa <alex.rosa@gmail.com
> > wrote:
>
>> Hi Vitaly,
>>
>> I know that you can access search via HTTP through Riak like this:
>>
>> http://localhost:8098/search/query/famous?wt=json&q=leader:true AND
>> age_i:[25 TO *]
>>
>> I didn't find documentation about this, but according to your words I
>> could access SOLR directly like this?
>>
>> http://localhost:8093/internal_solr/famous/select?wt=json&q=leader:true
>> AND age_i:[25 TO *]
>>
>> If I go through "8098/search", would it be adding extra stress to the
>> Riak cluster? Or is it recommended to go through "8098/search" instead of
>> "8093/internal_solr"?
>>
>> I just want to see if I can make use of SOLR with an external mapreduce
>> platform (Disco) without giving extra stress to Riak.
>>
>> Thanks,
>> Rohman
>>
>> On Sun, May 15, 2016 at 12:07 PM, Vitaly <13vitam...@gmail.com> wrote:
>>
>>> There is, you can *query* Solr directly via HTTP, at least as of Riak
>>> 2.0.x
>>>
>>> Have a look at http://<host>:8093/internal_solr/#/ and
>>> http://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying
>>>
>>> Vitaly
>>>
>>>
>>> On Sun, May 15, 2016 at 10:49 AM, Alex De la rosa <
>>> alex.rosa@gmail.com> wrote:
>>>
>>>> Does nobody know of a way to access SOLR directly, without going
>>>> through Riak's interface?
>>>>
>>>> Thanks,
>>>> Alex
>>>>
>>>> On Fri, May 13, 2016 at 11:07 PM, Alex De la rosa <
>>>> alex.rosa@gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> If I want to create a Disco cluster [ http://discoproject.org ] to
>>>>> build statistics and compile data by hitting Riak's SOLR directly without
>>>>> using Riak, how can I do it?
>>>>>
>>>>> In this way, I would leave Riak mainly for data IO (post/get) and
>>>>> leave the heavy duty of searching and compiling data to Disco; so Riak's
>>>>> performance shouldn't be affected for searching as mainly it will store 
>>>>> and
>>>>> retrieve data only.
>>>>>
>>>>> Thanks,
>>>>> Alex
>>>>>
>>>>
>>>>
>>>> ___
>>>> riak-users mailing list
>>>> riak-users@lists.basho.com
>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>>
>>>>
>>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Querying SOLR outside of Riak

2016-05-15 Thread Alex De la rosa
Hi Vitaly,

I know that you can access search via HTTP through Riak like this:

http://localhost:8098/search/query/famous?wt=json&q=leader:true AND
age_i:[25 TO *]

I didn't find documentation about this, but according to your words I could
access SOLR directly like this?

http://localhost:8093/internal_solr/famous/select?wt=json&q=leader:true AND
age_i:[25 TO *]
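
A sketch of the same /search query issued from Python with the requests
library, the query string passed as a properly URL-encoded q parameter
(local node assumed):

import requests

resp = requests.get(
    'http://localhost:8098/search/query/famous',
    params={'wt': 'json', 'q': 'leader:true AND age_i:[25 TO *]'},
)
# The /search endpoint answers in Solr's JSON response format.
print(resp.json()['response']['numFound'])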

If I go through "8098/search", would it be adding extra stress to the Riak
cluster? Or is it recommended to go through "8098/search" instead of
"8093/internal_solr"?

I just want to see if I can make use of SOLR with an external mapreduce
platform (Disco) without giving extra stress to Riak.

Thanks,
Rohman

On Sun, May 15, 2016 at 12:07 PM, Vitaly <13vitam...@gmail.com> wrote:

> There is, you can *query* Solr directly via HTTP, at least as of Riak
> 2.0.x
>
> Have a look at http://<host>:8093/internal_solr/#/ and
> http://docs.basho.com/riak/kv/2.1.4/developing/usage/search/#querying
>
> Vitaly
>
>
> On Sun, May 15, 2016 at 10:49 AM, Alex De la rosa <alex.rosa@gmail.com
> > wrote:
>
>> Does nobody know of a way to access SOLR directly, without going through
>> Riak's interface?
>>
>> Thanks,
>> Alex
>>
>> On Fri, May 13, 2016 at 11:07 PM, Alex De la rosa <
>> alex.rosa@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> If I want to create a Disco cluster [ http://discoproject.org ] to
>>> build statistics and compile data by hitting Riak's SOLR directly without
>>> using Riak, how can I do it?
>>>
>>> In this way, I would leave Riak mainly for data IO (post/get) and leave
>>> the heavy duty of searching and compiling data to Disco; so Riak's
>>> performance shouldn't be affected for searching as mainly it will store and
>>> retrieve data only.
>>>
>>> Thanks,
>>> Alex
>>>
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Querying SOLR outside of Riak

2016-05-15 Thread Alex De la rosa
Does nobody know of a way to access SOLR directly, without going through
Riak's interface?

Thanks,
Alex

On Fri, May 13, 2016 at 11:07 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> Hi all,
>
> If I want to create a Disco cluster [ http://discoproject.org ] to build
> statistics and compile data by hitting Riak's SOLR directly without using
> Riak, how can I do it?
>
> In this way, I would leave Riak mainly for data IO (post/get) and leave
> the heavy duty of searching and compiling data to Disco; so Riak's
> performance shouldn't be affected for searching as mainly it will store and
> retrieve data only.
>
> Thanks,
> Alex
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search on an index limited to only 1 bucket

2016-05-13 Thread Alex De la rosa
Oh, nice snippets! thanks Drew!

Alex

On Fri, May 13, 2016 at 11:28 PM, Drew Kerrigan <d...@kerrigan.io> wrote:

> @Alex please kindly take a look at the default solr schema for Riak
> Search. You should have based your custom schema on this (if you've created
> a custom schema):
> https://docs.basho.com/riak/kv/2.1.4/developing/usage/search-schemas/ ->
> https://raw.githubusercontent.com/basho/yokozuna/develop/priv/default_schema.xml
>
> Specifically take a look at these lines:
> https://github.com/basho/yokozuna/blob/develop/priv/default_schema.xml#L124-L131
> (This is where the _yz_rt/rb/rk fields are defined to be indexed)
>
> And these:
> https://github.com/basho/yokozuna/blob/develop/priv/default_schema.xml#L101-L104
> - These dynamic fields catch all Riak DTs because the solr field names of
> data types automatically get their type name appended to the end (as you
> noticed with your reference to "likes_counter" in your own index).
>
> As you can see in the default schema, all sets are automatically indexed
> as multivalued.
>
> Hopefully this info takes away some of the magic for you ;-)
>
> Drew
>
> On Fri, May 13, 2016 at 12:16 PM Vitaly <13vitam...@gmail.com> wrote:
>
>> In general, Riak/Solr is capable of indexing multi-valued properties
>> (e.g. lists). You're right in thinking that multiValued = "true" should be
>> used for it. That said, check if it works with your client library (it's
>> Python, isn't it?). I believe it does.
>>
>> Regards,
>> Vitaly
>>
>> On Fri, May 13, 2016 at 9:59 PM, Alex De la rosa <alex.rosa@gmail.com
>> > wrote:
>>
>>> Another question... if I have a set of tags for the elements... like
>>> photo.set['tags'] with things like: ["holidays", "Hawaii", "2016"]... will
>>> it be indexed like this?
>>>
>>> <field name="..." ... multiValued="true" />
>>>
>>> Thanks,
>>> Alex
>>>
>>> On Fri, May 13, 2016 at 10:52 PM, Alex De la rosa <
>>> alex.rosa@gmail.com> wrote:
>>>
>>>> Oh!! Silly me... *_yz_rb* and *_yz_rt*... why didn't I think of
>>>> that?...
>>>>
>>>> thanks also for the "*:*" tip ; )
>>>>
>>>> Thanks!
>>>> Alex
>>>>
>>>> On Fri, May 13, 2016 at 10:50 PM, Vitaly <13vitam...@gmail.com> wrote:
>>>>
>>>>> Hi Alex,
>>>>>
>>>>> 'likes_counter:[100 TO *] AND _yz_rb:photos' will limit query results
>>>>> to the photos bucket only. Similarly, "_yz_rt" is for a bucket type.
>>>>>
>>>>> Searching for anything in an index can be done with  "*:*" (any field,
>>>>> any value).
>>>>>
>>>>> Regards,
>>>>> Vitaly
>>>>>
>>>>> On Fri, May 13, 2016 at 9:40 PM, Alex De la rosa <
>>>>> alex.rosa@gmail.com> wrote:
>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> Imagine I have an index called "*posts*" where I index the following
>>>>>> fields:
>>>>>>
>>>>>> <field name="..." ... />
>>>>>> <field name="..." ... />
>>>>>> <field name="..." ... stored="false" />
>>>>>>
>>>>>> and I reuse the index in 3 buckets: "status", "photos" and
>>>>>> "videos"... then I do the following:
>>>>>>
>>>>>> *results = client.fulltext_search('posts', 'likes_counter:[100 TO
>>>>>> *]', sort='likes_counter desc', rows=10)*
>>>>>>
>>>>>> This query would give me the top10 most liked items (can be statuses,
>>>>>> photos or videos) with at least 100 likes. But how could I limit the
>>>>>> resultset to only the "photos" bucket?? The goal is to get the Top10 
>>>>>> liked
>>>>>> photos without creating an index for itself... as is good to also be able
>>>>>> to query the top10 items in general. Any way to do it?
>>>>>>
>>>>>> In another hand... does somebody know how to do the same query but
>>>>>> without the [100 TO *]?? I leave it empty?
>>>>>>
>>>>>> *results = client.fulltext_search('**posts**', '',
>>>>>> sort='likes_counter desc', rows=10)*
>>>>>>
>>>>>> Thanks,
>>>>>> Alex
>>>>>>
>>>>>> ___
>>>>>> riak-users mailing list
>>>>>> riak-users@lists.basho.com
>>>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search on an index limited to only 1 bucket

2016-05-13 Thread Alex De la rosa
Yes, using the Python client. I'm able to store sets into Riak without
problem; I just wondered if indexing them in SOLR would be as simple as
adding the multiValued attribute to the "set" field... seems it's going to
be that way :)

Thanks,
Alex

On Fri, May 13, 2016 at 11:15 PM, Vitaly <13vitam...@gmail.com> wrote:

> In general, Riak/Solr is capable of indexing multi-valued properties (e.g.
> lists). You're right in thinking that multiValued = "true" should be used for
> it. That said, check if it works with your client library (it's Python,
> isn't it?). I believe it does.
>
> Regards,
> Vitaly
>
> On Fri, May 13, 2016 at 9:59 PM, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
>> Another question... if I have a set of tags for the elements... like
>> photo.set['tags'] with things like: ["holidays", "Hawaii", "2016"]... will
>> it be indexed like this?
>>
>> <field name="..." ... multiValued="true" />
>>
>> Thanks,
>> Alex
>>
>> On Fri, May 13, 2016 at 10:52 PM, Alex De la rosa <
>> alex.rosa@gmail.com> wrote:
>>
>>> Oh!! Silly me... *_yz_rb* and *_yz_rt*... why didn't I think of that?...
>>>
>>> thanks also for the "*:*" tip ; )
>>>
>>> Thanks!
>>> Alex
>>>
>>> On Fri, May 13, 2016 at 10:50 PM, Vitaly <13vitam...@gmail.com> wrote:
>>>
>>>> Hi Alex,
>>>>
>>>> 'likes_counter:[100 TO *] AND _yz_rb:photos' will limit query results
>>>> to the photos bucket only. Similarly, "_yz_rt" is for a bucket type.
>>>>
>>>> Searching for anything in an index can be done with  "*:*" (any field,
>>>> any value).
>>>>
>>>> Regards,
>>>> Vitaly
>>>>
>>>> On Fri, May 13, 2016 at 9:40 PM, Alex De la rosa <
>>>> alex.rosa@gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> Imagine I have an index called "*posts*" where I index the following
>>>>> fields:
>>>>>
>>>>> <field name="..." ... />
>>>>> <field name="..." ... />
>>>>> <field name="..." ... stored="false" />
>>>>>
>>>>> and I reuse the index in 3 buckets: "status", "photos" and "videos"...
>>>>> then I do the following:
>>>>>
>>>>> *results = client.fulltext_search('posts', 'likes_counter:[100 TO *]',
>>>>> sort='likes_counter desc', rows=10)*
>>>>>
>>>>> This query would give me the top10 most liked items (can be statuses,
>>>>> photos or videos) with at least 100 likes. But how could I limit the
>>>>> resultset to only the "photos" bucket?? The goal is to get the Top10 liked
>>>>> photos without creating an index for itself... as is good to also be able
>>>>> to query the top10 items in general. Any way to do it?
>>>>>
>>>>> In another hand... does somebody know how to do the same query but
>>>>> without the [100 TO *]?? I leave it empty?
>>>>>
>>>>> *results = client.fulltext_search('**posts**', '',
>>>>> sort='likes_counter desc', rows=10)*
>>>>>
>>>>> Thanks,
>>>>> Alex
>>>>>
>>>>> ___
>>>>> riak-users mailing list
>>>>> riak-users@lists.basho.com
>>>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>>>
>>>>>
>>>>
>>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Querying SOLR outside of Riak

2016-05-13 Thread Alex De la rosa
Hi all,

If I want to create a Disco cluster [ http://discoproject.org ] to build
statistics and compile data by hitting Riak's SOLR directly without using
Riak, how can I do it?

In this way, I would leave Riak mainly for data IO (post/get) and leave the
heavy duty of searching and compiling data to Disco; so Riak's performance
shouldn't be affected for searching as mainly it will store and retrieve
data only.

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search on an index limited to only 1 bucket

2016-05-13 Thread Alex De la rosa
Another question... if I have a set of tags for the elements... like
photo.set['tags'] with things like: ["holidays", "Hawaii", "2016"]... will
it be indexed like this?

<field name="..." ... multiValued="true" />

Thanks,
Alex

On Fri, May 13, 2016 at 10:52 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> Oh!! Silly me... *_yz_rb* and *_yz_rt*... why didn't I think of that?...
>
> thanks also for the "*:*" tip ; )
>
> Thanks!
> Alex
>
> On Fri, May 13, 2016 at 10:50 PM, Vitaly <13vitam...@gmail.com> wrote:
>
>> Hi Alex,
>>
>> 'likes_counter:[100 TO *] AND _yz_rb:photos' will limit query results to
>> the photos bucket only. Similarly, "_yz_rt" is for a bucket type.
>>
>> Searching for anything in an index can be done with  "*:*" (any field,
>> any value).
>>
>> Regards,
>> Vitaly
>>
>> On Fri, May 13, 2016 at 9:40 PM, Alex De la rosa <alex.rosa@gmail.com
>> > wrote:
>>
>>> Hi all,
>>>
>>> Imagine I have an index called "*posts*" where I index the following
>>> fields:
>>>
>>> <field name="..." ... />
>>> <field name="..." ... />
>>> <field name="..." ... stored="false" />
>>>
>>> and I reuse the index in 3 buckets: "status", "photos" and "videos"...
>>> then I do the following:
>>>
>>> *results = client.fulltext_search('posts', 'likes_counter:[100 TO *]',
>>> sort='likes_counter desc', rows=10)*
>>>
>>> This query would give me the top10 most liked items (can be statuses,
>>> photos or videos) with at least 100 likes. But how could I limit the
>>> resultset to only the "photos" bucket?? The goal is to get the Top10 liked
>>> photos without creating an index for itself... as is good to also be able
>>> to query the top10 items in general. Any way to do it?
>>>
>>> In another hand... does somebody know how to do the same query but
>>> without the [100 TO *]?? I leave it empty?
>>>
>>> *results = client.fulltext_search('**posts**', '', sort='likes_counter
>>> desc', rows=10)*
>>>
>>> Thanks,
>>> Alex
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak search on an index limited to only 1 bucket

2016-05-13 Thread Alex De la rosa
Oh!! Silly me... *_yz_rb* and *_yz_rt*... why didn't I think of that?...

thanks also for the "*:*" tip ; )
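
Putting both tips together, the two queries from the original question would
look roughly like this (same Python client as in the thread):

# Top 10 most-liked photos only: filter the shared 'posts' index by bucket.
top_photos = client.fulltext_search(
    'posts', 'likes_counter:[100 TO *] AND _yz_rb:photos',
    sort='likes_counter desc', rows=10)

# Top 10 items overall, no range predicate: match everything with *:*.
top_items = client.fulltext_search(
    'posts', '*:*', sort='likes_counter desc', rows=10)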

Thanks!
Alex

On Fri, May 13, 2016 at 10:50 PM, Vitaly <13vitam...@gmail.com> wrote:

> Hi Alex,
>
> 'likes_counter:[100 TO *] AND _yz_rb:photos' will limit query results to
> the photos bucket only. Similarly, "_yz_rt" is for a bucket type.
>
> Searching for anything in an index can be done with  "*:*" (any field, any
> value).
>
> Regards,
> Vitaly
>
> On Fri, May 13, 2016 at 9:40 PM, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
>> Hi all,
>>
>> Imagine I have an index called "*posts*" where I index the following
>> fields:
>>
>> <field name="..." ... />
>> <field name="..." ... />
>> <field name="..." ... stored="false" />
>>
>> and I reuse the index in 3 buckets: "status", "photos" and "videos"...
>> then I do the following:
>>
>> *results = client.fulltext_search('posts', 'likes_counter:[100 TO *]',
>> sort='likes_counter desc', rows=10)*
>>
>> This query would give me the top10 most liked items (they can be statuses,
>> photos or videos) with at least 100 likes. But how could I limit the
>> result set to only the "photos" bucket? The goal is to get the Top10 liked
>> photos without creating a dedicated index for them... as it's good to also
>> be able to query the top10 items in general. Any way to do it?
>>
>> On another note... does somebody know how to do the same query but
>> without the [100 TO *]? Do I leave it empty?
>>
>> *results = client.fulltext_search('**posts**', '', sort='likes_counter
>> desc', rows=10)*
>>
>> Thanks,
>> Alex
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak search on an index limited to only 1 bucket

2016-05-13 Thread Alex De la rosa
Hi all,

Imagine I have an index called "*posts*" where I index the following fields:

<field name="..." ... />
<field name="..." ... />
<field name="..." ... stored="false" />





and I reuse the index in 3 buckets: "status", "photos" and "videos"... then
I do the following:

*results = client.fulltext_search('posts', 'likes_counter:[100 TO *]',
sort='likes_counter desc', rows=10)*

This query would give me the top10 most liked items (they can be statuses,
photos or videos) with at least 100 likes. But how could I limit the
result set to only the "photos" bucket? The goal is to get the Top10 liked
photos without creating a dedicated index for them... as it's good to also
be able to query the top10 items in general. Any way to do it?

On another note... does somebody know how to do the same query but without
the [100 TO *]? Do I leave it empty?

*results = client.fulltext_search('**posts**', '', sort='likes_counter
desc', rows=10)*

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Schemas, is worth to store values in SOLR?

2016-05-11 Thread Alex De la rosa
My use case for searching is mainly for internal purposes, rankings and
statistics (all that data is pre-compiled and stored into final objects for
the app to display)... so I think it's best not to store anything in SOLR
and just fetch keys to compile the data when required.
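
A sketch of that pattern with the Python client: fl is a standard Solr
parameter limiting the returned fields, and _yz_rb/_yz_rk are the built-in
bucket/key fields Riak Search keeps for every indexed object (buckets are
assumed to live under the default bucket type here):

results = client.fulltext_search('posts', 'likes_counter:[100 TO *]',
                                 fl='_yz_rb,_yz_rk', rows=10)
# Fetch the full objects from Riak KV using the returned bucket/key pairs.
objects = [client.bucket(doc['_yz_rb']).get(doc['_yz_rk']).data
           for doc in results['docs']]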

Thanks,
Alex

On Wed, May 11, 2016 at 10:40 PM, Alexander Sicular <sicul...@basho.com>
wrote:

> Those are exactly the two options and opinions vary generally based on use
> case. Storing the data not only takes up more space but also more IO, which
> makes things slower not only at read time but, more crucially, at write
> time.
>
> Often people will take a hybrid approach and store certain elements like,
> say, for blog posts, the author, publish date and title fields. Yet they
> will leave the body out of the Solr index. That way you could quickly
> generate lists of posts by title and only fetch the body when the post is
> clicked through.
>
> What is your use case?
>
> Best,
> Alexander
>
> On Wednesday, May 11, 2016, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
>> Hi all,
>>
>> When creating a SOLR schema for Riak Search, we can choose whether or not
>> to store the data we are indexing, for example:
>>
>> 
>>
>> I know that the point of having the value stored is to be able to get it
>> returned automatically when doing a search query... but that implies using more
>> disk to store data that may never be searched, and making the return
>> slower as more bytes are required to get the data.
>>
>> Would it be better to just index data but not store the values, returning
>> only Riak IDs (_yz_id) and then doing a multi-get in the client/API to
>> fetch the objects for the final response?
>>
>> Or would it be better to store the values in SOLR so they will be already
>> fetched when searching?
>>
>> What would give better performance, or make more sense in terms of disk space,
>> on an application where you normally won't be doing much searching (all data
>> is more or less discoverable without searching, using GETs)?
>>
>> Thanks and Best Regards,
>> Alex
>>
>
>
> --
>
>
> Alexander Sicular
> Solutions Architect
> Basho Technologies
> 9175130679
> @siculars
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Schemas, is it worth storing values in SOLR?

2016-05-11 Thread Alex De la rosa
Hi all,

When creating a SOLR schema for Riak Search, we can choose whether or not
to store the data we are indexing, for example:



I know that the point of having the value stored is to be able to get it
returned automatically when doing a search query... but that implies using more
disk to store data that may never be searched, and making the return
slower as more bytes are required to get the data.

Would it be better to just index data but not store the values, returning
only Riak IDs (_yz_id) and then doing a multi-get in the client/API to
fetch the objects for the final response?

Or would it be better to store the values in SOLR so they will be already
fetched when searching?

What would give better performance, or make more sense in terms of disk space, on
an application where you normally won't be doing much searching (all data is
more or less discoverable without searching, using GETs)?

Thanks and Best Regards,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: issue creating a custom index/schema

2016-05-09 Thread Alex De la rosa
Ok, definitely I won't need that spatialContextFactory:

Solr supports polygons via JTS Topology Suite, which does not come with
Solr. It's a JAR file that you need to put on Solr's classpath (but not via
the standard solrconfig.xml mechanisms). If you intend to use those shapes,
set this attribute to
org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory. (note:
prior to Solr 6, the "org.locationtech.spatial4j" part was
"com.spatial4j.core").

So I believe I can go on with your snippet. Thanks!
Alex
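
Putting the two fixes from this thread together, the relevant fieldType definitions would look roughly like this (attribute values are taken from the messages above; treat this as a sketch to verify against your Solr version):

<types>
  <!-- Defines the 'int' type; without it Solr rejects the schema with
       "Unknown fieldType 'int' specified on field points". -->
  <fieldType name="int" class="solr.TrieIntField" precisionStep="0"
             positionIncrementGap="0"/>
  <!-- Spatial type without spatialContextFactory, so the JTS jar is not
       required on Solr's classpath. -->
  <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
             distErrPct="0.025" maxDistErr="0.09" units="degrees"/>
</types>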

On Mon, May 9, 2016 at 9:26 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> Thanks Jorge,
>
> I was reading the Solr documentation and it seems they changed the
> spatialContextFactory in their latest version:
>
>
> https://cwiki.apache.org/confluence/display/solr/Spatial+Search#SpatialSearch-SpatialRecursivePrefixTreeFieldType(abbreviatedasRPT)
>
> This should work according to new documentation:
>
> <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
> spatialContextFactory="org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory"
> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
>
> Although maybe it's not needed to add the spatialContextFactory, as per your
> example...
>
> Thanks,
> Alex
>
> On Mon, May 9, 2016 at 9:23 PM, Jorge Garrido gomez <
> jorge.garr...@hovanetworks.com> wrote:
>
>> Hello Alex,
>>
>> We solved the issue using the Spatial:
>>
>> <fieldType name="location_rpt"
>> class="solr.SpatialRecursivePrefixTreeFieldType"
>> distErrPct="0.025"
>> maxDistErr="0.09"
>> units="degrees"
>> />
>>
>> We use that definition for the field and it works perfectly. I hope this can be
>> helpful; if you want more info, maybe we can help you.
>>
>>
>> Thank you! :-)
>>
>>
>>
>> On May 9, 2016, at 12:06 PM, Alex De la rosa <alex.rosa@gmail.com>
>> wrote:
>>
>> Ok, I solved the issues for the datatypes "int" and "string", but I'm still
>> getting errors for the "location_rpt":
>>
>> <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
>> spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
>> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
>>
>> 2016-05-09 19:03:56.798 [error] <0.588.0>@yz_index:core_create:287
>> Couldn't create index leaders_b:
>> {ok,"500",[{"Content-Type","text/html;charset=ISO-8859-1"},{"Cache-Control","must-revalidate,no-cache,no-store"},{"Content-Length","11214"}],<<"\n\n> http-equiv=\"Content-Type\" content=\"text/html;
>> charset=ISO-8859-1\"/>\nError 500
>> {msg=com/vividsolutions/jts/geom/CoordinateSequenceFactory,trace=java.lang.NoClassDefFoundError:
>> com/vividsolutions/jts/geom/CoordinateSequenceFactory\n\tat
>> java.lang.Class.getDeclaredConstructors0(Native Method)\n\tat
>> java.lang.Class.privateGetDeclaredConstructors(Class.java:2595)\n\tat
>> java.lang.Class.getConstructor0(Class.java:2895)\n\tat
>> java.lang.Class.newInstance(Class.java:354)\n\tat
>> com.spatial4j.core.context.SpatialContextFactory.makeSpatialContext(SpatialContextFactory.java:96)\n\tat
>> org.apache.solr.schema.AbstractSpatialFieldType.init(AbstractSpatialFieldType.java:107)\n\tat
>> org.apache.solr.schema.AbstractSpatialPrefixTreeFieldType.init(AbstractSpatialPrefixTreeFieldType.java:43)\n\tat
>> org.apache.solr.schema.SpatialRecursivePrefixTreeFieldType.init(SpatialRecursivePrefixTreeFieldType.java:37)\n\tat
>> org.apache.solr.schema.FieldType.setArgs(FieldType.java:165)\n\tat
>> org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:141)\n\tat
>> org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:43)\n\tat
>> org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:190)\n\tat
>> org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:468)\n\tat
>> org.apache.solr.schema.IndexSchema.init(IndexSchema.java:166)\n\tat
>> org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:55)\n\tat
>> org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:69)\n\tat
>> org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:559)\n\tat
>> org.apache.solr.core.CoreContainer.create(CoreContainer.java:597)\n\tat
>> org.apache.solr.handler.admin.Co

Re: issue creating a custom index/schema

2016-05-09 Thread Alex De la rosa
I see... that part of the schema was written in another code block in the
documentation:

<fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>

will try with that :) thanks

Alex

On Mon, May 9, 2016 at 8:48 PM, Vitaly <13vitam...@gmail.com> wrote:

> You don't have this type under <types>. The failure is about "int" because
> it's the first one encountered in the list of fields.
>
> Regards,
> Vitaly
>
> On Mon, May 9, 2016 at 7:16 PM, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
>> Hi there,
>>
>> I'm trying to create a custom index as seen at
>> http://docs.basho.com/riak/kv/2.1.4/developing/usage/search-schemas and
>> I'm getting the following errors in my log:
>>
>> 2016-05-09 18:08:05.229 [error] <0.588.0>@yz_index:core_create:287
>> Couldn't create index scores: {ok,"400",[{"Content-Type","application/xml;
>> charset=UTF-8"},{"Transfer-Encoding","chunked"}],<<"> encoding=\"UTF-8\"?>\n\n> name=\"status\">40026> name=\"error\">Error CREATEing SolrCore 'scores': *Unable
>> to create core: scores Caused by: Unknown fieldType 'int' specified on
>> field points*400\n\n">>}
>>
>> 2016-05-09 18:08:43.096 [error] <0.9673.1>@yz_index:sync_index:464 Solr
>> core error after trying to create index scores:
>> <<"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>> *Unknown fieldType 'int' specified on field points*. Schema file is
>> /var/lib/riak/yz/scores/./leaders.xml">>
>>
>> It's saying the type "int" is unknown!! Any ideas why? This is the schema
>> I'm uploading:
>>
>> 
>> 
>>   
>>
>>> stored="false"/>
>>> stored="true"/>
>>> stored="true"/>
>>> stored="true"/>
>>> stored="true"  multiValued="false" required="true"/>
>>> stored="false" multiValued="false"/>
>>> stored="false" multiValued="false"/>
>>> stored="false" multiValued="false"/>
>>> stored="false" multiValued="false"/>
>>> stored="true"  multiValued="false"/>
>>> stored="true"  multiValued="false"/>
>>> stored="true"  multiValued="false"/>
>>> stored="false" multiValued="false"/>
>>   
>>
>>   <uniqueKey>_yz_id</uniqueKey>
>>
>>   
>> > class="solr.SpatialRecursivePrefixTreeFieldType"
>> spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
>> distErrPct="0.025" maxDistErr="0.09" units="degrees" />
>> > multiValued="true" class="solr.StrField" />
>> 
>> > sortMissingLast="true" />
>>   
>> 
>>
>> Thanks,
>> Alex
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: issue creating a custom index/schema

2016-05-09 Thread Alex De la rosa
Ok, I solved the issues for the datatypes "int" and "string", but I'm still
getting errors for the "location_rpt":

<fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
distErrPct="0.025" maxDistErr="0.09" units="degrees" />

2016-05-09 19:03:56.798 [error] <0.588.0>@yz_index:core_create:287 Couldn't
create index leaders_b:
{ok,"500",[{"Content-Type","text/html;charset=ISO-8859-1"},{"Cache-Control","must-revalidate,no-cache,no-store"},{"Content-Length","11214"}],<<"\n\n\nError 500
{msg=com/vividsolutions/jts/geom/CoordinateSequenceFactory,trace=java.lang.NoClassDefFoundError:
com/vividsolutions/jts/geom/CoordinateSequenceFactory\n\tat
java.lang.Class.getDeclaredConstructors0(Native Method)\n\tat
java.lang.Class.privateGetDeclaredConstructors(Class.java:2595)\n\tat
java.lang.Class.getConstructor0(Class.java:2895)\n\tat
java.lang.Class.newInstance(Class.java:354)\n\tat
com.spatial4j.core.context.SpatialContextFactory.makeSpatialContext(SpatialContextFactory.java:96)\n\tat
org.apache.solr.schema.AbstractSpatialFieldType.init(AbstractSpatialFieldType.java:107)\n\tat
org.apache.solr.schema.AbstractSpatialPrefixTreeFieldType.init(AbstractSpatialPrefixTreeFieldType.java:43)\n\tat
org.apache.solr.schema.SpatialRecursivePrefixTreeFieldType.init(SpatialRecursivePrefixTreeFieldType.java:37)\n\tat
org.apache.solr.schema.FieldType.setArgs(FieldType.java:165)\n\tat
org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:141)\n\tat
org.apache.solr.schema.FieldTypePluginLoader.init(FieldTypePluginLoader.java:43)\n\tat
org.apache.solr.util.plugin.AbstractPluginLoader.load(AbstractPluginLoader.java:190)\n\tat
org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:468)\n\tat
org.apache.solr.schema.IndexSchema.init(IndexSchema.java:166)\n\tat
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:55)\n\tat
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:69)\n\tat
org.apache.solr.core.CoreContainer.createFromLocal(CoreContainer.java:559)\n\tat
org.apache.solr.core.CoreContainer.create(CoreContainer.java:597)\n\tat
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:509)\n\tat
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:152)\n\tat
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:732)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:268)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:217)\n\tat
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)\n\tat
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)\n\tat
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)\n\tat
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)\n\tat
org.eclipse.jetty.server.Server.handle(Server.java:368)\n\tat
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)\n\tat
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)\n\tat
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)\n\tat
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)\n\tat
org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.ja...">>}
Does anyone know how to fix it?

Thanks,
Alex

On Mon, May 9, 2016 at 8:50 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> I see... that part of the schema was written in another code block in the
> documentation:
>
> <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
>
> will try with that :) thanks
>
> Alex
>
> On Mon, May 9, 2016 at 8:48 PM, Vitaly <13vitam...@gmail.com> wrote:
>
>> You don't have this type under . The failure is about "int"
>> because it's the first one encountered in the list of fields.
>>
>> Regards,
>> Vitaly
>>

issue creating a custom index/schema

2016-05-09 Thread Alex De la rosa
Hi there,

I'm trying to create a custom index as seen at
http://docs.basho.com/riak/kv/2.1.4/developing/usage/search-schemas and I'm
getting the following errors in my log:

2016-05-09 18:08:05.229 [error] <0.588.0>@yz_index:core_create:287 Couldn't
create index scores: {ok,"400",[{"Content-Type","application/xml;
charset=UTF-8"},{"Transfer-Encoding","chunked"}],<<"\n\n40026Error CREATEing SolrCore 'scores': *Unable
to create core: scores Caused by: Unknown fieldType 'int' specified on
field points*400\n\n">>}

2016-05-09 18:08:43.096 [error] <0.9673.1>@yz_index:sync_index:464 Solr
core error after trying to create index scores:
<<"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
*Unknown fieldType 'int' specified on field points*. Schema file is
/var/lib/riak/yz/scores/./leaders.xml">>

It's saying the type "int" is unknown!! Any ideas why? This is the schema I'm
uploading:



  
   
   
   
   
   
   
   
   
   
   
   
   
   
   
  

  <uniqueKey>_yz_id</uniqueKey>

  




  


Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak KV vs Riak TS

2016-04-15 Thread Alex De la rosa
Hi there,

Can somebody give some answers to this? I really want to know if Riak TS is
something for me to use, or if I'm better off with Riak KV.

Thanks,
Alex

On Thu, Apr 14, 2016 at 3:42 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> Hi there,
>
> I had been building a project for a while (I got stuck with work and it kinda
> got delayed for a year, until now that I could resume it) and I was planning
> to use the following setup (the project is a social network... so imagine
> something like Facebook):
>
> - Riak CS: Video storage and streaming
> - Riak KV: Everything else... Users, Profiles, Photos, Statuses, Likes,
> Counters, etc...
>
> From what I've seen now, Riak CS is gone; I guess it was replaced by Riak
> S2... and now there is the new Riak TS. Now I wonder if Riak TS could be
> good for my project and what use case I could really give it (maybe users'
> status updates?)
>
> However, in this situation I could think of:
>
> - Riak S2: Video storage and streaming (photo storage too)
> - Riak TS: Statuses, Updates, News, etc...
> - Riak KV: Users, Profiles, Settings
>
> Is this something to think of going forward? Also... would I require a
> cluster of at least 5 servers for each Riak variant?? That would require a
> minimum of 15 servers just to start!... a bit too much.
>
> Can somebody enlighten me a little on this new Riak TS? Also, is it not yet
> available to download/test?
>
> Thanks,
> Alex
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: is Riak S2 same as Riak CS 2.1.1??

2016-04-14 Thread Alex De la rosa
Perfect, good to know :) thanks

Yeah, it's pretty confusing; I thought I was just hitting some old release or
something.

Thanks,
Alex

On Thu, Apr 14, 2016 at 6:56 PM, Justin Pease <jpe...@basho.com> wrote:

> Alex,
>
> Apologies for the confusion. Riak S2 is a rebranding of Riak CS. So yes,
> they are the same.
>
> --
>
> *Justin Pease*
> VP, Services
> Basho Technologies, Inc.
>
>
>
> On Thu, Apr 14, 2016 at 8:35 AM, Alex De la rosa <alex.rosa@gmail.com>
> wrote:
>
>> Hi there,
>>
>> I was trying to download Riak S2 to do some testing on it, but it seems the
>> download page takes you to Riak CS 2.1.1:
>>
>> http://docs.basho.com/riak/cs/2.1.1/downloads/
>>
>> Are they the same?
>>
>> Also, in https://packagecloud.io/basho there is no indication of
>> Riak-s2, only Riak-cs.
>>
>> It's very confusing.
>>
>> Thanks,
>> Alex
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


is Riak S2 same as Riak CS 2.1.1??

2016-04-14 Thread Alex De la rosa
Hi there,

I was trying to download Riak S2 to do some testing on it, but it seems the
download page takes you to Riak CS 2.1.1:

http://docs.basho.com/riak/cs/2.1.1/downloads/

Are they the same?

Also, in https://packagecloud.io/basho there is no indication of Riak-s2,
only Riak-cs.

It's very confusing.

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects from Riak KV

2016-04-14 Thread Alex De la rosa
I only have 1 node; it's for testing purposes.

riak-admin security status

RPC to 'r...@xx.xx.xx.xx' failed: {'EXIT',
    {undef,
        [{riak_core_console,security_status,
             [[]],
             []},
         {rpc,'-handle_call_call/6-fun-0-',5,
             [{file,"rpc.erl"},{line,205}]}]}}

Thanks,
Alex

On Thu, Apr 14, 2016 at 6:23 PM, Luke Bakken <lbak...@basho.com> wrote:

> Just to be sure, could you run this command on all nodes to ensure
> security is disabled?
>
> riak-admin security status
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Thu, Apr 14, 2016 at 7:07 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > I seem to be having this error messages on the log, any ideas?
> >
> > 2016-04-14 16:03:00.460 [error] <0.5460.8143> CRASH REPORT Process
> > <0.5460.8143> with 0 neighbours crashed with reason: call to undefined
> > function webmachine_error_handler:render_error(404,
> >
> {webmachine_request,{wm_reqstate,#Port<0.147587697>,[],undefined,undefined,"xx.xx.xx.xx",{wm_reqdata,...},...}},
> > {none,none,[]})
> >
> > Thanks,
> > Alex
> >
> > On Thu, Apr 14, 2016 at 6:04 PM, Luke Bakken <lbak...@basho.com> wrote:
> >>
> >> Hi Alex,
> >>
> >> Thanks for running that. This proves that it is not a Python client
> >> issue. You can see the transcript of storing, fetching and deleting an
> >> object successfully here:
> >> https://gist.github.com/lukebakken/f1f3cbc96c2762eabb2f124b42797fda
> >>
> >> At this point, I suggest checking the error.log files on each Riak
> >> node for information. Or, if you run "riak-debug" on your cluster and
> >> provide the archives somewhere (private access), I could take a look.
> >>
> >> --
> >> Luke Bakken
> >> Engineer
> >> lbak...@basho.com
> >>
> >>
> >> On Thu, Apr 14, 2016 at 6:57 AM, Alex De la rosa
> >> <alex.rosa@gmail.com> wrote:
> >> > Hi Luke, I tried and get this and didn't work:
> >> >
> >> > ~ # curl -4vvv -XDELETE
> >> > http://xx.xx.xx.xx:8098/buckets/test/keys/something
> >> > * Hostname was NOT found in DNS cache
> >> > *   Trying xx.xx.xx.xx...
> >> > * Connected to xx.xx.xx.xx (xx.xx.xx.xx) port 8098 (#0)
> >> >> DELETE /buckets/test/keys/something HTTP/1.1
> >> >> User-Agent: curl/7.35.0
> >> >> Host: xx.xx.xx.xx:8098
> >> >> Accept: */*
> >> >>
> >> > * Empty reply from server
> >> > * Connection #0 to host xx.xx.xx.xx left intact
> >> > curl: (52) Empty reply from server
> >> >
> >> > Thanks,
> >> > Alex
> >> >
> >> > On Thu, Apr 14, 2016 at 5:50 PM, Alex De la rosa
> >> > <alex.rosa@gmail.com>
> >> > wrote:
> >> >>
> >> >> I can try that, but I would like to do it via the python client
> >> >> itself...
> >> >>
> >> >> Thanks,
> >> >> Rohman
> >> >>
> >> >> On Thu, Apr 14, 2016 at 5:47 PM, Luke Bakken <lbak...@basho.com>
> wrote:
> >> >>>
> >> >>> Hi Alex,
> >> >>>
> >> >>> Can you use the HTTP API to delete an object? Something like:
> >> >>>
> >> >>> curl -4vvv -XDELETE riak-host:8098/buckets/test/keys/something
> >> >>>
> >> >>> --
> >> >>> Luke Bakken
> >> >>> Engineer
> >> >>> lbak...@basho.com
> >> >>>
> >> >>>
> >> >>> On Thu, Apr 14, 2016 at 2:05 AM, Alex De la rosa
> >> >>> <alex.rosa@gmail.com> wrote:
> >> >>> > I upgraded the Python library to the latest and is still
> failing...
> >> >>> > I'm
> >> >>> > unable to delete any objects at all.
> >> >>> >
> >> >>> > ~ # pip show riak
> >> >>> > ---
> >> >>> > Name: riak
> >> >>> > Version: 2.4.2
> >> >>> > Location: /usr/local/lib/python2.7/dist-packages
> >> >>> > Requires: six, pyOpenSSL, protobuf
> >> >>> >
> >

Re: Can't delete objects from Riak KV

2016-04-14 Thread Alex De la rosa
I seem to be getting these error messages in the log, any ideas?

2016-04-14 16:03:00.460 [error] <0.5460.8143> CRASH REPORT Process
<0.5460.8143> with 0 neighbours crashed with reason: call to undefined
function webmachine_error_handler:render_error(404,
{webmachine_request,{wm_reqstate,#Port<0.147587697>,[],undefined,undefined,"xx.xx.xx.xx",{wm_reqdata,...},...}},
{none,none,[]})
Thanks,
Alex

On Thu, Apr 14, 2016 at 6:04 PM, Luke Bakken <lbak...@basho.com> wrote:

> Hi Alex,
>
> Thanks for running that. This proves that it is not a Python client
> issue. You can see the transcript of storing, fetching and deleting an
> object successfully here:
> https://gist.github.com/lukebakken/f1f3cbc96c2762eabb2f124b42797fda
>
> At this point, I suggest checking the error.log files on each Riak
> node for information. Or, if you run "riak-debug" on your cluster and
> provide the archives somewhere (private access), I could take a look.
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Thu, Apr 14, 2016 at 6:57 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > Hi Luke, I tried and get this and didn't work:
> >
> > ~ # curl -4vvv -XDELETE
> http://xx.xx.xx.xx:8098/buckets/test/keys/something
> > * Hostname was NOT found in DNS cache
> > *   Trying xx.xx.xx.xx...
> > * Connected to xx.xx.xx.xx (xx.xx.xx.xx) port 8098 (#0)
> >> DELETE /buckets/test/keys/something HTTP/1.1
> >> User-Agent: curl/7.35.0
> >> Host: xx.xx.xx.xx:8098
> >> Accept: */*
> >>
> > * Empty reply from server
> > * Connection #0 to host xx.xx.xx.xx left intact
> > curl: (52) Empty reply from server
> >
> > Thanks,
> > Alex
> >
> > On Thu, Apr 14, 2016 at 5:50 PM, Alex De la rosa <
> alex.rosa@gmail.com>
> > wrote:
> >>
> >> I can try that, but I would like to do it via the python client
> itself...
> >>
> >> Thanks,
> >> Rohman
> >>
> >> On Thu, Apr 14, 2016 at 5:47 PM, Luke Bakken <lbak...@basho.com> wrote:
> >>>
> >>> Hi Alex,
> >>>
> >>> Can you use the HTTP API to delete an object? Something like:
> >>>
> >>> curl -4vvv -XDELETE riak-host:8098/buckets/test/keys/something
> >>>
> >>> --
> >>> Luke Bakken
> >>> Engineer
> >>> lbak...@basho.com
> >>>
> >>>
> >>> On Thu, Apr 14, 2016 at 2:05 AM, Alex De la rosa
> >>> <alex.rosa....@gmail.com> wrote:
> >>> > I upgraded the Python library to the latest and is still failing...
> I'm
> >>> > unable to delete any objects at all.
> >>> >
> >>> > ~ # pip show riak
> >>> > ---
> >>> > Name: riak
> >>> > Version: 2.4.2
> >>> > Location: /usr/local/lib/python2.7/dist-packages
> >>> > Requires: six, pyOpenSSL, protobuf
> >>> >
> >>> > Everything else seems fine, just timeouts when deleting :(
> >>> >
> >>> > Thanks,
> >>> > Alex
> >>> >
> >>> > On Thu, Apr 14, 2016 at 8:53 AM, Alex De la rosa
> >>> > <alex.rosa@gmail.com>
> >>> > wrote:
> >>> >>
> >>> >> Hi there,
> >>> >>
> >>> >> I'm trying to delete objects from riak with the python library and
> is
> >>> >> timing out, any ideas? (this example is from a simple object, but
> also
> >>> >> have
> >>> >> issues with bucket types with map objects, etc...)... Just I seem to
> >>> >> unable
> >>> >> to delete anything, just times out.
> >>> >>
> >>> >> >>> import riak
> >>> >> >>> RIAK = riak.RiakClient(protocol = 'pbc', nodes = [{'host':
> >>> >> >>> '',
> >>> >> >>> 'http_port': 8098, 'pb_port': 8087}])
> >>> >> >>> x = RIAK.bucket('test').get('something')
> >>> >> >>> print x.data
> >>> >> {"something":"here"}
> >>> >> >>> x.delete()
> >>> >> Traceback (most recent call last):
> >>> >>   File "", line 1, in 
> >>> >>   File "/usr/local/lib/python2.7/dist-packages/riak/riak_object.py",
> >>> >> line
> >>> >> 329, in delete
> 

Re: Can't delete objects from Riak KV

2016-04-14 Thread Alex De la rosa
Hi Luke, I tried it and got this; it didn't work:

~ # curl -4vvv -XDELETE http://xx.xx.xx.xx:8098/buckets/test/keys/something
* Hostname was NOT found in DNS cache
*   Trying xx.xx.xx.xx...
* Connected to xx.xx.xx.xx (xx.xx.xx.xx) port 8098 (#0)
> DELETE /buckets/test/keys/something HTTP/1.1
> User-Agent: curl/7.35.0
> Host: xx.xx.xx.xx:8098
> Accept: */*
>
* Empty reply from server
* Connection #0 to host xx.xx.xx.xx left intact
curl: (52) Empty reply from server

Thanks,
Alex

On Thu, Apr 14, 2016 at 5:50 PM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> I can try that, but I would like to do it via the python client itself...
>
> Thanks,
> Rohman
>
> On Thu, Apr 14, 2016 at 5:47 PM, Luke Bakken <lbak...@basho.com> wrote:
>
>> Hi Alex,
>>
>> Can you use the HTTP API to delete an object? Something like:
>>
>> curl -4vvv -XDELETE riak-host:8098/buckets/test/keys/something
>>
>> --
>> Luke Bakken
>> Engineer
>> lbak...@basho.com
>>
>>
>> On Thu, Apr 14, 2016 at 2:05 AM, Alex De la rosa
>> <alex.rosa@gmail.com> wrote:
>> > I upgraded the Python library to the latest and is still failing... I'm
>> > unable to delete any objects at all.
>> >
>> > ~ # pip show riak
>> > ---
>> > Name: riak
>> > Version: 2.4.2
>> > Location: /usr/local/lib/python2.7/dist-packages
>> > Requires: six, pyOpenSSL, protobuf
>> >
>> > Everything else seems fine, just timeouts when deleting :(
>> >
>> > Thanks,
>> > Alex
>> >
>> > On Thu, Apr 14, 2016 at 8:53 AM, Alex De la rosa <
>> alex.rosa@gmail.com>
>> > wrote:
>> >>
>> >> Hi there,
>> >>
>> >> I'm trying to delete objects from riak with the python library and is
>> >> timing out, any ideas? (this example is from a simple object, but also
>> have
>> >> issues with bucket types with map objects, etc...)... Just I seem to
>> unable
>> >> to delete anything, just times out.
>> >>
>> >> >>> import riak
>> >> >>> RIAK = riak.RiakClient(protocol = 'pbc', nodes = [{'host':
>> '',
>> >> >>> 'http_port': 8098, 'pb_port': 8087}])
>> >> >>> x = RIAK.bucket('test').get('something')
>> >> >>> print x.data
>> >> {"something":"here"}
>> >> >>> x.delete()
>> >> Traceback (most recent call last):
>> >>   File "", line 1, in 
>> >>   File "/usr/local/lib/python2.7/dist-packages/riak/riak_object.py",
>> line
>> >> 329, in delete
>> >> timeout=timeout)
>> >>   File
>> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
>> >> line 196, in wrapper
>> >> return self._with_retries(pool, thunk)
>> >>   File
>> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
>> >> line 138, in _with_retries
>> >> return fn(transport)
>> >>   File
>> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
>> >> line 194, in thunk
>> >> return fn(self, transport, *args, **kwargs)
>> >>   File
>> "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py",
>> >> line 744, in delete
>> >> pw=pw, timeout=timeout)
>> >>   File
>> >>
>> "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py",
>> >> line 283, in delete
>> >> riak.pb.messages.MSG_CODE_DEL_RESP)
>> >>   File
>> >>
>> "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py",
>> >> line 34, in _request
>> >> return self._recv_msg(expect)
>> >>   File
>> >>
>> "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py",
>> >> line 165, in _recv_msg
>> >> raise RiakError(bytes_to_str(err.errmsg))
>> >> riak.riak_error.RiakError: 'timeout'
>> >>
>> >> My Riak version is 2.1.4
>> >>
>> >> My Python library is (installed via pip):
>> >> Name: riak
>> >> Version: 2.2.0
>> >> Location: /usr/local/lib/python2.7/dist-packages
>> >> Requires: six, pyOpenSSL, riak-pb
>> >>
>> >> Thanks,
>> >> Alex
>> >
>> >
>> >
>> > ___
>> > riak-users mailing list
>> > riak-users@lists.basho.com
>> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> >
>>
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects from Riak KV

2016-04-14 Thread Alex De la rosa
I can try that, but I would like to do it via the python client itself...

Thanks,
Rohman

On Thu, Apr 14, 2016 at 5:47 PM, Luke Bakken <lbak...@basho.com> wrote:

> Hi Alex,
>
> Can you use the HTTP API to delete an object? Something like:
>
> curl -4vvv -XDELETE riak-host:8098/buckets/test/keys/something
>
> --
> Luke Bakken
> Engineer
> lbak...@basho.com
>
>
> On Thu, Apr 14, 2016 at 2:05 AM, Alex De la rosa
> <alex.rosa@gmail.com> wrote:
> > I upgraded the Python library to the latest and is still failing... I'm
> > unable to delete any objects at all.
> >
> > ~ # pip show riak
> > ---
> > Name: riak
> > Version: 2.4.2
> > Location: /usr/local/lib/python2.7/dist-packages
> > Requires: six, pyOpenSSL, protobuf
> >
> > Everything else seems fine, just timeouts when deleting :(
> >
> > Thanks,
> > Alex
> >
> > On Thu, Apr 14, 2016 at 8:53 AM, Alex De la rosa <
> alex.rosa@gmail.com>
> > wrote:
> >>
> >> Hi there,
> >>
> >> I'm trying to delete objects from riak with the python library and is
> >> timing out, any ideas? (this example is from a simple object, but also
> have
> >> issues with bucket types with map objects, etc...)... Just I seem to
> unable
> >> to delete anything, just times out.
> >>
> >> >>> import riak
> >> >>> RIAK = riak.RiakClient(protocol = 'pbc', nodes = [{'host':
> '',
> >> >>> 'http_port': 8098, 'pb_port': 8087}])
> >> >>> x = RIAK.bucket('test').get('something')
> >> >>> print x.data
> >> {"something":"here"}
> >> >>> x.delete()
> >> Traceback (most recent call last):
> >>   File "", line 1, in 
> >>   File "/usr/local/lib/python2.7/dist-packages/riak/riak_object.py",
> line
> >> 329, in delete
> >> timeout=timeout)
> >>   File
> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
> >> line 196, in wrapper
> >> return self._with_retries(pool, thunk)
> >>   File
> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
> >> line 138, in _with_retries
> >> return fn(transport)
> >>   File
> "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py",
> >> line 194, in thunk
> >> return fn(self, transport, *args, **kwargs)
> >>   File
> "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py",
> >> line 744, in delete
> >> pw=pw, timeout=timeout)
> >>   File
> >>
> "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py",
> >> line 283, in delete
> >> riak.pb.messages.MSG_CODE_DEL_RESP)
> >>   File
> >>
> "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py",
> >> line 34, in _request
> >> return self._recv_msg(expect)
> >>   File
> >>
> "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py",
> >> line 165, in _recv_msg
> >> raise RiakError(bytes_to_str(err.errmsg))
> >> riak.riak_error.RiakError: 'timeout'
> >>
> >> My Riak version is 2.1.4
> >>
> >> My Python library is (installed via pip):
> >> Name: riak
> >> Version: 2.2.0
> >> Location: /usr/local/lib/python2.7/dist-packages
> >> Requires: six, pyOpenSSL, riak-pb
> >>
> >> Thanks,
> >> Alex
> >
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak KV vs Riak TS

2016-04-14 Thread Alex De la rosa
Hi there,

I had been building a project for a while (I got stuck with work and it kinda
got delayed for a year, until now that I could resume it) and I was planning
to use the following setup (the project is a social network... so imagine
something like Facebook):

- Riak CS: Video storage and streaming
- Riak KV: Everything else... Users, Profiles, Photos, Statuses, Likes,
Counters, etc...

From what I've seen now, Riak CS is gone; I guess it was replaced by Riak
S2... and now there is the new Riak TS. Now I wonder if Riak TS could be
good for my project and what use case I could really give it (maybe users'
status updates?)

However, in this situation I could think of:

- Riak S2: Video storage and streaming (photo storage too)
- Riak TS: Statuses, Updates, News, etc...
- Riak KV: Users, Profiles, Settings

Is this something to think of going forward? Also... would I require a
cluster of at least 5 servers for each Riak variant?? That would require a
minimum of 15 servers just to start!... a bit too much.

Can somebody enlighten me a little on this new Riak TS? Also, is it not yet
available to download/test?

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects from Riak KV

2016-04-14 Thread Alex De la rosa
I upgraded the Python library to the latest and it's still failing... I'm
unable to delete any objects at all.

~ # pip show riak
---
Name: riak
Version: 2.4.2
Location: /usr/local/lib/python2.7/dist-packages
Requires: six, pyOpenSSL, protobuf

Everything else seems fine, just timeouts when deleting :(

Thanks,
Alex
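
One cheap check while debugging: the delete call accepts an explicit timeout (the timeout keyword is visible in the traceback below), so a sketch like the following, with an arbitrarily long value, helps distinguish a too-short client timeout from a genuine server-side problem:

>>> x = RIAK.bucket('test').get('something')
>>> x.delete(timeout=60000)  # milliseconds is assumed here; adjust as needed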

On Thu, Apr 14, 2016 at 8:53 AM, Alex De la rosa <alex.rosa@gmail.com>
wrote:

> Hi there,
>
> I'm trying to delete objects from Riak with the Python library and it's
> timing out, any ideas? (This example is from a simple object, but I also have
> issues with bucket types with map objects, etc.) I just seem to be unable
> to delete anything; it just times out.
>
> >>> import riak
> >>> RIAK = riak.RiakClient(protocol = 'pbc', nodes = [{'host': '',
> 'http_port': 8098, 'pb_port': 8087}])
> >>> x = RIAK.bucket('test').get('something')
> >>> print x.data
> {"something":"here"}
> >>> x.delete()
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/local/lib/python2.7/dist-packages/riak/riak_object.py", line 329, in delete
>     timeout=timeout)
>   File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 196, in wrapper
>     return self._with_retries(pool, thunk)
>   File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 138, in _with_retries
>     return fn(transport)
>   File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 194, in thunk
>     return fn(self, transport, *args, **kwargs)
>   File "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py", line 744, in delete
>     pw=pw, timeout=timeout)
>   File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 283, in delete
>     riak.pb.messages.MSG_CODE_DEL_RESP)
>   File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 34, in _request
>     return self._recv_msg(expect)
>   File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 165, in _recv_msg
>     raise RiakError(bytes_to_str(err.errmsg))
> riak.riak_error.RiakError: 'timeout'
>
> My Riak version is 2.1.4
>
> My Python library is (installed via pip):
> Name: riak
> Version: 2.2.0
> Location: /usr/local/lib/python2.7/dist-packages
> Requires: six, pyOpenSSL, riak-pb
>
> Thanks,
> Alex
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Can't delete objects from Riak KV

2016-04-13 Thread Alex De la rosa
Hi there,

I'm trying to delete objects from Riak with the Python library and it's
timing out, any ideas? (This example is from a simple object, but I also have
issues with bucket types with map objects, etc.) I just seem to be unable
to delete anything; it just times out.

>>> import riak
>>> RIAK = riak.RiakClient(protocol = 'pbc', nodes = [{'host': '',
'http_port': 8098, 'pb_port': 8087}])
>>> x = RIAK.bucket('test').get('something')
>>> print x.data
{"something":"here"}
>>> x.delete()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/riak/riak_object.py", line 329, in delete
    timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 196, in wrapper
    return self._with_retries(pool, thunk)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 138, in _with_retries
    return fn(transport)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 194, in thunk
    return fn(self, transport, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py", line 744, in delete
    pw=pw, timeout=timeout)
  File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 283, in delete
    riak.pb.messages.MSG_CODE_DEL_RESP)
  File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 34, in _request
    return self._recv_msg(expect)
  File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 165, in _recv_msg
    raise RiakError(bytes_to_str(err.errmsg))
riak.riak_error.RiakError: 'timeout'

My Riak version is 2.1.4

My Python library is (installed via pip):
Name: riak
Version: 2.2.0
Location: /usr/local/lib/python2.7/dist-packages
Requires: six, pyOpenSSL, riak-pb

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak crashing until I delete AAE folders

2015-06-14 Thread Alex De la rosa
My Riak node (I was doing some tests with Riak 2.0.0-1 on Ubuntu, having
just 1 node), after some time, all of a sudden started crashing, and even
if I start it again, it holds up for only a few seconds before crashing
again.

Then I deleted the AAE folders within anti_entropy and yz_anti_entropy
and it's working fine again... I would like to know the reason for that : )

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak bind to hostname instead of IP Address!!

2015-04-30 Thread Alex De la rosa
You really don't want to do that... In the past, when using PostgreSQL and
binding it to a subdomain like db.yourdomain.com (just to avoid having to
remember IPs, even though it doesn't really matter at the code level), I
experienced random and unexpected slow timings and even disconnections for
quite some painful time, just because the DNS servers in between were having
issues, etc... Using an IP address will never incur this kind of issue, as no
hostname translation has to be performed.

I don't see any benefit in doing so. Just use IPs and forget about DNS-related
issues.

Alex
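
In riak.conf terms, the split described in this thread would look roughly like this (the address and hostname are illustrative assumptions):

## A hostname is fine for the Erlang node name:
nodename = riak@riak1.example.com
## Bind the client-facing HTTP/PB listeners to IP addresses, not DNS names:
listener.http.internal = 10.0.0.5:8098
listener.protobuf.internal = 10.0.0.5:8087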

On Thu, Apr 30, 2015 at 9:05 AM, Praveen Baratam praveen.bara...@gmail.com
wrote:

 Just to clarify... I was referring to...

 Is it legal to bind Riak to a hostname instead of IP?


 Yes, it's legal but it will incur the overhead of the lookup. If you're
 talking about the HTTP/PBC interfaces, it's best to use IP addresses, but
 for the node name, it's totally fine to use a hostname.
 Will it be a significant overhead? I believe it will just be a
 function call after the initial lookup rather than a network call.

 Praveen

 On Wed, Apr 29, 2015 at 12:40 AM, Praveen Baratam 
 praveen.bara...@gmail.com wrote:

 Hello All,

 I found in the FAQ that using hostnames instead of IP addresses will
 incur an overhead! Will it be a significant overhead? I believe that if a
 hostname is queried during startup, the same will be used throughout the life
 of the VM, or in an optimized case until the DNS TTL expires!!

 Best,

 Praveen Baratam

 about.me http://about.me/praveen.baratam




 --
 Dr. Praveen Baratam

 about.me http://about.me/praveen.baratam

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: warnings on updating from 2.0 to 2.1

2015-04-29 Thread Alex De la rosa
Ok, cool : ) thanks for the info!

Alex
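
For reference, the cleanup Luke describes below is a single directory removal; the path is taken verbatim from the warnings, but double-check that nothing else lives under it before deleting:

rm -rf /usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d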

On Wed, Apr 29, 2015 at 1:30 PM, Luke Bakken lbak...@basho.com wrote:

 Hi Alex,

 This issue will be addressed in a future release with this PR:
 https://github.com/basho/yokozuna/pull/413

 You may remove the /usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d
 directory as the priv/solr/solr-webapp directory should be its only
 descendant. It is used as temporary space by Solr. In the future, this
 will be located in /var/riak along with other variable temp data.

 --
 Luke Bakken
 Engineer
 lbak...@basho.com


 On Wed, Apr 29, 2015 at 2:28 AM, Alex De la rosa
 alex.rosa@gmail.com wrote:
  I'm getting the following warnings:
 
  Preparing to unpack .../riak_2.1.0-1_amd64.deb ...
  Unpacking riak (2.1.0-1) over (2.0.5-1) ...
  dpkg: warning: unable to delete old directory
  '/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d/priv/solr/solr-webapp':
  Directory not empty
  dpkg: warning: unable to delete old directory
  '/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d/priv/solr': Directory not
  empty
  dpkg: warning: unable to delete old directory
  '/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d/priv': Directory not empty
  dpkg: warning: unable to delete old directory
  '/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d': Directory not empty
 
  Should I delete those folders manually?
 
  Thanks,
  Alex
 
 
  ___
  riak-users mailing list
  riak-users@lists.basho.com
  http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


warnings on updating from 2.0 to 2.1

2015-04-29 Thread Alex De la rosa
I'm getting the following warnings:

Preparing to unpack .../riak_2.1.0-1_amd64.deb ...
Unpacking riak (2.1.0-1) over (2.0.5-1) ...
dpkg: warning: unable to delete old directory
'/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d/priv/solr/solr-webapp':
Directory not empty
dpkg: warning: unable to delete old directory
'/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d/priv/solr': Directory not
empty
dpkg: warning: unable to delete old directory
'/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d/priv': Directory not empty
dpkg: warning: unable to delete old directory
'/usr/lib/riak/lib/yokozuna-2.0.0-34-g122659d': Directory not empty

Should I delete those folders manually?

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: scaling up vertically upgrading HD

2015-04-23 Thread Alex De la rosa
Cool, thank you :) However, is it true that if 4 nodes are 1TB and the other
one is 2TB, the 5 nodes will act as having 1TB each, right? (I guess that's
what you mean by "Riak allocates data to nodes evenly".)

Thanks,
Alex
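
The arithmetic behind that, as a small sketch (disk sizes are the ones from the example quoted below; the even-distribution behavior is the assumption Jon states):

# Riak spreads data evenly, so the smallest disk bounds every node.
disks_gb = [500, 500, 750, 500, 750]
usable_gb = min(disks_gb) * len(disks_gb)
print(usable_gb)  # 2500 GB effective, not the 3000 GB physically present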

On Thu, Apr 23, 2015 at 4:40 PM, Jon Meredith jmered...@basho.com wrote:

 Hi Alex,

 Riak allocates data to nodes evenly - it doesn't take into account free
 space.  You should just be able to upgrade all nodes to 1Tb and Riak will
 use the space, without needing to take any additional actions.

 Jon

 On Thu, Apr 23, 2015 at 7:17 AM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Hi there,

 I have the following question about scaling up vertically by upgrading HDs
 to get more space. My understanding is that if HDs are of different
 sizes at ring creation, they all get the smallest size among them, making all
 nodes even; for example:

 Node 1: 500GB
 Node 2: 500GB
 Node 3: 750GB
 Node 4: 500GB
 Node 5: 750GB

 In this case Riak would effectively treat every node as 500GB... so... how
 could we scale up the HDs? Imagine I want to turn them into the following:

 Node 1: 1TB
 Node 2: 1TB
 Node 3: 1TB
 Node 4: 1TB
 Node 5: 1TB

 How can you rebalance the cluster so that its partitions grow to fit
 this new setup? (using Riak 2.1)

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 --
 Jon Meredith
 Chief Architect
 Basho Technologies, Inc.
 jmered...@basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


scaling up vertically upgrading HD

2015-04-23 Thread Alex De la rosa
Hi there,

I have the following question about scaling up vertically by upgrading HDs to
get more space. My understanding is that if HDs are of different sizes
at ring creation, they all get the smallest size among them, making all nodes
even; for example:

Node 1: 500GB
Node 2: 500GB
Node 3: 750GB
Node 4: 500GB
Node 5: 750GB

In this case Riak would effectively treat every node as 500GB... so... how could
we scale up the HDs? Imagine I want to turn them into the following:

Node 1: 1TB
Node 2: 1TB
Node 3: 1TB
Node 4: 1TB
Node 5: 1TB

How can you rebalance the cluster so that its partitions grow to fit
this new setup? (using Riak 2.1)

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: scaling up vertically upgrading HD

2015-04-23 Thread Alex De la rosa
Awesome! Thank you very much :)

On Thursday, April 23, 2015, Alexander Sicular sicul...@gmail.com wrote:

 Hi Alex!

 Yes, each node would use an even amount of space regardless of maximum
 disk space available.

 The evenness has to do with Riak's uniform data distribution due to the
 sha1 consistent hashing algorithm. The output of sha1 is a number in the
 range of 0 to 2^160. That range is partitioned into segments called
 vnodes in Riak parlance (see ring_size configuration option). Vnodes, or
 virtual nodes, in turn are equally allocated to physical machines in the
 cluster. Thus, each node in a Riak cluster can be thought of as being
 responsible for 1/n of data stored and 1/n performance where n is the
 number of machines in the cluster.

 I think I use some more and/or better words to describe it here
 https://basho.com/why-riak-just-works/

 -Alexander

 @siculars
 http://siculars.posthaven.com

 Sent from my iRotaryPhone
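
As a rough illustration of the hashing scheme described above (a sketch only, not Riak's actual claim algorithm; the ring size and node count here are arbitrary):

import hashlib

RING_SIZE = 64       # number of partitions (vnodes); compare the ring_size option
NODES = 5            # physical machines in the cluster
RING_TOP = 2 ** 160  # SHA-1 output space

key_hash = int(hashlib.sha1(b'bucket/key').hexdigest(), 16)
partition = key_hash // (RING_TOP // RING_SIZE)  # vnode that owns this key
node = partition % NODES  # vnodes are spread evenly over physical nodes
print(partition, node)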

 On Apr 23, 2015, at 10:42, Alex De la rosa alex.rosa@gmail.com wrote:

 Cool, thank you :) However, is it true that if 4 nodes are 1TB and the other
 one is 2TB, the 5 nodes will act as having 1TB each, right? (I guess that's
 what you mean by "Riak allocates data to nodes evenly".)

 Thanks,
 Alex

 On Thu, Apr 23, 2015 at 4:40 PM, Jon Meredith jmered...@basho.com wrote:

 Hi Alex,

 Riak allocates data to nodes evenly - it doesn't take into account free
 space.  You should just be able to upgrade all nodes to 1Tb and Riak will
 use the space, without needing to take any additional actions.

 Jon

 On Thu, Apr 23, 2015 at 7:17 AM, Alex De la rosa alex.rosa@gmail.com wrote:

 Hi there,

 I have the following question about scaling up vertically upgrading HDs
 to get more space. I have the understanding that if HDs are of different
 sizes on ring creation, they get the smallest size of them all and make all
 nodes even, for example:

 Node 1: 500GB
 Node 2: 500GB
 Node 3: 750GB
 Node 4: 500GB
 Node 5: 750GB

 In this case riak would make a parity of 500GB per node... so... how
 could we scale up the HDs? Imagine I want to turn them into the following:

 Node 1: 1TB
 Node 2: 1TB
 Node 3: 1TB
 Node 4: 1TB
 Node 5: 1TB

 How can you rebalance the cluster in the way their partitions grow to
 fit this new setup? (using Riak 2.1)

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 --
 Jon Meredith
 Chief Architect
 Basho Technologies, Inc.
 jmered...@basho.com


 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: object sizes

2015-04-20 Thread Alex De la rosa
Hi Brett,

Yeah, that was my assumption too: an overhead in RAM for creating
the object structures, etc... That's also why simple objects (raw
binary) give a pretty accurate measure compared to cURL, but
maps/sets/etc... don't.

Exactly, I would like to have a way to know how big the object stored
inside Riak is (using the Python client instead of doing extra
cURL calls), so I can make sure no object bigger than 1MB in storage space
gets saved (and then implement some kind of key-split mechanism when
approaching the limit).

Thanks!
Alex
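
In the meantime, a workable stopgap from Python is to read the Content-Length the HTTP API reports, which reflects the serialized object rather than the in-memory structure (host, bucket type, bucket, and key below are illustrative; the requests library is assumed to be installed):

import requests

resp = requests.head(
    'http://riak-host:8098/types/maps/buckets/my_bucket/keys/my_key')
size = int(resp.headers['Content-Length'])  # serialized size in bytes
if size > 1024 * 1024:
    print('over the 1MB guideline; consider splitting the key')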

On Mon, Apr 20, 2015 at 11:42 PM, Brett Hazen br...@basho.com wrote:

 Alex -

 Looks like Matt created a GitHub issue to track this.
 https://github.com/basho/riak-python-client/issues/403 Thanks!

 It occurs to me that sys.getsizeof() returns the size of the Python Riak
 Object stored in memory which is most certainly not exactly the same as
 what curl reports.  Curl is measuring the JSON across the wire and the
 Python client is converting it into a native format.  There is extra
 information in memory such as indexes into dictionaries and CRDT metadata
 used in maps.

 Just to clarify, you want to know the size of the object stored in Riak as
 opposed to in memory, right?  The 1MB limit is on Riak storage?

 thanks,
 Brett

 On April 17, 2015 at 2:41:56 PM, Alex De la rosa (alex.rosa@gmail.com)
 wrote:

 Hi Matthew,

 I don't have a github account so seems i'm not able to create the ticket
 for this feature, could you do it?

 Thanks,
 Alex

 On Thu, Apr 16, 2015 at 10:08 PM, Alex De la rosa alex.rosa@gmail.com
  wrote:

 Hi Matthew,

 Thanks for your answer : ) I always have interesting questions : P

 About point [2]... if you look at my examples, I'm already using
 sys.getsizeof()... but the sizes are not so accurate; also, I believe that is
 the size they take in RAM when loaded by Python and not the full exact size
 of the object (especially on Maps, which differ quite a bit).

 I will open the ticket then : ) I think it can be very helpful future
 feature.

 Thanks,
 Alex

 On Thu, Apr 16, 2015 at 10:03 PM, Matthew Brender mbren...@basho.com
 wrote:

 Hi Alex,

 That is an interesting question! I haven't seen a request like that in
 our backlog, so feel free to open a new issue [1]. I'm curious: why
 not use something like sys.getsizeof [2]?

 [1] https://github.com/basho/riak-python-client/issues
 [2]
 http://stackoverflow.com/questions/449560/how-do-i-determine-the-size-of-an-object-in-python

 Matt Brender | Developer Advocacy Lead
 Basho Technologies
 t: @mjbrender


 On Mon, Apr 13, 2015 at 7:26 AM, Alex De la rosa
  alex.rosa@gmail.com wrote:
  Hi Bryan,
 
  Thanks for your answer; i don't know how to code in erlang, so all my
 system
  relies on Python.
 
  Following Ciprian's curl suggestion, I tried to compare it with this
 python
  code during the weekend:
 
  Map object:
  curl -I
  1058 bytes
  print sys.getsizeof(obj.value)
  3352 bytes
 
  Standard object:
  curl -I
  9718 bytes
  print sys.getsizeof(obj.encoded_data)
  9755 bytes
 
  The standard object seems pretty accurate in both approaches even the
 image
  binary data was only 5kbs (I assume some overhead here)
 
  The map object is about 3x the difference between curl and getting the
  object via Python.
 
  Not so sure if this is a realistic way to measure their growth
 (moreover
  because the objects i would need this monitorization are Maps, not
 unaltered
  binary data that I can know the size before storing it).
 
  Would it be possible in some way for the Python get() function to return
  something like obj.content-length, giving the size it is currently taking?
  That would be a pretty nice feature.
 
  Thanks!
  Alex
 
  On Mon, Apr 13, 2015 at 12:47 PM, bryan hunt bh...@basho.com wrote:
 
  Alex,
 
 
  Maps and Sets are stored just like a regular Riak object, but using a
  particular data structure and object serialization format. As you have
  observed, there is an overhead, and you want to monitor the growth of
 these
  data structures.
 
  It is possible to write a MapReduce map function (in Erlang) which
  retrieves a provided object by type/bucket/id and returns the size of
 it's
  data. Would such a thing be of use?
 
  It would not be hard to write such a module, and I might even have
 some
  code for doing so if you are interested. There are also reasonably
 good
  examples in our documentation -
  http://docs.basho.com/riak/latest/dev/advanced/mapreduce
 
  I haven't looked at the Python PB API in a while, but I'm reasonably
  certain it supports the invocation of MapReduce jobs.
 
  Bryan
 
 
  On 10 Apr 2015, at 13:51, Alex De la rosa alex.rosa@gmail.com
 wrote:
 
  Also, I forgot, i'm most interested on bucket_types instead of simple
 riak
  buckets. Being able how my mutable data inside a MAP/SET has grown.
 
  For a traditional standard bucket I can calculate the size of what I'm
  sending before, so Riak won't get data bigger

Re: object sizes

2015-04-17 Thread Alex De la rosa
Hi Matthew,

I don't have a GitHub account, so it seems I'm not able to create the ticket
for this feature; could you do it?

Thanks,
Alex

On Thu, Apr 16, 2015 at 10:08 PM, Alex De la rosa alex.rosa@gmail.com
wrote:

 Hi Matthew,

 Thanks for your answer : ) I always have interesting questions : P

 About point [2]... if you look at my examples, I'm already using
 sys.getsizeof()... but the sizes are not so accurate; also, I believe that is
 the size they take in RAM when loaded by Python and not the full exact size
 of the object (especially on Maps, which differ quite a bit).

 I will open the ticket then : ) I think it can be very helpful future
 feature.

 Thanks,
 Alex

 On Thu, Apr 16, 2015 at 10:03 PM, Matthew Brender mbren...@basho.com
 wrote:

 Hi Alex,

 That is an interesting question! I haven't seen a request like that in
 our backlog, so feel free to open a new issue [1]. I'm curious: why
 not use something like sys.getsizeof [2]?

 [1] https://github.com/basho/riak-python-client/issues
 [2]
 http://stackoverflow.com/questions/449560/how-do-i-determine-the-size-of-an-object-in-python

 Matt Brender | Developer Advocacy Lead
 Basho Technologies
 t: @mjbrender


 On Mon, Apr 13, 2015 at 7:26 AM, Alex De la rosa
 alex.rosa@gmail.com wrote:
  Hi Bryan,
 
  Thanks for your answer; i don't know how to code in erlang, so all my
 system
  relies on Python.
 
  Following Ciprian's curl suggestion, I tried to compare it with this
  Python code over the weekend:
 
  Map object:
  curl -I
  1058 bytes
  print sys.getsizeof(obj.value)
  3352 bytes
 
  Standard object:
  curl -I
  9718 bytes
  print sys.getsizeof(obj.encoded_data)
  9755 bytes
 
  The standard object seems pretty accurate in both approaches, even though
  the image binary data was only 5 KB (I assume some overhead here).
 
  The map object shows about a 3x difference between curl and fetching the
  object via Python.
 
  I'm not sure this is a realistic way to measure their growth (especially
  because the objects I need to monitor are Maps, not unaltered binary data
  whose size I can know before storing it).
 
  Would it be possible for the Python get() function to return something
  like obj.content_length, giving the size the object is currently taking?
  That would be a pretty nice feature.
 
  Thanks!
  Alex
 
  On Mon, Apr 13, 2015 at 12:47 PM, bryan hunt bh...@basho.com wrote:
 
  Alex,
 
 
  Maps and Sets are stored just like a regular Riak object, but using a
  particular data structure and object serialization format. As you have
  observed, there is an overhead, and you want to monitor the growth of
 these
  data structures.
 
  It is possible to write a MapReduce map function (in Erlang) which
  retrieves a provided object by type/bucket/id and returns the size of its
  data. Would such a thing be of use?
 
  It would not be hard to write such a module, and I might even have some
  code for doing so if you are interested. There are also reasonably good
  examples in our documentation -
  http://docs.basho.com/riak/latest/dev/advanced/mapreduce
 
  I haven't looked at the Python PB API in a while, but I'm reasonably
  certain it supports the invocation of MapReduce jobs.
 
  Bryan
 
 
  On 10 Apr 2015, at 13:51, Alex De la rosa alex.rosa@gmail.com
 wrote:
 
  Also, I forgot: I'm mostly interested in bucket_types instead of simple
  Riak buckets, i.e. being able to see how much my mutable data inside a
  MAP/SET has grown.
 
  For a traditional standard bucket I can calculate the size of what I'm
  sending before, so Riak won't get data bigger than 1MB. The problem
  arises with MAPS/SETS that can grow.
 
  Thanks,
  Alex
 
  On Fri, Apr 10, 2015 at 2:47 PM, Alex De la rosa 
 alex.rosa@gmail.com
  wrote:
 
  Well... using the HTTP REST API would make no sense when using the PB
  API... it would be extremely costly to maintain, and it may also include
  some extra bytes on the transport.
 
  I would be interested in being able to know the size via Python itself,
  using the PB API as I'm doing.
 
  Thanks anyway,
  Alex
 
  On Fri, Apr 10, 2015 at 1:58 PM, Ciprian Manea cipr...@basho.com
 wrote:
 
  Hi Alex,
 
  You can always query the size of a Riak object using `curl` and the REST
  API:
 
  i.e. curl -I riak-node-ip:8098/buckets/test/keys/demo
 
 
  Regards,
  Ciprian
 
  On Thu, Apr 9, 2015 at 12:11 PM, Alex De la rosa
  alex.rosa@gmail.com wrote:
 
  Hi there,
 
  I'm using the python client (by the way).
 
  obj = RIAK.bucket('my_bucket').get('my_key')
 
  Is there any way to know the actual size of an object stored in Riak,
  to make sure something mutable (like a set) didn't add up to more than
  1MB in storage size?
 
  Thanks!
  Alex
 
  ___
  riak-users mailing list
  riak-users@lists.basho.com
  http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
 
 
 
 
  ___
  riak-users mailing list
  riak-users@lists.basho.com

Re: [Announcement] Riak 2.1 - Features Release Notes

2015-04-16 Thread Alex De la rosa
The write_once property is applied to a bucket type... why only on bucket
types?

ex: RIAK.bucket_type('my_type').bucket('my_bucket').get('my_key')

Normally I use bucket types for mutable data like Maps/Sets/Counters...
so I can update their contents (multiple writes), and when I have static data
I use basic buckets:

ex: RIAK.bucket('my_bucket').get('my_key')

It makes much more sense to me to have write_once capabilities in this kind
of bucket than in a bucket-type bucket. I can imagine using a bucket type
just for indexing data in Solr without changing it... but I think simple
buckets would benefit from this feature even more.

Thanks!
Alex

On Thu, Apr 16, 2015 at 10:40 PM, Matthew Brender mbren...@basho.com
wrote:

 Riak 2.1 is available [1]! Let’s start with the most fun part.


 ## New Feature
 Riak 2.1 introduces the concept of “write once” buckets, buckets whose
 entries are intended to be written exactly once, and never updated or
 over-written. The write_once property is applied to a bucket type and
 may only be set at bucket creation time. This allows Riak to avoid a
 read before write for write_once buckets only. More information, as
 always, is available in the docs [2]
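
 A minimal sketch of how this looks from the Python client, assuming a
 hypothetical bucket type named wo_logs that an operator created with
 write_once set (the property can only be set when the type is created):

 # riak-admin bucket-type create wo_logs '{"props":{"write_once":true}}'
 # riak-admin bucket-type activate wo_logs
 import riak

 client = riak.RiakClient(protocol='pbc', nodes=[{'host': 'x.x.x.x', 'pb_port': 8087}])
 bucket = client.bucket_type('wo_logs').bucket('events')
 bucket.new('event-0001', data={'msg': 'created'}).store()  # single, final write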


 ## Other updates
 There are a number of GitHub Issues closed with the 2.1 release. Some
 noteworthy updates:

 * A nice solution for a corner case that could result in data loss [3]
 * A public API related to riak_core_ring_manager thanks to Darach Ennis!
 [4]
 * A JSON writer for a number of riak_admin commands - see commit for
 details [5]
 * Updates to Yokozuna (Riak’s Solr integration) that include
 additional metrics thanks to Jon Anderson! [6]

 Be sure to see the full Release Notes here [7] and the Product Advisories
 [8].

 ## Upgrading
 Be sure to review documentation [7] before an upgrade. It’s worth
 noting that all nodes in a cluster must be at 2.1 before you set the
 write_once property on a bucket.

 It’s worth noting that there is a known issue with Yokozuna that
 causes entry loss on AAE activity [9]. Please keep this in mind before
 upgrading.


 ## Feedback please
 Do you have a use case where write_once could be helpful? Please reply
 to me directly! I would love to learn about your environment and be
 able to share more details with you.

 Thanks,
 Matt
 Developer Advocate
 twitter.com/mjbrender


 [1] http://docs.basho.com/riak/latest/downloads/
 [2] http://docs.basho.com/riak/latest/dev/advanced/write-once
 [3] https://github.com/basho/riak_kv/issues/679
 [4] https://github.com/basho/riak_core/pull/716
 [5]
 https://github.com/basho/clique/commit/0560e7a135d9a1e77646384681ae88baf0cba31a
 [6] https://github.com/basho/riak_kv/pull/855
 [7] https://github.com/basho/riak/blob/develop/RELEASE-NOTES.md
 [8] http://docs.basho.com/riak/latest/community/product-advisories/
 [9] https://github.com/basho/yokozuna/issues/481

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: object sizes

2015-04-13 Thread Alex De la rosa
Hi Bryan,

Thanks for your answer; I don't know how to code in Erlang, so all my
system relies on Python.

Following Ciprian's curl suggestion, I tried to compare it with this Python
code over the weekend:

Map object:
curl -I
 1058 bytes
print sys.getsizeof(obj.value)
 3352 bytes

Standard object:
curl -I
 9718 bytes
print sys.getsizeof(obj.encoded_data)
 9755 bytes

The standard object seems pretty accurate in both approaches, even though
the image binary data was only 5 KB (I assume some overhead here).

The map object shows about a 3x difference between curl and fetching the
object via Python.

I'm not sure this is a realistic way to measure their growth (especially
because the objects I need to monitor are Maps, not unaltered binary data
whose size I can know before storing it).

Would it be possible for the Python get() function to return something like
obj.content_length, giving the size the object is currently taking? That
would be a pretty nice feature.
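
In the meantime, a rough workaround sketch (host, bucket and key are
hypothetical): the Content-Length header that curl -I prints can also be
read from Python over the HTTP API, even if the rest of the app uses PB:

import requests

r = requests.head('http://riak-node-ip:8098/buckets/test/keys/demo')
print(r.headers.get('content-length'))  # bytes of the value Riak would return

This reports the server-side size of the stored value, which is why it
differs from sys.getsizeof() on the client.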

Thanks!
Alex

On Mon, Apr 13, 2015 at 12:47 PM, bryan hunt bh...@basho.com wrote:

 Alex,


 Maps and Sets are stored just like a regular Riak object, but using a
 particular data structure and object serialization format. As you have
 observed, there is an overhead, and you want to monitor the growth of these
 data structures.

 It is possible to write a MapReduce map function (in Erlang) which
 retrieves a provided object by type/bucket/id and returns the size of its
 data. Would such a thing be of use?

 It would not be hard to write such a module, and I might even have some
 code for doing so if you are interested. There are also reasonably good
 examples in our documentation -
 http://docs.basho.com/riak/latest/dev/advanced/mapreduce

 I haven't looked at the Python PB API in a while, but I'm reasonably
 certain it supports the invocation of MapReduce jobs.

 Bryan
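
 A sketch of how such a job could be invoked from the Python client,
 assuming a hypothetical Erlang module like the one described here (say,
 size_mr with an object_size/3 map function) has been deployed on the nodes:

 from riak.mapreduce import RiakMapReduce

 mr = RiakMapReduce(client)          # client as in the earlier examples
 mr.add('my_bucket', 'my_key')       # the object whose size we want
 mr.map(['size_mr', 'object_size'])  # two-element list = Erlang module/function
 print(mr.run())                     # e.g. [1058]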


 On 10 Apr 2015, at 13:51, Alex De la rosa alex.rosa@gmail.com wrote:

 Also, I forgot: I'm mostly interested in bucket_types instead of simple Riak
 buckets, i.e. being able to see how much my mutable data inside a MAP/SET
 has grown.

 For a traditional standard bucket I can calculate the size of what I'm
 sending before, so Riak won't get data bigger than 1MB. The problem arises
 with MAPS/SETS that can grow.

 Thanks,
 Alex

 On Fri, Apr 10, 2015 at 2:47 PM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Well... using the HTTP REST API would make no sense when using the PB
 API... it would be extremely costly to maintain, and it may also include
 some extra bytes on the transport.

 I would be interested in being able to know the size via Python itself,
 using the PB API as I'm doing.

 Thanks anyway,
 Alex

 On Fri, Apr 10, 2015 at 1:58 PM, Ciprian Manea cipr...@basho.com wrote:

 Hi Alex,

 You can always query the size of a Riak object using `curl` and the REST
 API:

 i.e. curl -I riak-node-ip:8098/buckets/test/keys/demo


 Regards,
 Ciprian

 On Thu, Apr 9, 2015 at 12:11 PM, Alex De la rosa 
 alex.rosa@gmail.com wrote:

 Hi there,

 I'm using the python client (by the way).

 obj = RIAK.bucket('my_bucket').get('my_key')

 Is there any way to know the actual size of an object stored in Riak,
 to make sure something mutable (like a set) didn't add up to more than
 1MB in storage size?

 Thanks!
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


object sizes

2015-04-09 Thread Alex De la rosa
Hi there,

I'm using the python client (by the way).

obj = RIAK.bucket('my_bucket').get('my_key')

Is there any way to know the actual size of an object stored in Riak, to
make sure something mutable (like a set) didn't add up to more than 1MB
in storage size?

Thanks!
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


non-indexable SOLR schema

2015-04-04 Thread Alex De la rosa
Hi all,

To be able to use counters/sets/maps in Riak I have to store the object
into a defined bucket_type indexed via SOLR.

However, this will require extra disk space, as the data will be indexed (if
using the default schema). Can I create a custom schema ignoring all
fields so nothing is indexed? I don't need to use Riak Search on these
objects, as I always know the KEY to fetch them. Would a schema like this
work? (I guess that I can not put [ indexed="false" ] in the _yz* fields
required by Riak, right? Or is it possible to not index that data either?)

—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="schedule" version="1.5">
  <fields>
    <dynamicField name="*" type="ignored" />
    <!-- All of these fields are required by Riak Search -->
    <field name="_yz_id"   type="_yz_str" indexed="true" stored="true" multiValued="false" required="true"/>
    <field name="_yz_ed"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
    <field name="_yz_pn"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
    <field name="_yz_fpn"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
    <field name="_yz_vtag" type="_yz_str" indexed="true" stored="false" multiValued="false"/>
    <field name="_yz_rk"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
    <field name="_yz_rt"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
    <field name="_yz_rb"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
    <field name="_yz_err"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
  </fields>

  <uniqueKey>_yz_id</uniqueKey>

  <types>
    <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
    <!-- YZ String: Used for non-analyzed fields -->
    <fieldType name="_yz_str" class="solr.StrField" sortMissingLast="true" />
  </types>
</schema>
—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—

Another question: can I use [ compressed="true" ] to save disk space, on
both the <dynamicField name="*" type="ignored" /> and the _yz* fields?

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: non-indexable SOLR schema

2015-04-04 Thread Alex De la rosa
How to do that?

From what I have seen, to use these special data types you need to create a
bucket under a bucket_type, and this bucket type is created referencing an
index, so data gets indexed in Solr.

On Saturday, April 4, 2015, Shawn Debnath sh...@debnath.net wrote:

   You do not have to set up Yokozuna (yz) to be able to use counters,
  sets, and maps in Riak. Those types can be used independently via the Riak
  key value store. You only need to set up the indexes if you want to search
  for data via yz or search 2.0. And in that case, you can remove the generic
  mappings, keep the _yz specific ones and then explicitly add the fields you
  want to index/search on.
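
  A minimal sketch of that, with hypothetical names: the bucket type below
  declares a datatype but no search_index property, so nothing is ever sent
  to Solr:

  # riak-admin bucket-type create tp_plain '{"props":{"datatype":"map"}}'
  # riak-admin bucket-type activate tp_plain
  import riak

  client = riak.RiakClient(protocol='pbc', nodes=[{'host': 'x.x.x.x', 'pb_port': 8087}])
  obj = client.bucket_type('tp_plain').bucket('users').new('alex')
  obj.registers['name'].assign('Alex')
  obj.store()  # stored and fetchable by key; no Solr indexing involved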

    On 4/4/15, 7:25 AM, Alex De la rosa alex.rosa@gmail.com wrote:

   Hi all,

  To be able to use counters/sets/maps in Riak I have to store the object
 into a defined bucket_type indexed via SOLR.

   However, this will require extra disk space, as the data will be indexed
  (if using the default schema). Can I create a custom schema ignoring all
  fields so nothing is indexed? I don't need to use Riak Search on these
  objects, as I always know the KEY to fetch them. Would a schema like this
  work? (I guess that I can not put [ indexed="false" ] in the _yz* fields
  required by Riak, right? Or is it possible to not index that data either?)

  —+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—
  <?xml version="1.0" encoding="UTF-8" ?>
  <schema name="schedule" version="1.5">
    <fields>
      <dynamicField name="*" type="ignored" />
      <!-- All of these fields are required by Riak Search -->
      <field name="_yz_id"   type="_yz_str" indexed="true" stored="true" multiValued="false" required="true"/>
      <field name="_yz_ed"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_pn"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_fpn"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_vtag" type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_rk"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
      <field name="_yz_rt"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
      <field name="_yz_rb"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
      <field name="_yz_err"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
    </fields>

    <uniqueKey>_yz_id</uniqueKey>

    <types>
      <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
      <!-- YZ String: Used for non-analyzed fields -->
      <fieldType name="_yz_str" class="solr.StrField" sortMissingLast="true" />
    </types>
  </schema>
  —+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—

  Another question: can I use [ compressed="true" ] to save disk space, on
 both the <dynamicField name="*" type="ignored" /> and the _yz* fields?

  Thanks,
 Alex


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: non-indexable SOLR schema

2015-04-04 Thread Alex De la rosa
Uhm, interesting, I didn't see that page; I believe the one I saw was about
using Riak Search, and that's where my assumption about needing to create an
index to make a bucket_type came from.

Much easier though! Thanks!

Alex

On Saturday, April 4, 2015, Shawn Debnath sh...@debnath.net wrote:

   Riak does not in any way rely on Solr for its KV operations. Not sure
 where you are seeing that or what code you are looking at, but you can
 define bucket types and activate them without ever touching Solr. The basic
 bucket type set up instructions can be found here:
 http://docs.basho.com/riak/latest/dev/advanced/bucket-types/.

    On 4/4/15, 8:51 AM, Alex De la rosa alex.rosa@gmail.com wrote:

   How to do that?

   From what I have seen, to use these special data types you need to create
  a bucket under a bucket_type, and this bucket type is created referencing
  an index, so data gets indexed in Solr.

  On Saturday, April 4, 2015, Shawn Debnath sh...@debnath.net wrote:

   You do not have to set up Yokozuna (yz) to be able to use counters,
  sets, and maps in Riak. Those types can be used independently via the Riak
  key value store. You only need to set up the indexes if you want to search
  for data via yz or search 2.0. And in that case, you can remove the generic
  mappings, keep the _yz specific ones and then explicitly add the fields you
  want to index/search on.

   On 4/4/15, 7:25 AM, Alex De la rosa alex.rosa@gmail.com wrote:

   Hi all,

  To be able to use counters/sets/maps in Riak I have to store the object
 into a defined bucket_type indexed via SOLR.

   However, this will require extra disk space, as the data will be indexed
  (if using the default schema). Can I create a custom schema ignoring all
  fields so nothing is indexed? I don't need to use Riak Search on these
  objects, as I always know the KEY to fetch them. Would a schema like this
  work? (I guess that I can not put [ indexed="false" ] in the _yz* fields
  required by Riak, right? Or is it possible to not index that data either?)

  —+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—
  <?xml version="1.0" encoding="UTF-8" ?>
  <schema name="schedule" version="1.5">
    <fields>
      <dynamicField name="*" type="ignored" />
      <!-- All of these fields are required by Riak Search -->
      <field name="_yz_id"   type="_yz_str" indexed="true" stored="true" multiValued="false" required="true"/>
      <field name="_yz_ed"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_pn"   type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_fpn"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_vtag" type="_yz_str" indexed="true" stored="false" multiValued="false"/>
      <field name="_yz_rk"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
      <field name="_yz_rt"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
      <field name="_yz_rb"   type="_yz_str" indexed="true" stored="true" multiValued="false"/>
      <field name="_yz_err"  type="_yz_str" indexed="true" stored="false" multiValued="false"/>
    </fields>

    <uniqueKey>_yz_id</uniqueKey>

    <types>
      <fieldType name="ignored" stored="false" indexed="false" multiValued="true" class="solr.StrField" />
      <!-- YZ String: Used for non-analyzed fields -->
      <fieldType name="_yz_str" class="solr.StrField" sortMissingLast="true" />
    </types>
  </schema>
  —+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—+—

   Another question: can I use [ compressed="true" ] to save disk space, on
  both the <dynamicField name="*" type="ignored" /> and the _yz* fields?

  Thanks,
 Alex


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Packagecloud.io issue

2015-04-02 Thread Alex De la rosa
Awesome, I just re-ran aptitude update and aptitude safe-upgrade
and riak didn't come up again :)

Thanks!
Alex

On Wed, Apr 1, 2015 at 10:25 PM, Greg Cymbalski gcymbal...@basho.com
wrote:

 Alex-
   Turns out that there was an indexing issue with Packagecloud, which
 should be resolved now. Good news, no need to respin packages :)

   Thanks,
 —Greg

 On Apr 1, 2015, at 11:38 AM, Greg Cymbalski gcymbal...@basho.com wrote:

 Alex-
   We’ve managed to reproduce that issue and are researching it now. It
 does appear to be something in the package itself; not sure how that didn’t
 turn up until now.

   We’ll keep you updated. Thanks!
 —Greg

 On Mar 30, 2015, at 9:00 AM, riak-users-requ...@lists.basho.com wrote:

 Date: Mon, 30 Mar 2015 11:55:41 +0200
 From: Alex De la rosa alex.rosa@gmail.com
 To: riak-users riak-users@lists.basho.com
 Subject: Packagecloud.io issue
 Message-ID:
 CAPphDGoinadFe+qBSaLPzf5z_8XJx1kAfXcGfc=o4t1aea6...@mail.gmail.com
 Content-Type: text/plain; charset=utf-8

 Hi there,

 I want to report again a problem I have with packagecloud.io; every time I
 do aptitude update I get a hit as if a new version of Riak were available
 (it is not)... and I have to re-install a Riak version I already have.

 I just did an aptitude safe-upgrade and upgraded Riak to version 2.0.5
 (latest):

 # aptitude safe-upgrade
 The following packages will be upgraded:
  ... ... ... riak ... ... ... ... ... ...
 [...]
 Unpacking riak (2.0.5-1) over (2.0.1) ...
 [...]

 # riak version
 2.0.5

 # aptitude update
 Hit https://packagecloud.io trusty InRelease
 Hit https://packagecloud.io trusty/main Sources
 Hit https://packagecloud.io trusty/main amd64 Packages
 Hit https://packagecloud.io trusty/main i386 Packages

 # aptitude safe-upgrade
 The following packages will be upgraded:
  riak
 1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
 Need to get 0 B/55.4 MB of archives. After unpacking 0 B will be used.
 Do you want to continue? [Y/n/?]

 if i select Y i get the following:

 Unpacking riak (2.0.5-1) over (2.0.5-1) ...

 and every time I check for updates, riak shows up as requiring an update
 although that is not true. How can I fix that? It makes no sense that it
 keeps coming up as needing an upgrade when it is the exact same version.

 Thanks,
 Alex




 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Query on Riak Search in a cluster of 3 nodes behind ELB is giving different result everytime

2015-04-01 Thread Alex De la rosa
Oh, ok. Cool to know :) thanks

On Wed, Apr 1, 2015 at 10:06 AM, Vitaliy E 13vitam...@gmail.com wrote:

 Hi Alex,

 riak-admin member-status is the command

 (N - 1) nodes were showing all N nodes as part of the cluster, but Ring
 Ready: false at the same time. One node was showing just itself in
 riak-admin member-status. Strangely, when shut down, its status was
 reflected in the admin console of the cluster, but no replication was done,
 so I'm sure it wasn't working together with the rest of the nodes.

 Repaired by joining the cluster as if it was never attempted before:

 riak-admin cluster join node
 riak-admin cluster plan
 riak-admin cluster commit

 Regards,
 Vitaliy



 On Wed, Apr 1, 2015 at 10:54 AM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Hi Vitaliy,

 How did you find out a node in the cluster was not part of the cluster?
 Any commands to check that? And then, how did you fix it? (Just curious
 and for future reference)

 Thanks,
 Alex

 On Wed, Apr 1, 2015 at 9:50 AM, Vitaliy E 13vitam...@gmail.com wrote:

 Hello everyone,

 I've just joined the list, and am a bit late to the party. Sorry about
 that. Thought I would contribute an answer anyway.

 Santi, what is your n_val?

 I observed the behavior you are describing on Riak 2.0.0 with n_val=3 in
 two cases:

 1. One of the nodes was not part of the cluster although the cluster
 thought it was. Don't ask me how that happened. Obviously, when a request
 hit that node, part of the entries could not be found there.

 2. Look for indexing errors in Solr console and Riak logs. Each Riak
 node has its own Solr repository, so if an entry fails to be indexed on
 any of them, search results will be inconsistent depending on which set of
 nodes returns it. Let's say you have replicas on nodes A, B, and C. Entry X
 failed to be indexed on A, entry Y failed to be indexed on A and B, and
 entry Z was indexed OK on all nodes. Then you may get {X,Y,Z}, {X,Z}, or
 {Z} as your search results.

 In our case the indexing failures were caused by disk/filesystem errors.

 Regards,
 Vitaly

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Query on Riak Search in a cluster of 3 nodes behind ELB is giving different result everytime

2015-04-01 Thread Alex De la rosa
Hi Vitaliy,

How did you find out a node in the cluster was not part of the cluster? Any
commands to check that? And then, how did you fix it? (Just curious and
for future reference)

Thanks,
Alex

On Wed, Apr 1, 2015 at 9:50 AM, Vitaliy E 13vitam...@gmail.com wrote:

 Hello everyone,

 I've just joined the list, and am a bit late to the party. Sorry about
 that. Thought I would contribute an answer anyway.

 Santi, what is your n_val?

 I observed the behavior you are describing on Riak 2.0.0 with n_val=3 in
 two cases:

 1. One of the nodes was not part of the cluster although the cluster
 thought it was. Don't ask me how that happened. Obviously, when a request
 hit that node, part of the entries could not be found there.

 2. Look for indexing errors in Solr console and Riak logs. Each Riak node
 has its own Solr repository, so if an entry fails to be indexed on any of
 them, search results will be inconsistent depending on which set of nodes
 returns it. Let's say you have replicas on nodes A, B, and C. Entry X
 failed to be indexed on A, entry Y failed to be indexed on A and B, and
 entry Z was indexed OK on all nodes. Then you may get {X,Y,Z}, {X,Z}, or
 {Z} as your search results.

 In our case the indexing failures were caused by disk/filesystem errors.

 Regards,
 Vitaly

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Packagecloud.io issue

2015-04-01 Thread Alex De la rosa
Good to know it will finally get fixed : )

[not sure how that didn’t turn up until now] --- I actually reported it
several times in the past:

August 2014:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2014-August/015702.html

September 2014:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2014-September/015884.html

and now again in March 2015 : P

Thanks,
Alex

On Wed, Apr 1, 2015 at 8:38 PM, Greg Cymbalski gcymbal...@basho.com wrote:

 Alex-
   We’ve managed to reproduce that issue and are researching it now. It
 does appear to be something in the package itself; not sure how that didn’t
 turn up until now.

   We’ll keep you updated. Thanks!
 —Greg

 On Mar 30, 2015, at 9:00 AM, riak-users-requ...@lists.basho.com wrote:

 Date: Mon, 30 Mar 2015 11:55:41 +0200
 From: Alex De la rosa alex.rosa@gmail.com
 To: riak-users riak-users@lists.basho.com
 Subject: Packagecloud.io issue
 Message-ID:
 CAPphDGoinadFe+qBSaLPzf5z_8XJx1kAfXcGfc=o4t1aea6...@mail.gmail.com
 Content-Type: text/plain; charset=utf-8


 Hi there,

 I want to report again a problem I have with packagecloud.io; every time I
 do aptitude update I get a hit as if a new version of Riak were available
 (it is not)... and I have to re-install a Riak version I already have.

 I just did an aptitude safe-upgrade and upgraded Riak to version 2.0.5
 (latest):

 # aptitude safe-upgrade
 The following packages will be upgraded:
  ... ... ... riak ... ... ... ... ... ...
 [...]
 Unpacking riak (2.0.5-1) over (2.0.1) ...
 [...]

 # riak version
 2.0.5

 # aptitude update
 Hit https://packagecloud.io trusty InRelease
 Hit https://packagecloud.io trusty/main Sources
 Hit https://packagecloud.io trusty/main amd64 Packages
 Hit https://packagecloud.io trusty/main i386 Packages

 # aptitude safe-upgrade
 The following packages will be upgraded:
  riak
 1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
 Need to get 0 B/55.4 MB of archives. After unpacking 0 B will be used.
 Do you want to continue? [Y/n/?]

 if i select Y i get the following:

 Unpacking riak (2.0.5-1) over (2.0.5-1) ...

 and every time I check for updates, riak shows up as requiring an update
 although that is not true. How can I fix that? It makes no sense that it
 keeps coming up as needing an upgrade when it is the exact same version.

 Thanks,
 Alex



 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Packagecloud.io issue

2015-03-30 Thread Alex De la rosa
Hi there,

I want to report again a problem I have with packagecloud.io; every time I
do aptitude update I get a hit as if a new version of Riak were available
(it is not)... and I have to re-install a Riak version I already have.

I just did an aptitude safe-upgrade and upgraded Riak to version 2.0.5
(latest):

# aptitude safe-upgrade
The following packages will be upgraded:
  ... ... ... riak ... ... ... ... ... ...
[...]
Unpacking riak (2.0.5-1) over (2.0.1) ...
[...]

# riak version
2.0.5

# aptitude update
Hit https://packagecloud.io trusty InRelease
Hit https://packagecloud.io trusty/main Sources
Hit https://packagecloud.io trusty/main amd64 Packages
Hit https://packagecloud.io trusty/main i386 Packages

# aptitude safe-upgrade
The following packages will be upgraded:
  riak
1 packages upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/55.4 MB of archives. After unpacking 0 B will be used.
Do you want to continue? [Y/n/?]

if i select Y i get the following:

Unpacking riak (2.0.5-1) over (2.0.5-1) ...

and every time I check for updates, riak shows up as requiring an update
although that is not true. How can I fix that? It makes no sense that it
keeps coming up as needing an upgrade when it is the exact same version.

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: How to refresh Riak's data

2015-03-30 Thread Alex De la rosa
Cool, but only this one?

/var/lib/riak/leveldb

Is there anything else to delete for SOLR indexes, etc...?

Thanks,
Alex

On Mon, Mar 30, 2015 at 4:14 PM, Alexander Sicular sicul...@gmail.com
wrote:

 Hi Alex,

 It basically works the same way. Shut down riak. Locate the data folder
 and delete all the stuff in it.

 -Alexander

 @siculars
 http://siculars.posthaven.com

 Sent from my iRotaryPhone

  On Mar 30, 2015, at 04:41, Alex De la rosa alex.rosa@gmail.com
 wrote:
 
  Hi there,
 
  I have a 1-node riak 2.0.5 cluster for testing stuff on my laptop
 (Ubuntu); how can I refresh the node without having to uninstall and
 install it again?
 
  I remember in riak 0.14 there was a way to do it by stopping the node,
 deleting some folders and restarting the node. How do I do it for 2.0.5?
 I'm using the LevelDB backend.
 
  Thanks!
  Alex
  ___
  riak-users mailing list
  riak-users@lists.basho.com
  http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


deleting a register (Python client)

2014-12-19 Thread Alex De la rosa
Hi there,

This is a pretty dumb question, but I don't think I have ever done it
before. Imagine that I have a register called something with a value:

obj.registers['something'].assign('blah')

If later on I want to remove this something register from the riak map
object, how do I do it? I can't seem to find anywhere in the documentation
how to remove a register.

I could set an empty string, obj.registers['something'].assign(''), as I
see that when you fetch a register that doesn't exist it returns an empty
string instead of None. Is this the only way? Or can we remove it in some
other way?

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: deleting a register (Python client)

2014-12-19 Thread Alex De la rosa
Cool! Good to know :) I think it is not explained anywhere in the docs.

Thanks!
Alex

On Fri, Dec 19, 2014 at 3:50 PM, Sean Cribbs s...@basho.com wrote:

 Alex,

 This will remove the register from the map:

 del obj.registers['something']
 obj.store()
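
  An end-to-end sketch of that flow (client, bucket type and key here are
  hypothetical, assuming a map bucket type like the ones in other threads):

  obj = client.bucket_type('tp_users').bucket('users').get('some_key')
  obj.registers['something'].assign('blah')
  obj.store()
  del obj.registers['something']  # removes the register from the map
  obj.store()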


 On Fri, Dec 19, 2014 at 5:37 AM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Hi there,

  This is a pretty dumb question, but I don't think I have ever done it
  before. Imagine that I have a register called something with a value:

  obj.registers['something'].assign('blah')

  If later on I want to remove this something register from the riak map
  object, how do I do it? I can't seem to find anywhere in the documentation
  how to remove a register.

  I could set an empty string, obj.registers['something'].assign(''), as I
  see that when you fetch a register that doesn't exist it returns an empty
  string instead of None. Is this the only way? Or can we remove it in some
  other way?

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: deleting a register (Python client)

2014-12-19 Thread Alex De la rosa
My bad, it seems it is explained here:

http://riak-python-client.readthedocs.org/en/master/datatypes.html

By the way, what's the difference between these calls?

obj.registers['user'].assign("alex")
obj.registers['user'].set("alex")
obj.registers['user'].set_value("alex")

Thanks,
Alex

On Fri, Dec 19, 2014 at 4:31 PM, Alex De la rosa alex.rosa@gmail.com
wrote:

  Cool! Good to know :) I think it is not explained anywhere in the docs.

 Thanks!
 Alex

 On Fri, Dec 19, 2014 at 3:50 PM, Sean Cribbs s...@basho.com wrote:

 Alex,

 This will remove the register from the map:

 del obj.registers['something']
 obj.store()


 On Fri, Dec 19, 2014 at 5:37 AM, Alex De la rosa alex.rosa@gmail.com
  wrote:

 Hi there,

  This is a pretty dumb question, but I don't think I have ever done it
  before. Imagine that I have a register called something with a value:

  obj.registers['something'].assign('blah')

  If later on I want to remove this something register from the riak map
  object, how do I do it? I can't seem to find anywhere in the documentation
  how to remove a register.

  I could set an empty string, obj.registers['something'].assign(''), as I
  see that when you fetch a register that doesn't exist it returns an empty
  string instead of None. Is this the only way? Or can we remove it in some
  other way?

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects being on an indexed bucket_type

2014-12-18 Thread Alex De la rosa
Hi Sean,

I wonder how the Python client progress is going? I haven't seen any news
about it and PIP doesn't seem to have a new version... when will the delete
bug be fixed?

Thanks,
Alex

On Mon, Nov 17, 2014 at 4:53 PM, Alex De la rosa alex.rosa@gmail.com
wrote:

 Awesome, thanks :)

 On Mon, Nov 17, 2014 at 4:52 PM, Sean Cribbs s...@basho.com wrote:

 I'll confer with Brett, who is wrapping up some Python 3 compatibility,
 another release is needed soon.

 On Mon, Nov 17, 2014 at 1:12 AM, Alex De la rosa alex.rosa@gmail.com
  wrote:

  Yeah! It worked this time :) Thanks! Any idea when a new release of the
  Python client with that bug fixed is coming?

 Thanks,
 Alex

 On Mon, Nov 17, 2014 at 2:02 AM, Sean Cribbs s...@basho.com wrote:

 Sorry, I made a mistake in the example. Try this:

 RiakObject(bucket._client, bucket, 'testkey').delete()

 On Sun, Nov 16, 2014 at 3:15 PM, Alex De la rosa 
 alex.rosa@gmail.com wrote:

 Hi Sean,

  Seems the suggested workaround hits the same error:

 Traceback (most recent call last):
   File x.py, line 9, in module
 RiakObject(bucket, 'testkey').delete()
   File /usr/local/lib/python2.7/dist-packages/riak/riak_object.py,
 line 335, in delete
 timeout=timeout)
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line
 539, in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

 Thanks,
 Alex

 On Sun, Nov 16, 2014 at 8:02 PM, Sean Cribbs s...@basho.com wrote:

 Hi Alex,

 That's a bug in the Python client. There's an existing issue on the
 repo for it: https://github.com/basho/riak-python-client/issues/365

 In the meantime, here's a workaround:

 from riak.riak_object import RiakObject

 RiakObject(bucket, 'testkey').delete()

 Sorry for the inconvenience.

 On Sat, Nov 15, 2014 at 5:54 PM, Alex De la rosa 
 alex.rosa@gmail.com wrote:

 Hi there,

 I created an index and a MAP bucket-type in the following way:

  curl -XPUT "http://x.x.x.x:8098/search/index/ix_users"
  riak-admin bucket-type create tp_users '{"props":
  {"allow_mult":true,"search_index":"ix_users","datatype":"map"}}'
 riak-admin bucket-type activate tp_users

  Then I saved some data and it's working fine; but when I try to delete
  a key, I get a nasty error. What am I doing wrong?

 import riak

 client = riak.RiakClient(protocol = 'pbc', nodes = [{'host':
 'x.x.x.x', 'http_port': 8098, 'pb_port': 8087}])
 bucket = client.bucket_type('tp_users').bucket('users')
 bucket.delete('testkey')

 Output of the script:

 Traceback (most recent call last):
   File x.py, line 6, in module
 bucket.delete('testkey')
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line
 539, in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

  These are my Riak and Python client versions:

 ~ # pip show riak
 ---
 Name: riak
 Version: 2.1.0
 Location: /usr/local/lib/python2.7/dist-packages
 Requires: riak-pb, pyOpenSSL

 ~ # riak version
 2.0.2

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/





 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/





 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects being on an indexed bucket_type

2014-12-18 Thread Alex De la rosa
Awesome! Thank you so much :)

Alex

On Friday, December 19, 2014, Brett Hazen br...@basho.com wrote:

 Alex —

 The new version was released today, including this feature.  Announcement
 to follow.

 Brett

  On December 18, 2014 at 4:43:10 AM, Alex De la rosa (alex.rosa@gmail.com) wrote:

 Hi Sean,

  I wonder how the Python client progress is going? I haven't seen any news
  about it and PIP doesn't seem to have a new version... when will the
  delete bug be fixed?

 Thanks,
 Alex

  On Mon, Nov 17, 2014 at 4:53 PM, Alex De la rosa alex.rosa@gmail.com wrote:

 Awesome, thanks :)

  On Mon, Nov 17, 2014 at 4:52 PM, Sean Cribbs s...@basho.com wrote:

 I'll confer with Brett, who is wrapping up some Python 3 compatibility,
 another release is needed soon.

  On Mon, Nov 17, 2014 at 1:12 AM, Alex De la rosa alex.rosa@gmail.com wrote:

  Yeah! It worked this time :) Thanks! Any idea when a new release of the
  Python client with that bug fixed is coming?

 Thanks,
 Alex

  On Mon, Nov 17, 2014 at 2:02 AM, Sean Cribbs s...@basho.com wrote:

 Sorry, I made a mistake in the example. Try this:

 RiakObject(bucket._client, bucket, 'testkey').delete()

  On Sun, Nov 16, 2014 at 3:15 PM, Alex De la rosa alex.rosa@gmail.com wrote:

 Hi Sean,

  Seems the suggested workaround hits the same error:

 Traceback (most recent call last):
   File x.py, line 9, in module
 RiakObject(bucket, 'testkey').delete()
   File /usr/local/lib/python2.7/dist-packages/riak/riak_object.py,
 line 335, in delete
 timeout=timeout)
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line
 539, in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

 Thanks,
 Alex

  On Sun, Nov 16, 2014 at 8:02 PM, Sean Cribbs s...@basho.com wrote:

 Hi Alex,

 That's a bug in the Python client. There's an existing issue on the
 repo for it: https://github.com/basho/riak-python-client/issues/365

 In the meantime, here's a workaround:

 from riak.riak_object import RiakObject

 RiakObject(bucket, 'testkey').delete()

 Sorry for the inconvenience.

   On Sat, Nov 15, 2014 at 5:54 PM, Alex De la rosa alex.rosa@gmail.com wrote:

   Hi there,

 I created an index and a MAP bucket-type in the following way:

  curl -XPUT "http://x.x.x.x:8098/search/index/ix_users"
  riak-admin bucket-type create tp_users '{"props":
  {"allow_mult":true,"search_index":"ix_users","datatype":"map"}}'
 riak-admin bucket-type activate tp_users

  Then I saved some data and it's working fine; but when I try to
  delete a key, I get a nasty error. What am I doing wrong?

 import riak

 client = riak.RiakClient(protocol = 'pbc', nodes = [{'host':
 'x.x.x.x', 'http_port': 8098, 'pb_port': 8087}])
 bucket = client.bucket_type('tp_users').bucket('users')
 bucket.delete('testkey')

 Output of the script:

 Traceback (most recent call last):
   File x.py, line 6, in module
 bucket.delete('testkey')
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py,
 line 539, in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

  These are my Riak and Python client versions:

 ~ # pip show riak
 ---
 Name: riak
 Version: 2.1.0
 Location: /usr/local/lib/python2.7/dist-packages
 Requires: riak-pb, pyOpenSSL

 ~ # riak version
 2.0.2

 Thanks,
 Alex

  ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/





 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/





 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/


___
 riak-users mailing list
 riak-users@lists.basho.com
 javascript:_e(%7B%7D,'cvml','riak-users@lists.basho.com');
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects being on an indexed bucket_type

2014-11-17 Thread Alex De la rosa
Awesome, thanks :)

On Mon, Nov 17, 2014 at 4:52 PM, Sean Cribbs s...@basho.com wrote:

 I'll confer with Brett, who is wrapping up some Python 3 compatibility,
 another release is needed soon.

 On Mon, Nov 17, 2014 at 1:12 AM, Alex De la rosa alex.rosa@gmail.com
 wrote:

  Yeah! It worked this time :) Thanks! Any idea when a new release of the
  Python client with that bug fixed is coming?

 Thanks,
 Alex

 On Mon, Nov 17, 2014 at 2:02 AM, Sean Cribbs s...@basho.com wrote:

 Sorry, I made a mistake in the example. Try this:

 RiakObject(bucket._client, bucket, 'testkey').delete()

 On Sun, Nov 16, 2014 at 3:15 PM, Alex De la rosa 
 alex.rosa@gmail.com wrote:

 Hi Sean,

  Seems the suggested workaround hits the same error:

 Traceback (most recent call last):
   File x.py, line 9, in module
 RiakObject(bucket, 'testkey').delete()
   File /usr/local/lib/python2.7/dist-packages/riak/riak_object.py,
 line 335, in delete
 timeout=timeout)
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line
 539, in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

 Thanks,
 Alex

 On Sun, Nov 16, 2014 at 8:02 PM, Sean Cribbs s...@basho.com wrote:

 Hi Alex,

 That's a bug in the Python client. There's an existing issue on the
 repo for it: https://github.com/basho/riak-python-client/issues/365

 In the meantime, here's a workaround:

 from riak.riak_object import RiakObject

 RiakObject(bucket, 'testkey').delete()

 Sorry for the inconvenience.

 On Sat, Nov 15, 2014 at 5:54 PM, Alex De la rosa 
 alex.rosa@gmail.com wrote:

 Hi there,

 I created an index and a MAP bucket-type in the following way:

  curl -XPUT "http://x.x.x.x:8098/search/index/ix_users"
  riak-admin bucket-type create tp_users '{"props":
  {"allow_mult":true,"search_index":"ix_users","datatype":"map"}}'
 riak-admin bucket-type activate tp_users

  Then I saved some data and it's working fine; but when I try to delete
  a key, I get a nasty error. What am I doing wrong?

 import riak

 client = riak.RiakClient(protocol = 'pbc', nodes = [{'host':
 'x.x.x.x', 'http_port': 8098, 'pb_port': 8087}])
 bucket = client.bucket_type('tp_users').bucket('users')
 bucket.delete('testkey')

 Output of the script:

 Traceback (most recent call last):
   File x.py, line 6, in module
 bucket.delete('testkey')
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line
 539, in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

  These are my Riak and Python client versions:

 ~ # pip show riak
 ---
 Name: riak
 Version: 2.1.0
 Location: /usr/local/lib/python2.7/dist-packages
 Requires: riak-pb, pyOpenSSL

 ~ # riak version
 2.0.2

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/





 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/





 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects being on an indexed bucket_type

2014-11-16 Thread Alex De la rosa
Hi Sean,

Seems the suggested workaround hits the same error:

Traceback (most recent call last):
  File x.py, line 9, in module
RiakObject(bucket, 'testkey').delete()
  File /usr/local/lib/python2.7/dist-packages/riak/riak_object.py, line
335, in delete
timeout=timeout)
  File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line 539,
in delete
return self.new(key).delete(**kwargs)
AttributeError: 'Map' object has no attribute 'delete'

Thanks,
Alex

On Sun, Nov 16, 2014 at 8:02 PM, Sean Cribbs s...@basho.com wrote:

 Hi Alex,

 That's a bug in the Python client. There's an existing issue on the repo
 for it: https://github.com/basho/riak-python-client/issues/365

 In the meantime, here's a workaround:

 from riak.riak_object import RiakObject

 RiakObject(bucket, 'testkey').delete()

 Sorry for the inconvenience.

 On Sat, Nov 15, 2014 at 5:54 PM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Hi there,

 I created an index and a MAP bucket-type in the following way:

  curl -XPUT "http://x.x.x.x:8098/search/index/ix_users"
  riak-admin bucket-type create tp_users '{"props":
  {"allow_mult":true,"search_index":"ix_users","datatype":"map"}}'
 riak-admin bucket-type activate tp_users

  Then I saved some data and it's working fine; but when I try to delete a
  key, I get a nasty error. What am I doing wrong?

 import riak

 client = riak.RiakClient(protocol = 'pbc', nodes = [{'host': 'x.x.x.x',
 'http_port': 8098, 'pb_port': 8087}])
 bucket = client.bucket_type('tp_users').bucket('users')
 bucket.delete('testkey')

 Output of the script:

 Traceback (most recent call last):
   File x.py, line 6, in module
 bucket.delete('testkey')
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line 539,
 in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

  These are my Riak and Python client versions:

 ~ # pip show riak
 ---
 Name: riak
 Version: 2.1.0
 Location: /usr/local/lib/python2.7/dist-packages
 Requires: riak-pb, pyOpenSSL

 ~ # riak version
 2.0.2

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Can't delete objects being on an indexed bucket_type

2014-11-16 Thread Alex De la rosa
Yeah! It worked this time :) Thanks! Any idea when a new release of the
Python client with that bug fixed is coming?

Thanks,
Alex

On Mon, Nov 17, 2014 at 2:02 AM, Sean Cribbs s...@basho.com wrote:

 Sorry, I made a mistake in the example. Try this:

 RiakObject(bucket._client, bucket, 'testkey').delete()

 On Sun, Nov 16, 2014 at 3:15 PM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Hi Sean,

  Seems the suggested workaround hits the same error:

 Traceback (most recent call last):
   File x.py, line 9, in module
 RiakObject(bucket, 'testkey').delete()
   File /usr/local/lib/python2.7/dist-packages/riak/riak_object.py, line
 335, in delete
 timeout=timeout)
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line 539,
 in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

 Thanks,
 Alex

 On Sun, Nov 16, 2014 at 8:02 PM, Sean Cribbs s...@basho.com wrote:

 Hi Alex,

 That's a bug in the Python client. There's an existing issue on the repo
 for it: https://github.com/basho/riak-python-client/issues/365

 In the meantime, here's a workaround:

 from riak.riak_object import RiakObject

 RiakObject(bucket, 'testkey').delete()

 Sorry for the inconvenience.

 On Sat, Nov 15, 2014 at 5:54 PM, Alex De la rosa 
 alex.rosa@gmail.com wrote:

 Hi there,

 I created an index and a MAP bucket-type in the following way:

  curl -XPUT "http://x.x.x.x:8098/search/index/ix_users"
  riak-admin bucket-type create tp_users '{"props":
  {"allow_mult":true,"search_index":"ix_users","datatype":"map"}}'
 riak-admin bucket-type activate tp_users

  Then I saved some data and it's working fine; but when I try to delete a
  key, I get a nasty error. What am I doing wrong?

 import riak

 client = riak.RiakClient(protocol = 'pbc', nodes = [{'host': 'x.x.x.x',
 'http_port': 8098, 'pb_port': 8087}])
 bucket = client.bucket_type('tp_users').bucket('users')
 bucket.delete('testkey')

 Output of the script:

 Traceback (most recent call last):
   File x.py, line 6, in module
 bucket.delete('testkey')
   File /usr/local/lib/python2.7/dist-packages/riak/bucket.py, line
 539, in delete
 return self.new(key).delete(**kwargs)
 AttributeError: 'Map' object has no attribute 'delete'

  These are my Riak and Python client versions:

 ~ # pip show riak
 ---
 Name: riak
 Version: 2.1.0
 Location: /usr/local/lib/python2.7/dist-packages
 Requires: riak-pb, pyOpenSSL

 ~ # riak version
 2.0.2

 Thanks,
 Alex

 ___
 riak-users mailing list
 riak-users@lists.basho.com
 http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com




 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/





 --
 Sean Cribbs s...@basho.com
 Sr. Software Engineer
 Basho Technologies, Inc.
 http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Custom data-types

2014-09-06 Thread Alex De la rosa
Hi there,

Can somebody explain the use of custom search schemas? I still don't get
why I would want a custom schema if the default schema seems to be able
to get me the info of all the fields I have in my object.

Thanks!
Alex


On Fri, Aug 29, 2014 at 4:48 PM, Alex De la rosa alex.rosa@gmail.com
wrote:

 Hi Sean,

  Seems I was wrong; that makes total sense now that you've explained it. It
  looked like too good a feature to me, but it seems it's not that easy.

  By the way, how do schemas really work for Riak Search? I went back
  and read the documentation but didn't see a real difference from using the
  default schema.

 Thanks!
 Alex


 On Fri, Aug 29, 2014 at 3:36 PM, Sean Cribbs s...@basho.com wrote:

 Alex,

 In short, no, you can't create custom types through schemas. Schemas
 currently only refer to Riak Search 2.

 We would love that too, but it hasn't happened yet. The problem is not
 conceiving of a data type but making its behavior both sensible and
 convergent in the face of concurrent activity or network partitions.
 For instance, say that two tweets come in around the same time. Who
 goes first in the stack you described? How can multiple independent
 copies reason about which ones to drop from the bottom of the stack to
 keep it bounded to 100? What happens if a replica is separated from
 the others for a while and has really stale entries, is it valid to
 serve those to a user? What happens when one replica pushes an element
 and another one pops it at the same time?

 These sound like they might be trivial problems, but they are
 incredibly hard to reason about in the general case. You have to
 reason about the ordering of events, the scope of their effects, and
 decide on a least-surprising behavior to expose to the user. Although
 we have given a pretty familiar/friendly interface to the data types
 shipping in 2.0, their behavior is strictly different from the types
 you would use in a single-threaded program in local memory.

 On Thu, Aug 28, 2014 at 4:47 PM, Alex De la rosa
 alex.rosa@gmail.com wrote:
  Hi there,
 
  Correct me if I'm wrong, but I think I read somewhere that custom
  data-types can be created through schemas or something like that. So, apart
  from COUNTERS, SETS and MAPS we could have some custom defined ones.
 
  I would love to have a STACKS data-type that would work like a FIFO stack,
  so I could save the last 100 objects for some action. Imagine we are
  building Twitter where millions of tweets are sent all the time, but we
  want to quickly know the last 100 tweets for a user. Imagine something like:
 
  obj.stacks['last_tweets'].add(id_of_last_tweet)
 
  IN: last_tweet --- STACK_OF_100_TWEETS --- OUT: older than the 100th goes out
 
  Is this possible? If so, how to do it?
 
  Thanks and Best Regards,
  Alex
 
  ___
  riak-users mailing list
  riak-users@lists.basho.com
  http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
 



 --
 Sean Cribbs s...@basho.com
 Software Engineer
 Basho Technologies, Inc.
 http://basho.com/



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Custom data-types

2014-09-06 Thread Alex De la rosa
Hi Luke,

That seems useful :) will check the Solr documentation!

Thanks!
Alex


On Sat, Sep 6, 2014 at 4:16 PM, Luke Bakken lbak...@basho.com wrote:

 Alex,

 Custom schemas allow you to only index a subset of your object's data
 (saving disk space). They also allow data type specification, field
 copying (to have full-text search across your object easily), and
 several other features.

 The Solr documentation has more information here:


 https://cwiki.apache.org/confluence/display/solr/Documents%2C+Fields%2C+and+Schema+Design
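
 For example, uploading a custom schema and attaching an index to it from
 the Python client looks roughly like this (a sketch; 'my_schema' and its
 XML content are placeholders, not something from this thread):

 schema_xml = open('my_schema.xml').read()          # hand-written Solr schema
 client.create_search_schema('my_schema', schema_xml)
 client.create_search_index('ix_custom', schema='my_schema')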

 --
 Luke Bakken
 Engineer / CSE
 lbak...@basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Retrieve keys on bucket by timestamp

2014-09-04 Thread Alex De la rosa
This is really useful!

So we can use now as a constant to get a current timestamp?

Thanks!
Alex


On Thu, Sep 4, 2014 at 3:44 PM, Sean Cribbs s...@basho.com wrote:

 Hi tele,

 Yes, a secondary index is the most reasonable way to accomplish this.
 Here's an example using the Python client:

 import time

 now = int(time.time())  # integer seconds since the epoch, so the +/- 3600 below works
 myobj.add_index('modified_int', now)
 myobj.store()

 bucket.get_index('modified_int', now - 3600, now + 3600)
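
 For big ranges you can also page through the results (a sketch; max_results
 is the client's 2i pagination parameter and needs Riak 1.4+):

 page = bucket.get_index('modified_int', now - 3600, now + 3600, max_results=100)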

 Hope that helps.

 On Wed, Sep 3, 2014 at 9:09 PM, tele t...@rhizomatica.org wrote:
  Hi All,
 
  Is there any way I can retrieve from a bucket the keys that last
  changed N minutes ago, for example?
  If possible, is there a way to do it with riak-python-client?

  Or is the only way to add a timestamp index in the bucket and update it
  when the object changes, so that I can query that index?
 
  Thanks
 
  :tele
 



 --
 Sean Cribbs s...@basho.com
 Software Engineer
 Basho Technologies, Inc.
 http://basho.com/


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Retrieve keys on bucket by timestamp

2014-09-04 Thread Alex De la rosa
Damn... I got excited and missed the first line, now = int(time.time())...
Forget my stupid question... lol

Thanks,
Alex





___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: [ANN] Riak 2.0.0

2014-09-02 Thread Alex De la rosa
Awesome! Thank you very much!

Alex


On Tue, Sep 2, 2014 at 11:53 PM, Alexander Sicular sicul...@gmail.com
wrote:

 Congrats to the whole Basho team. Great achievement! -Alexander


 On Tue, Sep 2, 2014 at 5:30 PM, Jared Morrow ja...@basho.com wrote:

  Riak Users,

 We are overjoyed to announce the final release of Riak 2.0.0.

 The documentation page http://docs.basho.com/riak/latest/ has been
 completely redone and updated for 2.0, so please see that for the most
 complete information on 2.0. A full listing of the new features in 2.0,
 along with links to all the relevant docs, can be found in Intro to 2.0
 http://docs.basho.com/riak/latest/intro-v20/, while a guide to
 upgrading to version 2.0 can be found in our 2.0 upgrade guide
 http://docs.basho.com/riak/latest/upgrade-v20/.

 Downloads can also be found on the documentation page
 http://docs.basho.com/riak/latest/downloads/, and Apt/Yum repositories
 can be found on our packagecloud.io page
 https://packagecloud.io/basho/riak.

 The complete release notes for 2.0.0 can be found on GitHub
 https://github.com/basho/riak/blob/riak-2.0.0/RELEASE-NOTES.md.

 There are roughly 700 people in our THANKS file, and about 30 tags were
 made for 2.0 from the first “pre” to now. We appreciate both the patience
 and the feedback of all of Riak’s users through this long release cycle.

 From the entire Basho team, thanks!
 ​



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Packagecloud.io

2014-09-02 Thread Alex De la rosa
Hi there,

The official documentation doesn't explain how to install through
packagecloud.io; instead it uses Riak's own repositories:

http://docs.basho.com/riak/2.0.0/ops/building/installing/debian-ubuntu/

Also, I want to report again a problem I have with packagecloud.io: I
followed the instructions there to install my Riak 2.0 RC1 without any
issues; however, every time I run aptitude update I get a hit as if a new
version of Riak were available (there is not)... so now I have no way to
run aptitude safe-upgrade without re-installing a Riak version I already
have.

Should we avoid using packagecloud.io?

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Custom data-types

2014-08-29 Thread Alex De la rosa
Hi Sean,

It seems I was wrong; that makes total sense now that you've explained it.
It looked like too good a feature to me, but it seems it isn't that easy.

By the way, how do schemas really work for Riak Search? I went back and
read the documentation but didn't see a real difference from using the
default schema.

Thanks!
Alex


On Fri, Aug 29, 2014 at 3:36 PM, Sean Cribbs s...@basho.com wrote:

 Alex,

 In short, no, you can't create custom types through schemas. Schemas
 currently only refer to Riak Search 2.

 We would love that too, but it hasn't happened yet. The problem is not
 conceiving of a data type but making its behavior both sensible and
 convergent in the face of concurrent activity or network partitions.
 For instance, say that two tweets come in around the same time. Who
 goes first in the stack you described? How can multiple independent
 copies reason about which ones to drop from the bottom of the stack to
 keep it bounded to 100? What happens if a replica is separated from
 the others for a while and has really stale entries, is it valid to
 serve those to a user? What happens when one replica pushes an element
 and another one pops it at the same time?

 These sound like they might be trivial problems, but they are
 incredibly hard to reason about in the general case. You have to
 reason about the ordering of events, the scope of their effects, and
 decide on a least-surprising behavior to expose to the user. Although
 we have given a pretty familiar/friendly interface to the data types
 shipping in 2.0, their behavior is strictly different from the types
 you would use in a single-threaded program in local memory.
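
 A rough sketch of the closest client-side approximation available today
 (it assumes a hypothetical map bucket type called 'maps', and that
 bucket.get returns the map; the trim happens in the writer, so concurrent
 writers can still race, which is exactly the difficulty described above):

 bucket = client.bucket_type('maps').bucket('timelines')
 tl = bucket.get('user:alex')                       # fetch the existing map
 new_entry = '0001409300000:tweet123'               # zero-padded epoch keeps sort order
 tl.sets['last_tweets'].add(new_entry)
 entries = sorted(tl.sets['last_tweets'].value | {new_entry})
 for old in entries[:-100]:                         # keep only the newest 100
     tl.sets['last_tweets'].discard(old)
 tl.store()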

 On Thu, Aug 28, 2014 at 4:47 PM, Alex De la rosa
 alex.rosa@gmail.com wrote:
  Hi there,
 
  Correct me if I'm wrong, but I think I read somewhere that custom
 data-types
  can be created through schemas or something like that. So, apart from
  COUNTERS, SETS and MAPS we could have some custom defined ones.
 
  I would love to have a STACKS data-type that would work like a FIFO
 stack,
  so I could save the last 100 objects for some action. Imagine we are
  building Twitter where millions of tweets are sent all the time, but we
 want
  to quickly know the last 100 tweets for a user. Imagine something like:
 
  obj.stacks['last_tweets'].add(id_of_last_tweet)
 
  IN: last_tweet --- STACK_OF_100_TWEETS --- OUT: older than the 100th
 goes
  out
 
  Is this possible? If so, how to do it?
 
  Thanks and Best Regards,
  Alex
 
 



 --
 Sean Cribbs s...@basho.com
 Software Engineer
 Basho Technologies, Inc.
 http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak VS Graph Databases

2014-08-29 Thread Alex De la rosa
Hi there,

For some time now I have had in mind building a kind of social network
myself. It is a pretty ambitious project, although it doesn't aim to be a
new Facebook; still, the data will be quite big and complex.

I like Riak and I have been following it since version 0.14, and the new
additions in Riak 2.0 seem to help a lot with modelling the data, although
relationships will be unavoidable.

Some friends suggested I use graph databases instead. How would Riak
compare to a graph database for this use case? Is it doable to create a
social network entirely on Riak? Or is that not recommended?

Thanks!
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak VS Graph Databases

2014-08-29 Thread Alex De la rosa
Hi Guido,

This could be a solution, although if possible I would try to do it in a
homogeneous system where only one NoSQL DB is involved :)

Thanks!
Alex


On Fri, Aug 29, 2014 at 5:03 PM, Guido Medina guido.med...@temetra.com
wrote:

  Maybe what you are looking for is a combination of both, say, your KV
 data in Riak with background processes able to build the necessary search
 graphs in Neo4j, in such a way that your data is secure in a Riak cluster
 and searchable on several Neo4j servers.

 That's just an idea which might not be doable; hope it helps,

 Guido.


 On 29/08/14 15:54, Alex De la rosa wrote:

 Hi there,

  For some time already I have in mind building a kind of social network
 myself. Is pretty ambitious project although it doesn't have in mind to be
 a new facebook; but still data will be quite big and complex.

  I like Riak and I had been following since version 0.14, and new
 additions in Riak 2.0 seem to help a lot in how to model the data; although
 relationships will be unavoidable.

  Some friends suggested me to use Graph Databases instead. How would Riak
 compare to Graph Databases for this use case? Is it doable to create a
 social network entirely from Riak? Or may not be recommended?

  Thanks!
 Alex


 ___
 riak-users mailing 
 listriak-users@lists.basho.comhttp://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak VS Graph Databases

2014-08-29 Thread Alex De la rosa
Yeah, I know it might be hard to use only Riak, but I want to see how much
I can do with a single system. If later I have to add more complexity to
the system, so be it :) but I will squeeze my brain as much as I can to
model the data in a way that doesn't require many relationships, and Riak
will probably be enough... but we will see :)

Thanks!
Alex


On Fri, Aug 29, 2014 at 5:20 PM, Guido Medina guido.med...@temetra.com
wrote:

  In a dream world, my friend. We have Riak, PostgreSQL and Solr, and we
 might have to include a sort of queryable Big Table implementation in the
 future, like Cassandra (we will try to avoid that last thing until we
 can't).

 A graph DB will have trade-offs versus KV fetches in general: Riak doesn't
 give you the tools to find the relationships per category, and a graph DB
 doesn't give you Riak's advantages for quick KV operations
 (data-storage-wise) or cluster replication.

 It won't be that simple to build a homogeneous system without many
 trade-offs.

 Guido.







___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search on Sets data-types

2014-08-22 Thread Alex De la rosa
Answering my own question, in case somebody has the same need some day: it
seems that a SET works like a collection of REGISTERS, and you can query it
as follows:

r = client.fulltext_search('ix_images', 'keywords_set:DLSR')
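
And matching any of several keywords works with standard Solr boolean
grouping (an untested sketch, but plain Solr query syntax):

r = client.fulltext_search('ix_images', 'keywords_set:(DLSR OR "digital cameras")')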

Thanks!
Alex

On Thu, Aug 21, 2014 at 8:32 PM, Alex De la rosa alex.rosa@gmail.com
wrote:

 Hi there,

 For a project I'm building I'm saving a keyword and a site for an
 image like this:

   bucket = client.bucket_type('tp_images').bucket('images')
   key = bucket.new(image_hash)
   key.registers['raw'].assign(base64.b64encode(image_data))
   key.registers['site'].assign('johnlewis.com')
   key.registers['keywords'].assign('cameras')
   key.store()

 and querying the index like this to get the desired images:

   r = client.fulltext_search('ix_images', 'site_registers:johnlewis.com AND
 keywords_registers:cameras', sort='clicks_counter desc', rows=5)

 Imagine now that I want to associate the same image with different keywords,
 having a SET instead of a REGISTER:

   bucket = client.bucket_type('tp_images').bucket('images')
   key = bucket.new(image_hash)
   key.registers['raw'].assign(base64.b64encode(image_data))
   key.registers['site'].assign('johnlewis.com')
   key.sets['keywords'].add('digital cameras')
   key.sets['keywords'].add('DLSR')
   key.store()

 How to query the SET using Riak Search? Something like the IN ()
 statement in traditional SQL:

   r = client.fulltext_search('ix_images', 'site_registers:johnlewis.com AND
 keywords_set:???', sort='clicks_counter desc', rows=5)

 Thanks!
 Alex

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: duplicate keys in Riak secondary index

2014-08-22 Thread Alex De la rosa
Might be siblings?

Thanks,
Alex


On Thu, Aug 21, 2014 at 10:29 PM, Chaim Peck chaimp...@gmail.com wrote:

 I am looking for some clues as to why there might be duplicate keys in a
 Riak Secondary Index. I am using version 1.4.0.

 Thanks,
 Chaim

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Avoid siblings in data-type bucket

2014-08-21 Thread Alex De la rosa
Hi there,

I was trying to create a bucket-type using the datatype MAP and it didn't
allow me to create it with allow_mult:false:

# riak-admin bucket-type create tp_images '{"props":{"allow_mult":false,"search_index":"ix_images","datatype":"map"}}'
Error creating bucket type tp_images:
Data Type buckets must be allow_mult=true

How to avoid siblings in this bucket then?

Thanks,
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Avoid siblings in data-type bucket

2014-08-21 Thread Alex De la rosa
Cool, thank you very much

Alex


On Thu, Aug 21, 2014 at 9:18 PM, John Daily jda...@basho.com wrote:

 Siblings are resolved automatically by Riak when using our data types,
 thus the requirement that allow_mult=true.
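
 A quick way to see the convergence (a sketch; it assumes a bucket type
 'counters' created with datatype=counter):

 b = client.bucket_type('counters').bucket('visits')
 c1 = b.new('page-1'); c1.increment(1); c1.store()
 c2 = b.new('page-1'); c2.increment(1); c2.store()  # a second, independent writer
 print b.get('page-1').value                        # the increments merge: prints 2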

 -John


 On Thu, Aug 21, 2014 at 3:17 PM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Hi there,

 I was trying to create a bucket-type using the datatype MAP and it didn't
 allow me to create it with allow_mult:false:

 # riak-admin bucket-type create tp_images '{"props":{"allow_mult":false,"search_index":"ix_images","datatype":"map"}}'
 Error creating bucket type tp_images:
 Data Type buckets must be allow_mult=true

 How to avoid siblings in this bucket then?

 Thanks,
 Alex




___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search VS other query systems

2014-08-20 Thread Alex De la rosa
Any thoughts about this?

One thing that worries me about Riak Search is that if one index has several
million objects to search, maybe it becomes slow? Would 2i be faster then?

Thanks!
Alex


On Tue, Aug 19, 2014 at 8:47 AM, Alex De la rosa alex.rosa@gmail.com
wrote:

 Hi there,

 Lately I had been seeing Riak Search as the ultimate way to query Riak...
 it seems recommended over MapReduce and even 2i... that said, should we
 always try to use Riak Search over the other systems?

 Is there any situation in which MapReduce could be a better approach than
 Riak Search?

 Same goes for 2i... I believe 2i is an optimal approach if you just want
 keys and know very well what you are looking for, but beyond that, should
 Riak Search replace all 2i uses?

 Practical example: if you are Twitter and want to get tweets for the
 hashtag #Riak, what would be the best approach? 2i? Riak Search? MapReduce?

 Thanks!
 Alex
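
For the hashtag example, the two styles would look roughly like this (a
sketch; the bucket, index, and field names are made up for illustration):

# Riak Search (assumes tweets indexed by 'ix_tweets' with a hashtags_set field):
r = client.fulltext_search('ix_tweets', 'hashtags_set:Riak')

# 2i (assumes each tweet was stored with a 'hashtag_bin' secondary index):
keys = client.bucket('tweets').get_index('hashtag_bin', 'Riak')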

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Issue

2014-08-19 Thread Alex De la rosa
Hi Eric,

You were right on naming the bucket the same as the index... it worked that
way:

bucket = client.bucket_type('futbolistas').bucket('famoso')
results = bucket.search('name_s:Lion*')
print results

{'num_found': 2, 'max_score': 1.0, 'docs': [{u'age_i': u'30', u'name_s':
u'Lionel', u'_yz_rk': u'lionel', u'_yz_rb': u'fcb', u'score':
u'1.e+00', u'leader_b': u'true', u'_yz_id':
u'1*futbolistas*fcb*lionel*59', u'_yz_rt': u'futbolistas'}, {u'age_i':
u'30', u'name_s': u'Lionel', u'_yz_rk': u'lionel', u'_yz_rb': u'famoso',
u'score': u'1.e+00', u'leader_b': u'true', u'_yz_id':
u'1*futbolistas*famoso*lionel*8', u'_yz_rt': u'futbolistas'}]}

Later I will check installing the Git version to see if it works with a
different bucket name.

Thanks.
Alex


On Mon, Aug 18, 2014 at 11:12 PM, Alex De la rosa alex.rosa@gmail.com
wrote:

 Hi Eric,

 I will try this suggestion; I will also try Luke's suggestion of using
 Git's latest version instead of pip, to see if it is something already fixed.

 Once that's done, I will tell you guys whether it is really a bug or
 whether it was already fixed in the Git clone.

 Thanks,
 Alex


 On Mon, Aug 18, 2014 at 11:10 PM, Eric Redmond eredm...@basho.com wrote:

 Alex,

 You may have discovered a legitimate bug in the python driver. In the
 meantime, if you give your bucket and index the same name, you can proceed,
 while we investigate.

 Thanks,
 Eric


 On Aug 18, 2014, at 2:00 PM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Yes, I did it on purpose, because I had done so many tests that I wanted
 to start fresh... so I kind of translated the documentation, but that is
 irrelevant to the case.

 Thanks,
 Alex


 On Mon, Aug 18, 2014 at 10:59 PM, Eric Redmond eredm...@basho.com
 wrote:

 Your steps seemed to have named the index famoso.

 Eric


 On Aug 18, 2014, at 1:56 PM, Alex De la rosa alex.rosa@gmail.com
 wrote:

 Ok, I found the first error in the documentation, parameters are in
 reverse order:

 bucket = client.bucket('animals', 'cats')

 should be:

 bucket = client.bucket('cats', 'animals')

 Now I could save and it found the bucket type: bucket =
 client.bucket('fcb','futbolistas') VS bucket = client.bucket('futbolistas',
 'fcb')

 However, even fixing that, the next step fails as it was failing before:

 PYTHON:
   bucket = client.bucket('fcb','futbolistas')
   results = bucket.search('name_s:Lion*')
   print results
 Traceback (most recent call last):
   File "x.py", line 13, in <module>
     results = bucket.search('name_s:Lion*')
   File "/usr/local/lib/python2.7/dist-packages/riak/bucket.py", line 420, in search
     return self._client.fulltext_search(self.name, query, **params)
   File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 184, in wrapper
     return self._with_retries(pool, thunk)
   File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 126, in _with_retries
     return fn(transport)
   File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 182, in thunk
     return fn(self, transport, *args, **kwargs)
   File "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py", line 573, in fulltext_search
     return transport.search(index, query, **params)
   File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 564, in search
     MSG_CODE_SEARCH_QUERY_RESP)
   File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 50, in _request
     return self._recv_msg(expect)
   File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 142, in _recv_msg
     raise RiakError(err.errmsg)
 riak.RiakError: 'No index fcb found.'

 Again it says the "fcb" index was not found... and this time I fully
 followed the right documentation and didn't use bucket.enable_search().

 Thanks,
 Alex


 On Mon, Aug 18, 2014 at 10:49 PM, Alex De la rosa 
 alex.rosa@gmail.com wrote:

 Hi Eric,

 I'm sorry, but I followed the documentation that you provided me and it
 still raises issues:

 STEP 1: Create Index: famoso
 PYTHON:
   client.create_search_index('famoso')

 STEP 2: Create Bucket Type: futbolistas
 SHELL:
   riak-admin bucket-type create futbolistas
 '{props:{search_index:famoso

Re: Riak Search Issue

2014-08-19 Thread Alex De la rosa
Hi Sean,

Yeah, I opted to follow that pattern on my latest attempt, as I find it
clearer than the way shown in the documentation. Still the same issue,
although with Eric we saw it works fine when the index and bucket have the
same name.

Thanks!
Alex


On Mon, Aug 18, 2014 at 11:27 PM, Sean Cribbs s...@basho.com wrote:

 Don't use bucket with 2 arguments, use
 client.bucket_type('futbolistas').bucket('fcb'). This makes your
 intent more clear. The 2-arity version of bucket() was for
 backwards-compatibility.

 On Mon, Aug 18, 2014 at 4:10 PM, Eric Redmond eredm...@basho.com wrote:
  Alex,
 
  You may have discovered a legitimate bug in the python driver. In the
  meantime, if you give your bucket and index the same name, you can
 proceed,
  while we investigate.
 
  Thanks,
  Eric
 
 
  On Aug 18, 2014, at 2:00 PM, Alex De la rosa alex.rosa@gmail.com
  wrote:
 
  Yes, I did it on purpose, because I had done so many tests that I wanted
  to start fresh... so I kind of translated the documentation, but that is
  irrelevant to the case.
 
  Thanks,
  Alex
 
 
  On Mon, Aug 18, 2014 at 10:59 PM, Eric Redmond eredm...@basho.com
 wrote:
 
  Your steps seemed to have named the index famoso.
 
  Eric
 
 
  On Aug 18, 2014, at 1:56 PM, Alex De la rosa alex.rosa@gmail.com
  wrote:
 
  Ok, I found the first error in the documentation, parameters are in
  reverse order:
 
  bucket = client.bucket('animals', 'cats')
 
  should be:
 
  bucket = client.bucket('cats', 'animals')
 
  Now I could save and it found the bucket type: bucket =
  client.bucket('fcb','futbolistas') VS bucket =
 client.bucket('futbolistas',
  'fcb')
 
  However, even fixing that, the next step fails as it was failing before:
 
  PYTHON:
bucket = client.bucket('fcb','futbolistas')
results = bucket.search('name_s:Lion*')
print results
  Traceback (most recent call last):
    File "x.py", line 13, in <module>
      results = bucket.search('name_s:Lion*')
    File "/usr/local/lib/python2.7/dist-packages/riak/bucket.py", line 420, in search
      return self._client.fulltext_search(self.name, query, **params)
    File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 184, in wrapper
      return self._with_retries(pool, thunk)
    File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 126, in _with_retries
      return fn(transport)
    File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 182, in thunk
      return fn(self, transport, *args, **kwargs)
    File "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py", line 573, in fulltext_search
      return transport.search(index, query, **params)
    File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 564, in search
      MSG_CODE_SEARCH_QUERY_RESP)
    File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 50, in _request
      return self._recv_msg(expect)
    File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 142, in _recv_msg
      raise RiakError(err.errmsg)
  riak.RiakError: 'No index fcb found.'
 
  Again it says the "fcb" index was not found... and this time I fully
  followed the right documentation and didn't use bucket.enable_search().
 
  Thanks,
  Alex
 
 
  On Mon, Aug 18, 2014 at 10:49 PM, Alex De la rosa
  alex.rosa@gmail.com wrote:
 
  Hi Eric,
 
  I'm sorry, but I followed the documentation that you provided me and it
  still raises issues:
 
  STEP 1: Create Index: famoso
  PYTHON:
client.create_search_index('famoso')
 
  STEP 2: Create Bucket Type: futbolistas
  SHELL:
riak-admin bucket-type create futbolistas
  '{"props":{"search_index":"famoso"}}'
= futbolistas created
riak-admin bucket-type activate futbolistas
= futbolistas has been activated
 
  STEP 3: Create Bucket and Add data: fcb
  PYTHON:
bucket = client.bucket('futbolistas', 'fcb')
c = bucket.new('lionel', {'name_s': 'Lionel', 'age_i': 30

Counters inside Maps

2014-08-19 Thread Alex De la rosa
Imagine I have a Riak object footballer with some static fields: name,
team, number. I store them like this now:

1: CREATE INDEX FOR RIAK SEARCH
curl -XPUT "http://148.251.140.229:8098/search/index/ix_footballers"

2: CREATE BUCKET TYPE
riak-admin bucket-type create tp_footballers
'{"props":{"allow_mult":false,"search_index":"ix_footballers"}}'
riak-admin bucket-type activate tp_footballers

3: INSERT A PLAYER
bucket = client.bucket_type('tp_footballers').bucket('footballers')
key = bucket.new('lionelmessi', data={'name_s':'Messi',
'team_s':'Barcelona', 'number_i':10}, content_type='application/json')
key.store()

4: SEARCH FOR BARCELONA PLAYERS
r = client.fulltext_search('ix_footballers', 'team_s:Barcelona')

So far so good :) BUT... what if I want a field goals_i that is a counter,
incremented each match day with the number of goals he scored? What are the
syntax/steps to set up footballers as a MAP and then put a COUNTER inside?
I know it is possible, as I read it in a data dump a Basho employee passed
me some time ago, but I can't manage to see how to do it now.

Thanks!
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Counters inside Maps

2014-08-19 Thread Alex De la rosa
Cool! Understood :)

Thanks!
Alex

On Wednesday, August 20, 2014, Sean Cribbs s...@basho.com wrote:

 On Tue, Aug 19, 2014 at 3:34 PM, Alex De la rosa
 alex.rosa@gmail.com wrote:
  Hi Sean,
 
  I didn't create the bucket type as a map datatype, as at first I was just
  testing simple Riak Search... then it occurred to me: what if I want a
  counter in the data? :)
 
  Your example is pretty straightforward to follow and simple. Just 2
  questions:
 
  1. key.counters['number'].increment(1) = No need to define a counter
  data type somewhere before putting it inside the map, as we normally need
  in simple buckets? If it works automatically, that's great :)

 Yes, it works automatically. All included datatypes are available inside
 maps.

 
  2. If we use number_counter instead of number_i, does Search/Solr
  understand it is an integer, in case you want to do a range? Somewhere in
  the docs I read that it is better to use _s for strings, _b for binary,
  _i for integers, etc., so Solr knows how to treat the data... I believe
  there will be no strange behaviours from having _register instead of _s
  and _counter instead of _i, right?

 The default Solr schema that ships with Riak accounts for these
 datatypes automatically and uses the appropriate index field type:

 https://github.com/basho/yokozuna/blob/develop/priv/default_schema.xml#L96-L104

 If you write your own schema, you will want to include or change the
 schema fields appropriately.

 
  Thanks!
  Alex
 
 
  On Wed, Aug 20, 2014 at 12:24 AM, Sean Cribbs s...@basho.com wrote:
 
  Alex,
 
  Assuming you've already made your bucket-type with map as the
  datatype, then bucket.new() will return you a Map instead of a
  RiakObject. Translating your example above:
 
  key = bucket.new('lionelmessi')
  key.registers['name'].assign('Messi')
  key.registers['team'].assign('Barcelona')
  key.counters['number'].increment(10)
  key.store()
 
  Note that because Maps are based on mutation operations and not
  replacing the value with new ones, you can later do this without
  setting the entire value:
 
  key.counters['number'].increment(1)
  key.store()
 
  This will also change your searches, however, in that the fields will
  be suffixed with the embedded type you are using:
 
  r = client.fulltext_search('ix_footballers', 'team_register:Barcelona')
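
  And since the default schema gives *_counter fields an integer type, a
  range query should work too (an untested sketch):

  r = client.fulltext_search('ix_footballers', 'number_counter:[10 TO *]')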
 
  Hope that helps!
 


 --
 Sean Cribbs s...@basho.com
 Software Engineer
 Basho Technologies, Inc.
 http://basho.com/

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Help: Riak Search on Counters

2014-08-18 Thread Alex De la rosa
Hi there,

Can somebody help me with Riak Search 2.0 on Counters? Imagine we have a
counter called "visitors" to store how many people visit certain cities:


  client.create_search_index('testing')
  bucket = client.bucket_type('visitors').bucket('counter_bucket')
  bucket.enable_search()
  bucket.set_property('search_index', 'testing')

  c = bucket.new('Barcelona')
  c.increment(5)
  c.store()

  c = bucket.new('Tokyo')
  c.increment(2)
  c.store()

  c = bucket.new('Paris')
  c.increment(4)
  c.store()


How would we use Riak Search 2.0 on this "visitors" bucket to get which
city has the most visitors? (in this case, Barcelona)


  r = bucket.search('?') # ---  Pattern to fill up
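
One simple fallback, sketched here under the assumption that fetching each
counter client-side is acceptable at this scale (a Solr sort would need
whichever field the default schema assigns to top-level counters, which I
am not sure about):

  cities = ['Barcelona', 'Tokyo', 'Paris']
  top = max(cities, key=lambda k: bucket.get(k).value or 0)
  print top  # prints 'Barcelona'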


Thanks!
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Riak Search Issue

2014-08-18 Thread Alex De la rosa
Hi there,

I have been following the documentation [
http://docs.basho.com/riak/2.0.0/dev/using/search/ ] about Riak Search, and
the code provided on the site doesn't seem to work.

Everything I try ends up with an error saying no index was found, with the
name of the bucket reported as the missing index :(

riak.RiakError: 'No index name_of_the_bucket found.'

Somebody knows what's going on?

Thanks!
Alex
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Issue

2014-08-18 Thread Alex De la rosa
If I do this, I get the right index:

http://RIAK:8098/search/index/famous
= {"name":"famous","n_val":3,"schema":"_yz_default"}

If I do this, I get an error:
http://RIAK:8098/search/index/animals
= not found

What I don't understand is why it believes the index is the bucket name and
not the index I created for it.

riak-admin bucket-type create animals '{"props":{"search_index":"famous"}}'
riak-admin bucket-type activate animals

Shouldn't it be looking for a "famous" index instead of an "animals" index?

Thanks!
Alex


On Mon, Aug 18, 2014 at 4:26 PM, Luke Bakken lbak...@basho.com wrote:

 What is the output of this command? Please replace RIAK_HOST and
 name_of_the_bucket with the correct information:

 curl $RIAK_HOST/search/index/name_of_the_bucket

 If the above returns a 404, please use this guide to ensure you've
 created the index correctly:

 http://docs.basho.com/riak/2.0.0/dev/using/search/

 If you expect the index to be there and it is not, the solr.log file
 in /var/log/riak could provide a clue.

 --
 Luke Bakken
 CSE
 lbak...@basho.com


 On Mon, Aug 18, 2014 at 6:59 AM, Alex De la rosa
 alex.rosa@gmail.com wrote:
  Hi there,
 
  I had been following the documentation [
  http://docs.basho.com/riak/2.0.0/dev/using/search/ ] about Riak Search
 and
  the code provided in the site doesn't seem to work?
 
  Everything I try ends up with an error saying no index found taking the
 name
  of the bucket as a not found index :(
 
  riak.RiakError: 'No index name_of_the_bucket found.'
 
  Somebody knows what's going on?
 
  Thanks!
  Alex
 
 
  ___
  riak-users mailing list
  riak-users@lists.basho.com
  http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
 

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Issue

2014-08-18 Thread Alex De la rosa
Hi Luke,

I also tried with a normal bucket "cats" using the type "animals", as the
documentation seemed to suggest, and it gave me the same error, but this
time saying that "cats" was not found as an index... so... still no clue
how to do it.

This is alternate code I wrote based on the Python client API
documentation:

client.create_search_index('men')
bucket = client.bucket('accounts')
bucket.enable_search()
bucket.set_property('search_index', 'men')
key = bucket.new('alex', data={'username': 'Alex', 'age': 25, 'sex': 'male'},
content_type='application/json')
key.store()
print bucket.search('sex=male')

Again, it says "accounts" is not an index... in this code no bucket types
are used, just a plain bucket "accounts"... What is wrong? What is missing
for it to work?

This is really frustrating.

Thanks,
Alex


On Mon, Aug 18, 2014 at 4:44 PM, Luke Bakken lbak...@basho.com wrote:

 Hi Alex -

 You correctly created the famous index, as well as correctly
 associated it with the bucket *type* animals. Note that a bucket
 type is not the same thing as a bucket in previous versions of Riak. A
 bucket type is a way to give 1 or more buckets within that type the
 same properties. You'll have to use different code in your Riak client
 to use bucket types:

 http://docs.basho.com/riak/2.0.0/dev/advanced/bucket-types/

 --
 Luke Bakken
 CSE
 lbak...@basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak Search Issue

2014-08-18 Thread Alex De la rosa
Hi Luke,

Same error:

bucket = client.bucket_type('animals').bucket('cats')
bucket.enable_search()
bucket.set_property('search_index', 'famous') # NEW: Setting the search
index to the bucket
key = bucket.new('feliz', data={'name': 'Felix', 'species': 'Felis catus'},
content_type='application/json')
key.store()
print bucket.search('name=Felix')

Output:

Traceback (most recent call last):
  File "x.py", line 11, in <module>
    print bucket.search('name=Felix')
  File "/usr/local/lib/python2.7/dist-packages/riak/bucket.py", line 420, in search
    return self._client.fulltext_search(self.name, query, **params)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 184, in wrapper
    return self._with_retries(pool, thunk)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 126, in _with_retries
    return fn(transport)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/transport.py", line 182, in thunk
    return fn(self, transport, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/riak/client/operations.py", line 573, in fulltext_search
    return transport.search(index, query, **params)
  File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/transport.py", line 564, in search
    MSG_CODE_SEARCH_QUERY_RESP)
  File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 50, in _request
    return self._recv_msg(expect)
  File "/usr/local/lib/python2.7/dist-packages/riak/transports/pbc/connection.py", line 142, in _recv_msg
    raise RiakError(err.errmsg)
riak.RiakError: 'No index cats found.'

Thanks,
Alex


On Mon, Aug 18, 2014 at 5:00 PM, Luke Bakken lbak...@basho.com wrote:

 Alex -

 Let's take a step back and try out the famous index and animals
 bucket type, which you have confirmed are set up correctly. This
 (untested) code should create an object (cat-1) in the cats bucket
 (within the animals bucket type), which will then be indexed by the
 famous index:

 from riak import RiakObject

 bucket = client.bucket_type('animals').bucket('cats')
 obj = RiakObject(client, bucket, 'cat-1')
 obj.content_type = 'application/json'
 obj.data = { 'name': 'Felix', 'species': 'Felis catus' }
 obj.store()

 --
 Luke Bakken
 CSE
 lbak...@basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

