Re: Question about counters

2014-06-30 Thread Christian Dahlqvist
Hi Alex, In Riak 2.0 you can create a map data type that can contain a number of related counters. If you have a limited number of named counters, e.g. for scorers in the World Cup, this would allow you to update multiple counters using a single operation and read all of them as a single key retri
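The convergence behaviour such a map of counters relies on can be sketched in a few lines. This is an illustrative model, not the Riak client API: each named counter is a grow-only counter keyed by replica, and merging takes the per-replica maximum, so concurrent updates from different nodes converge. (Riak's counters also support decrements; this sketch covers increments only.)

```python
# Illustrative sketch of a state-based map of named grow-only counters,
# NOT the Riak client API. Each replica increments its own slot; merge
# takes the per-replica maximum, so concurrent updates converge.

def increment(state, counter, replica, n=1):
    """Bump a named counter on behalf of one replica."""
    slots = state.setdefault(counter, {})
    slots[replica] = slots.get(replica, 0) + n

def value(state, counter):
    """Observed value of a named counter: the sum over all replica slots."""
    return sum(state.get(counter, {}).values())

def merge(a, b):
    """Combine two replica states; per-replica maximum makes this
    commutative, associative and idempotent."""
    out = {}
    for counter in set(a) | set(b):
        slots = {}
        for replica in set(a.get(counter, {})) | set(b.get(counter, {})):
            slots[replica] = max(a.get(counter, {}).get(replica, 0),
                                 b.get(counter, {}).get(replica, 0))
        out[counter] = slots
    return out
```

Reading the whole map back as one key then amounts to calling `value` for each named counter in the merged state.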

Re: Timeout when accessing a key in a strongly consistent bucket

2014-06-27 Thread Christian Dahlqvist
Hi Zsolt, Riak 2.0.0pre5 is quite an old pre-release. Please upgrade to the latest release, which can be found here: http://docs.basho.com/riak/2.0.0beta1/downloads/ Best regards, Christian On Thu, Jun 26, 2014 at 9:07 PM, wrote: > Hello There, > > > > I installed 2.0.0pre5 on OSX and I exp

Re: 1.4.8, memory_backend max_memory not honored?

2014-06-24 Thread Christian Dahlqvist
Hi Allen, Apart from the memory backend, AAE will also be consuming memory as you currently have it enabled. I would recommend running the test again with anti entropy turned off to see how much impact that has. Best regards, Christian On Tue, Jun 24, 2014 at 5:29 PM, Allen Landsidel wrote

Re: Basho Bench key/value generation strategies

2014-05-19 Thread Christian Dahlqvist
Hi Simon, If you have a limited set of object types and your data model allows you to estimate and express the raw size of your data using one of the default value generators, then using one of the standard drivers, e.g. basho_bench_driver_riakc_pb, is the easiest way to get a benchmark up and running.

Re: CRDT objects and 2i?

2014-03-26 Thread Christian Dahlqvist
Hi Paul, CRDTs are stored as a normal object in Riak, although in a format that allows Riak to resolve conflicts automatically, meaning that the normal restrictions on object size apply. As secondary indexes do not have to be related to the data in any way, Riak would not be able to determine ho

Re: link walking in the Riak 2.0 Ruby client?

2014-03-25 Thread Christian Dahlqvist
Hi Paul, Link walking is being deprecated as per the following note in the 2.0 documentation: http://docs.basho.com/riak/2.0.0pre11/dev/using/link-walking/ Best regards, Christian On Tue, Mar 25, 2014 at 2:56 PM, Paul Walk wrote: > I'm gradually getting up to speed with the technical releas

Re: Cleaning up bucket after basho_bench run

2014-03-19 Thread Christian Dahlqvist
{partitioned_sequential_int, 1000}}}. > > It has completed but unfortunately did not delete any data > > Next is to use the Erlang client and see if I can list the keys and > delete them, or try to use the Erlang interface for MR. > > Regards, > Istvan > >

Re: Cleaning up bucket after basho_bench run

2014-03-15 Thread Christian Dahlqvist
Hi Istvan, Depending on how you have run your Basho Bench job(s), you could try deleting the generated keys by running a separate Basho Bench job based on a partitioned_sequential_int key generator and only delete operations. Best regards, Christian On Fri, Mar 14, 2014 at 5:00 PM, István wr
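A cleanup job along those lines might look like the following basho_bench config fragment. The key range, duration and concurrency here are illustrative and must match the key generator used by the original load job:

```erlang
%% cleanup.config -- delete the keys written by a previous load job.
{mode, max}.
{duration, 10}.
{concurrent, 1}.
{driver, basho_bench_driver_riakc_pb}.
%% Walk the same key space the load job generated.
{key_generator, {partitioned_sequential_int, 1000}}.
%% Issue only delete operations.
{operations, [{delete, 1}]}.
```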

Re: 60 second timeout in Riak

2014-03-11 Thread Christian Dahlqvist
Hi Matthew, I believe 60 seconds is the default timeout in the client, so it is possible the `busy_dist_port` issues have caused a timeout and that the automatic retry then has succeeded. A small +zdbbl value will cause the internal buffers to fill up and result in `busy_dist_port` messages, whic
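The buffer limit mentioned here is set in vm.args. A commonly suggested starting point is shown below; the value is illustrative and should be tuned while watching the logs for busy_dist_port warnings:

```
## vm.args: raise the distribution buffer busy limit from the
## default of 1024 KB to 32 MB (the value is in kilobytes).
+zdbbl 32768
```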

Re: Questions about the Riak Horizontal Scalability

2014-03-08 Thread Christian Dahlqvist
her tweaks: > > - If you're running on machines with multiple numa zones, +sbt db can help. > - the 2.0 pre-releases have some changes to the networking code that > can help increase, if you're willing to try out an early release. > > On Fri, Mar 7, 2014 at 3:06 PM, Chris

Re: Questions about the Riak Horizontal Scalability

2014-03-07 Thread Christian Dahlqvist
As the Protocol Buffer interface generally is faster and more efficient than the HTTP interface, I would also recommend using the basho_bench_driver_riakc_pb driver instead of basho_bench_driver_http_raw for your benchmarks. Best regards, Christian On Fri, Mar 7, 2014 at 10:54 PM, Sean Cribbs

Re: Looking for an easy cache solution based on pre-commit hook

2014-03-05 Thread Christian Dahlqvist
Hi Ivaylo, For read intensive applications a common solution is to perform the caching outside Riak, e.g. using Varnish, memcached or Redis. Often the application will be responsible for populating the cache, but either the application or Riak (through a commit hook) can be responsible for purging

Re: SSL distribution doesn't work

2013-12-04 Thread Christian Dahlqvist
Hi, It looks like you are trying to run Riak with Erlang R16B01. Version 1.4.2 of Riak does not support this and requires Erlang R15B01 [0]. Best regards, Christian [0] http://docs.basho.com/riak/latest/ops/building/installing/rhel-centos/ On 4 Dec 2013, at 12:08, Игонин Михаил wrote: > Hi

Re: Using Riak to perform aggregate queries

2013-11-24 Thread Christian Dahlqvist
Hi, Instead of updating a summary object for every insert, you could also precompute by periodically aggregating data for a specific time period. If you are using LevelDB and e.g. have a secondary index that contains a timestamp, you could create an hourly summary using a batch job every hour.
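The hourly batch aggregation described above reduces, at its core, to bucketing timestamped records by hour. A minimal language-agnostic sketch in Python, assuming records carry an epoch-seconds timestamp (in Riak the batch job would fetch them via a 2i range query on that timestamp):

```python
# Sketch of periodic pre-aggregation: group timestamped records into
# hourly buckets and total them. In Riak the batch job would fetch the
# hour's records with a secondary-index range query, then write one
# summary object per hour.
from collections import defaultdict

def hourly_summaries(records):
    """records: iterable of (epoch_seconds, amount) -> {hour_start: total}"""
    totals = defaultdict(int)
    for ts, amount in records:
        hour_start = ts - (ts % 3600)   # truncate timestamp to the hour
        totals[hour_start] += amount
    return dict(totals)
```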

Re: vnode_proxy_timeout during mapreduce

2013-11-18 Thread Christian Dahlqvist
Hi Jason, Processing all records in a large bucket will cause a lot of data to be read, possibly from disk, and can therefore be slow. Having said that, there are a number of things you can do to make your job more efficient. The first thing is to increase the batch size for the reduce phase. T

Re: Multiple named counters per key

2013-11-09 Thread Christian Dahlqvist
Hi Mark, Riak is at its core a key-value store, and accessing objects directly through the key is therefore by far the most efficient and scalable method. For each query, Riak can through consistent hashing determine exactly where the data should live and GET/PUT/DELETE it very efficiently,
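The key-to-partition mapping works conceptually as follows. This Python sketch mirrors the shape of Riak's approach (a SHA-1 hash of the bucket/key pair mapped onto a fixed ring of partitions) but is a deliberate simplification, not the actual implementation:

```python
# Conceptual sketch of consistent hashing: hash the bucket/key pair
# onto a fixed ring and map the hash to one of RING_SIZE partitions.
# Riak hashes onto a 160-bit ring split into intervals owned by vnodes;
# the modulo below is a simplification of that interval lookup.
import hashlib

RING_SIZE = 64  # number of partitions (vnodes); a power of two

def partition_for(bucket, key):
    digest = hashlib.sha1(bucket + b"/" + key).digest()
    point = int.from_bytes(digest, "big")   # position on the ring
    return point % RING_SIZE                # owning partition
```

Because the mapping is a pure function of the key, any node can compute where a key lives without coordination, which is what makes direct key access so cheap.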

Re: Corrupted data using Riak Python Client on Win7 64 bit

2013-11-09 Thread Christian Dahlqvist
Hi, I see from your code sample that you are using the HTTP mode. Do you see the same issue if you switch to using Protocol Buffers? Best regards, Christian On 9 Nov 2013, at 09:53, finkle mcgraw wrote: > Hi John and Engel, > > Here's a link to a Dropbox folder with a set of file pairs (

Re: error starting riak 1.4.2 from prebuilt distro for MacOS

2013-09-16 Thread Christian Dahlqvist
Hi Roman, Can you please verify that your app.config file is correct and does not contain any errors by running `riak chkconfig`? Best regards, Christian On 15 Sep 2013, at 01:05, Roman Shestakov wrote: > hello, > > I am trying to run pre-built Riak 1.4.2 distribution for Mac OS but gett

Re: mapreduce timeout

2013-07-26 Thread Christian Dahlqvist
willing to wait for it. We may have other such cases in the future.. > > best regards, > Deyan > > On Jul 15, 2013, at 4:49 PM, Christian Dahlqvist wrote: > >> Hi Deyan, >> >> When running mapreduce jobs, reduce phases often end up being the >> bottlen

Re: Riak not binding to anything but localhost

2013-07-24 Thread Christian Dahlqvist
Hi John, As you started up the node set to 127.0.0.1, a ring file was created with this information. As I believe the node is isolated and not yet part of a cluster, you should be able to remove this and restart it with the new IP. Best regards, Christian On 24 Jul 2013, at 13:14, John Le D

Re: Performance problem: put operation takes seconds

2013-07-22 Thread Christian Dahlqvist
Hi Kirill, Raw output from stats and graphs showing trends would be very useful. Access to the log files would also help. Best regards, Christian On 22 Jul 2013, at 20:36, ks wrote: > Hi Christian, > >> Are you running key or bucket listings? >> Are you running secondary index queries or

Re: Performance problem: put operation takes seconds

2013-07-22 Thread Christian Dahlqvist
Hi, If you do not have siblings enabled and are certain ALL values are small, there are a few other things that can cause performance problems: - Are you running key or bucket listings? - Are you running secondary index queries or mapreduce jobs? - Can you confirm you have disabled swap? - Are y

Re: Performance problem: put operation takes seconds

2013-07-22 Thread Christian Dahlqvist
Hi, A reasonably common cause for sudden spikes in latencies is that the buffers used for internal communication get exhausted. This tends to manifest itself through a large number of 'busy_dist_port' messages in the logs. This is especially common if you have large objects or objects with lots

Re: Performance problem: put operation takes seconds

2013-07-22 Thread Christian Dahlqvist
Hello, Please can you attach your app.config and vm.args files as well as a zipped up log directory from one of the nodes? Best regards, Christian On 22 Jul 2013, at 12:40, ks wrote: > Hi there, > > We're using Riak 1.3.1 cluster with 3 nodes (CentOS 64-bit, Intel® Xeon® > E3-1245 Quadcore

Re: Riak-CS Question

2013-07-19 Thread Christian Dahlqvist
, Christian On 19 Jul 2013, at 15:32, Vahric Muhtaryan wrote: > attached, thanks > VM > > > On Fri, Jul 19, 2013 at 4:00 PM, Christian Dahlqvist > wrote: > Hi Vahric, > > Please can you send me the logs and configuration files from Riak, Riak-CS > and Stanchion

Re: Riak-CS Question

2013-07-19 Thread Christian Dahlqvist
Hi Vahric, Please can you send me the logs and configuration files from Riak, Riak-CS and Stanchion from all nodes in the cluster? Best regards, Christian On 18 Jul 2013, at 21:45, Vahric Muhtaryan wrote: > Hello All, > > i got such error > > [7/18/13 8:22:24 PM] Vahric MUHTARYAN: 2013-

Re: using 2i with protocol buffers

2013-07-19 Thread Christian Dahlqvist
Hi Joey, When using secondary indexes the index names must end in either `_int` for integer indexes or `_bin` for binary indexes in order for them to be recognised by the system [1]. All indexes must be set when you write the objects. You must also ensure that the bucket you are writing the obj

Re: TCP recv timeout and handoffs almost all the time

2013-07-19 Thread Christian Dahlqvist
Hi Simon, If you have objects that can be as big as 15MB, it is probably wise to increase the size of +zdbbl in order to avoid filling up buffers when these large objects need to be transferred between nodes. What an appropriate level is depends a lot on the size distribution of your data and

Re: mapreduce timeout

2013-07-15 Thread Christian Dahlqvist
Hi Deyan, When running mapreduce jobs, reduce phases often end up being the bottleneck. This is especially true when all input data needs to be gathered on the coordinating node before it can be executed, as is the case if the reduce_phase_only_1 flag is enabled. Having this flag set will cause

Re: Servers keep dying. How to understand why?

2013-05-23 Thread Christian Dahlqvist
nts, which sucks, but we're not > sure what's the best way to deal with them. > > That's it for now :) > > Thanks a lot! > > > > > -- > Got a blog? Make following it simple: https://www.subtome.com/ > > Julien Genestoux, > http://twitter.com/

Re: How to achieve good performance on riak with AWS.

2013-05-19 Thread Christian Dahlqvist
Hi Abhishek, I have a few suggestions that may be worthwhile testing in order to get better performance from your cluster: 1) Ensure you have followed the guidelines for EC2 instance tuning: http://docs.basho.com/riak/latest/cookbooks/Performance-Tuning-AWS/ 2) The Protocol Buffer interface is

Re: Servers keep dying. How to understand why?

2013-05-17 Thread Christian Dahlqvist
o stop the server? > Also, would it make sense to change the backend if we have a lot of delete? > > Thanks, > > On Fri, May 17, 2013 at 2:45 PM, Christian Dahlqvist > wrote: > Hi Julien, > > I believe from an earlier email that you are using bitcask as a backend. Th

Re: Servers keep dying. How to understand why?

2013-05-17 Thread Christian Dahlqvist
rare (maybe 10 times a day at most at > this point...) so I don't think that's our issue. > I'll try to run the cluster without any call to that to see if that's better, > but I'd be very surprised. Also, we were doing this already even > before we allowed for

Re: Riak crashing on Map Reduce

2013-05-17 Thread Christian Dahlqvist
Hi Kurt, A Riak cluster can handle very large amounts of data, and 500 000 000 keys should not be a problem. Riak's MapReduce implementation is however not designed or meant to be used for this type of large bulk processing, so inserting all the data and then periodically performing MapReduce

Re: Riak crashing on Map Reduce

2013-05-15 Thread Christian Dahlqvist
Hi Kurt, In order to be able to provide some feedback on why the mapreduce job might be timing out and try to help you address this, I will need some additional information: - Which version of Riak are you running on? - What does your app.config file look like? - What does your data look like? -

Re: riak crash

2013-05-14 Thread Christian Dahlqvist
Hi Alexander, The node appears to be crashing due to it hitting the Erlang process limit. You can increase this from the default value of 32768, e.g. to double that, by adding the line +P 65536 to the vm.args file. Having said that, I am wondering what is causing you to hit this limit. What

Re: Servers keep dying. How to understand why?

2013-05-14 Thread Christian Dahlqvist
'm convinced we must be doing something > wrong. The question is: what? > > Thanks > > > On Sun, May 12, 2013 at 8:07 PM, Christian Dahlqvist > wrote: > Hi Julien, > > I was not able to access the logs based on the link you provided. > > Could you please atta

Re: Which nodes in a cluster are responsible for hinted handoff, when one node fails?

2013-05-12 Thread Christian Dahlqvist
Hi Christian, One of the reasons we always recommend to have a minimum of 5 nodes in the cluster is that the responsibility for fall-back partitions can be distributed fairly evenly among the remaining nodes if one goes down. With only 4 nodes in the cluster it is much less likely that the resp

Re: Servers keep dying. How to understand why?

2013-05-12 Thread Christian Dahlqvist
Hi Julien, I was not able to access the logs based on the link you provided. Could you please attach a copy of your app.config file so we can get a better understanding of the configuration of your cluster? Also, what is the specification of the machines in the cluster? How much data do you ha

Re: Simple mapreduce with 2i returns different result

2013-04-16 Thread Christian Dahlqvist
Hi Mattias, The following curl query simply counts the number of inputs, and has worked well for me in the past. Can you please run it against the cluster a couple of times and see if it also return varying number of results? curl -XPOST http://localhost:8098/mapred -H 'Content-Type: applica

Re: stream_list_keys

2013-04-16 Thread Christian Dahlqvist
Hi Tom, Here is an example that runs from the Riak console and dumps all keys of a bucket to a file and shows how stream_list_keys can be used: -module(keylister). -export([list_keys_to_file/2]). -define(TIMEOUT, 1000). %% @spec list_keys_to_file(binary(), string()) -> %% ok | {e

Re: Simple mapreduce with 2i returns different result

2013-04-12 Thread Christian Dahlqvist
eems to happen for > all our MapReduce queries. The bucket in the example have both allow_mult and > last_write_wins set to false. > > Regards > Mattias > > > 2013/4/12 Christian Dahlqvist > Hi Mattias, > > MapReduce in Riak executes based on the data in a s

Re: Advice for storing records in Riak

2013-04-12 Thread Christian Dahlqvist
Hi Toby, Inserting lots of small records in Riak and querying the full data set via MapReduce is definitely not the best way to go about things. As Alexander points out, each object is stored with metadata, which adds some overhead, and Riak MapReduce tends to work best when run over smaller da

Re: Am I misunderstand read and write quorum? Or am I losing writes?

2013-04-12 Thread Christian Dahlqvist
Hi Rob, If you print the fields in the document you retrieved from Riak, does it appear to be an older version? Could there perhaps be some problem with serialisation/de-serialisation of the data? Regards, Christian On 12 Apr 2013, at 07:30, Erik Søe Sørensen wrote: > Is there any chance

Re: Simple mapreduce with 2i returns different result

2013-04-12 Thread Christian Dahlqvist
Hi Mattias, MapReduce in Riak executes based on the data in a single partition and does, for efficiency reasons, not perform a quorum read (which greatly reduces the required amount of network traffic). As Riak is eventually consistent, it is possible that all partitions do not hold exactly the

Re: Minimal number of nodes for production

2013-04-11 Thread Christian Dahlqvist
Hi Daniel, If you have 3 nodes in the cluster, you should not lose any data if one node goes down, but you may experience that some records, for which 2 replicas are gone, will return false not-founds before read-repair can fix it. If you therefore retry whenever you do not find a key that you
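The retry-on-not-found strategy suggested here can be sketched as a small client-side helper. `fetch` is a hypothetical stand-in for whatever client call you actually use; each repeated read also gives read-repair a chance to restore the missing replicas:

```python
# Sketch of a client-side retry for transient not-founds: treat a
# missing key that should exist as possibly transient, re-read a few
# times (each read also triggers read-repair in Riak), then give up.
# `fetch` is a placeholder for your client's get call; it should
# return None when the key is not found.
import time

def get_with_retry(fetch, key, attempts=3, delay=0.05):
    for i in range(attempts):
        value = fetch(key)
        if value is not None:
            return value
        if i < attempts - 1:
            time.sleep(delay)   # give read-repair a moment to run
    return None
```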

Re: Riak 2i http query much faster than python api?

2013-04-11 Thread Christian Dahlqvist
Hi Jeff, Segmenting the keys based on some random number will allow you to run smaller jobs, but if you still need to run all the batches in order to get a result, I am not sure you have gained much. Exactly what the best way to process and prepare data depends highly on your use case and what

Re: Delete bucket(s) by deleting bitcask files?

2013-04-08 Thread Christian Dahlqvist
Hi Christian, As long as you stop the nodes first, deleting all files from the /var/lib/riak/bitcask directory is a perfectly valid way to clear out the database. Best regards, Christian On 8 Apr 2013, at 10:45, Christian Steinmann wrote: > Hi, > > i run some tests with riak for a study

Re: Map phase timeout

2013-04-08 Thread Christian Dahlqvist
/references/appendices/MapReduce-Implementation/#Configuration-Tuning-for-Javascript Christian Dahlqvist Client Services Engineer Basho Technologies EMEA Office E-mail: christ...@basho.com Skype: c.dahlqvist Mobile: +44 7890 590 910 On 8 Apr 2013, at 08:20, Matt Black wrote

Re: Map phase timeout

2013-04-08 Thread Christian Dahlqvist
Hi, Without having access to the mapreduce functions you are running, I would assume that a mapreduce job both writing data to disk as well as deleting the written record from Riak might be quite slow. This is not really a use case mapreduce was designed for, and when a mapreduce job crashes or

Re: Map reduce weirdness on Riak 1.3

2013-04-06 Thread Christian Dahlqvist
? > > > > > On Sat, Apr 6, 2013 at 4:09 PM, Christian Dahlqvist > wrote: > > Hi Kartik, > > The reduce phase will normally run recursively a number of times as results > come in from map phases on different nodes [1]. This allows Riak to start > reducing

Re: Map reduce weirdness on Riak 1.3

2013-04-06 Thread Christian Dahlqvist
e happens after the whole map is done. Do I > have to implement a count waiter for the map results in the reduce code? > > > On Sat, Apr 6, 2013 at 3:19 PM, Christian Dahlqvist > wrote: > Hi Kartik, > > What you are seeing is a result of you not accounting for re-reduc

Re: Map reduce weirdness on Riak 1.3

2013-04-06 Thread Christian Dahlqvist
Hi Kartik, What you are seeing is a result of you not accounting for re-reduce in your reduce phase function. In Riak reduce phases generally run recursively and the input for each run may contain both values from the preceding map phase as well as output from previous iterations of the reduce pha
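The re-reduce requirement means a reduce function must accept its own previous output mixed in with fresh map results. A language-agnostic sketch in Python of a re-reduce-safe count, where raw map outputs are plain values and reduce outputs are tagged pairs:

```python
# Sketch of a re-reduce-safe reduce phase: the function may be called
# again with its own previous output mixed into the input, so outputs
# must be distinguishable from raw map results and combinable. Here
# raw map outputs are plain values; reduce outputs are ("count", n).

def reduce_count(inputs):
    total = 0
    for item in inputs:
        if isinstance(item, tuple) and item and item[0] == "count":
            total += item[1]   # fold in a previous reduce output
        else:
            total += 1         # count a raw map output
    return [("count", total)]
```

A reduce function that simply counted its inputs would double-count on re-reduce; tagging the partial results is what keeps the recursion correct.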

Re: The riak-contrib reduce phase to delete Bucket/Key pairs returns many "not_found" exceptions

2013-03-24 Thread Christian Dahlqvist
Hi, The error message indicates that the map phase function is not handling notfounds correctly. The map phase function needs to be modified to handle this, which can be accomplished as follows: get_keys({error, notfound}, _, _) -> []; get_keys(Value,_Keydata,_Arg) -> [{riak_object:bucket(

Re: Need help with moving to Riak

2013-03-22 Thread Christian Dahlqvist
://basho.com/schema-design-in-riak-relationships/ I hope this helps. Best regards, Christian On 22 Mar 2013, at 06:38, Max Lapshin wrote

Re: MapReduce scalability

2013-02-28 Thread Christian Dahlqvist
, there will > be one local read in the node receiving the subset and only one more read in > another node that holds a copy. Then this distributed processing can handle > read-repair, aggregate data and send the result to the coordinating node. > > Best Regards, > >

Re: MapReduce scalability

2013-02-28 Thread Christian Dahlqvist
ere's too many such "page" requests. So what would be the > proper way to deal with the situation when we need to emulate multiple key > retrieval? > > On Tue, Feb 26, 2013 at 1:57 AM, Christian Dahlqvist > wrote: > Hi Boris, > > MapReduce is a very fl

Re: MapReduce scalability

2013-02-25 Thread Christian Dahlqvist
Hi Boris, MapReduce is a very flexible and powerful way of querying Riak and allows processing to be performed locally where the data resides, which allows for efficient processing of larger data sets. A result of this is that every mapreduce job requires a covering set of vnodes (all vnodes th

Re: Riak - low throughput

2013-02-21 Thread Christian Dahlqvist
Hi, Does this mean that you are using a single connection to each node in your test? If this is the case I would recommend increasing the number of threads as Dmitri Zagidulin recommended in his response in order to get better throughput. Also, what is the size of the objects you are writing?

Re: 2i: url encoding/decoding of index names/values

2013-02-20 Thread Christian Dahlqvist
Hi Age, One thing to consider is that Riak allows a secondary index to have multiple values. I can e.g. create an object with two values for an integer secondary index as follows: curl -X PUT -H "Content-Type: text/plain" -H 'x-riak-index-idx_int: 12' -H 'x-riak-index-idx_int: 13' -d 'd

Re: Secondary index maintenance

2013-02-20 Thread Christian Dahlqvist
Hi Theo, The Riak HTTP client for Erlang uses the 'riakc_obj' from the PB client to represent records. You can therefore use any utility functions available there to manipulate metadata. The HTTP client for Erlang does not, however, currently support secondary indexes [1], meaning that these will

Re: ListKeys or MapReduce

2013-02-14 Thread Christian Dahlqvist
. > OJ > > On Thu, Feb 14, 2013 at 6:19 PM, Christian Dahlqvist > wrote: > Hi, > > For buckets with a significant number of records, it makes a lot of sense to > run the example I provided with 'do_prereduce' enabled as it will result in > considerably

Re: ListKeys or MapReduce

2013-02-14 Thread Christian Dahlqvist
{"language":"erlang", "module":"riak_kv_mapreduce", "function":"reduce_count_inputs", "arg":{"do_prereduce":true}}}]}' Best regards, Christian On 1

Re: ListKeys or MapReduce

2013-02-14 Thread Christian Dahlqvist
t" and Google > says "No results found for site:docs.basho.com $bucket." > > --- > Jeremiah Peschka - Founder, Brent Ozar Unlimited > MCITP: SQL Server 2008, MVP > Cloudera Certified Developer for Apache Hadoop > > > On Wed, Feb 13, 2013 at 10:08 AM, Chri

Re: ListKeys or MapReduce

2013-02-13 Thread Christian Dahlqvist
Hi, In addition to the $key index, there is also a $bucket index available by default. This contains the name of the bucket, and can be used to get all keys in a specific bucket. Best regards, Christian On 12 Feb 2013, at 22:39, Jeremiah Peschka wrote: > As best as I understand the magical

Re: Nightly Prune

2013-02-08 Thread Christian Dahlqvist
Hi, I would strongly advise against setting up a portion of your nodes with memory backend and using W=1 in order to speed up writes as you will run the risk of losing data, especially in failure scenarios where one of the nodes you rely on for writing to disk fails. As Riak manages spreading o

Re: big cache vs. many partitions and replicas placement

2013-01-28 Thread Christian Dahlqvist
Hi Simon, By cache size, I am assuming you are referring to the leveldb internal cache. Is this correct? The arguably most important parameter in your configuration is the ring size. Defining an appropriate ring size is very important as it can't change later on and determines how far you can

Re: Using the Local Client from a riak-attach session

2013-01-18 Thread Christian Dahlqvist
<<"bucket">>, <<"index_int">>, <<"1">>, <<"50">>}, [{reduce, {modfun, 'riak_kv_mapreduce', 'reduce_count_inputs'}, none, true}]). Hope this helps. Best regards, Christ

Re: Looking for good tutorial on writing Erlang Map/Reduce queries?

2013-01-16 Thread Christian Dahlqvist
Hi Derek, The best place to start is probably the mapreduce functions officially included in Riak (https://github.com/basho/riak_kv/blob/master/src/riak_kv_mapreduce.erl) if you have not already found these. These do however not make use of arguments. If you are looking for an example of how a

Re: Map function in erlang that takes entire bucket as input?

2013-01-10 Thread Christian Dahlqvist
right hand side value > {error,<<"{inputs,{\"Inputs target tuples must be {B,K} or > {{B,K},KeyData}:\",\n [<<\"groceries\">>]}}">>} > > > On Thu, Jan 10, 2013 at 9:43 AM, Shaan Sapra wrote: > Ah thank you! > > > On Th

Re: Map function in erlang that takes entire bucket as input?

2013-01-10 Thread Christian Dahlqvist
Hi Shaan, The riakc_pb_socket:mapred function can take several different types of input: a bucket name, a list of bucket/key pairs or a secondary index query specification. If you wanted to run the example in the tutorial based on all keys in the groceries bucket instead of having to specify

Re: Same MR query, different results every run...........

2013-01-08 Thread Christian Dahlqvist
Hi David, Is it always the same entry that is missing from the result set? If so, does the issue go away if you issue a read request for the record(s) causing problems (resulting in read-repair)? If this is the case, the cause of the problem might be explained by how MapReduce works in Riak.

Re: Erlang API for MapReduce

2012-12-21 Thread Christian Dahlqvist
Hi, The client you are trying to use is the internal Riak client. This will work if you attach [1] to the Riak console and want to query Riak. If you are writing an application to be run from outside Riak, you should instead use the Riak PB Erlang Client [2]. Best regards, Christian [1] htt

Re: How to delete the bucket?

2012-11-19 Thread Christian Dahlqvist
Apart from retrieving all keys and deleting the records from the client, you can delete them through a map reduce job. I created a map phase function that deletes all keys passed in, and it is available in my map reduce utilities library (https://github.com/whitenode/riak_mapreduce_utils). There

Re: How to make Riak work faster (writing)

2012-11-02 Thread Christian Dahlqvist
On 02/11/2012 19:10, Uruka Dark wrote: Hi Dimitri, I don't know why, but I could not receive your reply. I saw it following the url of the mailing list. Any way, thank you for your reply. This is my PHP script: isAlive()) { echo "$str - ALIVE\n"; } else { echo "$str - DEAD\n"; }

Re: riakc doesn't save metadata properly?

2012-10-22 Thread Christian Dahlqvist
On 22/10/2012 22:25, David Parfitt wrote: Hello Metin - I'm trying to get that into the docs right now. Will italics do? :-) https://github.com/basho/riak-erlang-client/pull/74 Cheers - Dave On Mon, Oct 22, 2012 at 5:22 PM, Metin Akat wrote: Oh, I see, thanks for this clarification, much app

Re: riak memstore clarification on enomem error

2012-10-10 Thread Christian Dahlqvist
t of yours. Regards sangeetha -Original Message----- From: Christian Dahlqvist [mailto:christ...@whitenode.com] Sent: Tuesday, October 09, 2012 3:57 PM To: Pattabi Raman, Sangeetha (Cognizant) Cc: sh...@mcewan.id.au; riak-users@lists.basho.com Subject: Re: riak memstore clarification on enomem error

Re: riak memstore clarification on enomem error

2012-10-09 Thread Christian Dahlqvist
On 09/10/2012 10:39, sangeetha.pattabiram...@cognizant.com wrote: Thanks Shane , Load script used is as follows (basically a curl) #!/usr/local/bin/escript main([Filename]) -> {ok, Data} = file:read_file(Filename), Lines = tl(re:split(Data, "\r?\n", [{return, binary},trim])), lis

Re: links vs 2i

2012-09-21 Thread Christian Dahlqvist
Hello Timo, I recently played around with using secondary indexes instead of links, and since I did not find any mapreduce functions that allowed me to follow 2i "links", I wrote a couple myself. These are available on my GitHub account and I also