Hi there,
In an AQL query, I need to iterate over all docs connected to a starting
doc through a given edge type, plus the starting doc itself. The way I'm
doing this currently is:
LET followings = (FOR f IN 1 OUTBOUND 'users/123' follows RETURN f)
FOR following IN UNION(followings, [
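The tail of the query is truncated in the archive; as a hedged sketch, one way to complete the pattern described above is to UNION the traversal results with the start document fetched via DOCUMENT() (the names 'users/123' and `follows` come from the snippet itself):

```aql
// Sketch only: combine the 1-step traversal with the start doc itself.
LET followings = (FOR f IN 1 OUTBOUND 'users/123' follows RETURN f)
FOR following IN UNION(followings, [DOCUMENT('users/123')])
  RETURN following
```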
Hi again!
Another weird thing I just witnessed; I had the following Foxx route which
was working fine:
controller.post('/:issuerId/rejectFriendRequest', function (req, res) {
var response = new common.Response();
var issuerId = req.params('issuerId');
var sourceUserId =
ebug'],
> write: ['foxxdebug']
> },
> allowImplicit: false,
> action: function () {
>...
> });
>
> I think in your code example the flag is set in some nested level, inside
> the "collections" attribute, from where it will not be picked up.
>
> Best regards
Hi everyone,
- I’m using my Mac for dev and run Arango on an Ubuntu VM for staging; I’ve
noticed that with 8 (empty) collections, Arango reports more than 600MB of
memory usage on Ubuntu, whereas it’s only 150MB on my Mac… any reason for
that difference?
- The number of client connections
Hi everyone,
I'm trying to build an AQL query that iterates over edges that point to
objects from 2 different collections (say, A and B). The goal is to return
a list of those heterogeneous objects, and within the query I have to
perform some specific projections (depending on whether they are A
Thanks a lot for this very helpful answer!
Yes indeed, I would be interested to hear about other ways to achieve such
a heterogeneous "feed" of documents. I may have to deal with more than 2
source collections in the future, so using ternary operators for that will
get cumbersome. Also, I'm open
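For reference, a minimal sketch of the ternary-projection pattern discussed here; the collection names (A, B), the start vertex, and the attribute names are illustrative, not taken from the original thread:

```aql
FOR v IN 1 OUTBOUND 'users/123' edges
  // Project differently depending on which collection the vertex comes from.
  RETURN IS_SAME_COLLECTION('A', v)
    ? { type: 'A', title: v.title }
    : { type: 'B', name: v.name }
```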
Hi everyone,
I'm having some issues after upgrading from 3.0.0 to 3.0.4. Here is the
process I followed:
- sudo /etc/init.d/arangodb3 stop
- sudo dpkg --purge arangodb3
- dpkg -i arangodb3_3.0.4_amd64.deb
- sudo /etc/init.d/arangodb3 stop
- sudo cp dbdata/arangodb.conf
Hi there,
Following the recent issues I had with 2.8, I'm currently evaluating the
migration to 3.0 so I'm in the process of porting my Foxx app.
I've just noticed that when a db insert violates a unique constraint on an
index, the route now returns a 500 Internal Server Error, whereas it
ewhat more restrictive
> in that regard. The startup timing and how we drop down to the arangodb
> user's rights have changed since.
>
> Kind regards,
> Kaveh
>
> On Friday, July 1, 2016 at 8:01:53 AM UTC+2, Thomas Weiss wrote:
>>
>> Hi there,
>>
>> I'm trying to run
Hi there,
I'm trying to run 3.0 on an Ubuntu VM and it worked well until I tried to
locate the databases in a different location.
I modified arangod.conf like:
[database]
directory = /dbdata
(which worked fine with 2.8 btw)
but now I get an error when I *sudo /etc/init.d/arangodb3 start*, and
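The original message is truncated here, but based on the replies later in this thread (3.0 starts up under the dedicated arangodb user), the usual fix is a permissions one. A hedged sketch, assuming the default arangodb service user and group:

```shell
# Give the arangodb service user ownership of the relocated data directory.
sudo chown -R arangodb:arangodb /dbdata
```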
> Sorry for the confusion, it wasn't fully obvious to me in the beginning,
> and I was convinced that `allowImplicit` belongs on the top-level, but you
> were right.
> Best regards
> Jan
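To summarize the resolution of this sub-thread: per the ArangoDB transaction docs, `allowImplicit` is a sub-attribute of `collections`, not a top-level option. A sketch in arangosh terms, with the `foxxdebug` collection name taken from the quoted code:

```js
db._executeTransaction({
  collections: {
    read: ['foxxdebug'],
    write: ['foxxdebug'],
    allowImplicit: false  // forbid access to collections not declared above
  },
  action: function () {
    // transaction body
  }
});
```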
>
> On Wednesday, June 29, 2016 at 10:15:47 UTC+2, Thomas Weiss wrote:
>>
>> T
I can answer this! :)
I'm a one-man shop as well, working on small-to-medium web projects.
ArangoDB has been my choice for my last 3 projects and I don't regret it at
all.
You get the flexibility of schemaless combined with transactions and
joins... From that perspective, it's just the best
Well, slow adoption is mostly due to safety concerns and is rarely related
to features or how good the product is. This is even more true for
databases, which are the most critical parts of any system.
Big companies stick with RDBMS because those have been around for ages,
have proved their robustness, are
Hi everyone,
Quick question about the behavior of the LIMIT operator. Here is my
playground (using 3.0.12):
FOR i IN [1,2,3,4]
  FOR j IN APPEND([i],[5,6,7])
    RETURN j
returns
[ 1, 5, 6, 7, 2, 5, 6, 7, 3, 5, 6, 7, 4, 5, 6, 7 ]
as expected.
FOR i IN [1,2,3,4]
  FOR j IN APPEND([i],[5,6,7])
    LIMIT 2,4
> Regards,
>
> On Thursday, August 18, 2016 at 9:15:58 (UTC-3), Thomas Weiss wrote:
>>
>> Hi everyone,
>>
>> Continuing on a topic I briefly mentioned before on Slack... I'm
>> currently prototyping some event-sourcing on ArangoDB; I think ADB is a
>
(those
> that will be accessed dynamically) in a `WITH` clause at the beginning of
> an AQL query, e.g.
>
> WITH extracollection1, extracollection2
> FOR doc IN collection
> ...
>
> Best regards
> Jan
>
>
>
> On Wednesday, April 5, 2017 at 04:59:53 UT
Working on a social network project, I've recently realized that the way I
was fetching users' feed was not scalable:
FOR f IN 1 OUTBOUND 'users/' follows
  FOR c, p IN 1 OUTBOUND f hasPublished
    SORT p.publishedAt
    RETURN c
As the number of users and published content grows, this query would always
A workaround for avoiding deadlocks is to specify extra collections (those
> that will be accessed dynamically) in a `WITH` clause at the beginning of
> an AQL query, e.g.
>
> WITH extracollection1, extracollection2
> FOR doc IN collection
> ...
>
> Best regar
Hi everyone,
I just wanted to share with you my recent experience in troubleshooting
strange problems.
Background: This project uses Foxx where most of the app logic is
implemented. From Foxx functions, I used the request module to post events
to Azure Table Storage.
Everything was really
ut I am not aware of any changes that introduced new
>> issues there. And TLS support should have been there in 3.0 already. So I
>> am wondering if you could provide some more info on this.
>>
>> Thanks!
>> Jan
>>
>> On Monday, April 17, 2017 at 10
n other ASCII characters except TAB (which you can
> probably ignore) and the space character.
>
> The other solution posted by Scott will also work, but it will not use an
> index, because a function is applied on the indexed values before
> comparison.
>
> Best regards
>
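A hedged sketch of the index-friendly range-scan approach described above, assuming a pre-lowercased copy of the field (here called `nameLower`, an illustrative name) with a skiplist index on it:

```aql
// Range scan emulating a case-insensitive prefix search; no function is
// applied to the indexed attribute, so the skiplist index stays usable.
FOR u IN users
  FILTER u.nameLower >= LOWER(@prefix)
     AND u.nameLower <  CONCAT(LOWER(@prefix), '\uFFFF')
  RETURN u.name
```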
Hi there,
I've just seen in my logs that requests have failed with the "deadlock
detected (while executing)" message, but those requests were all read-only
AQL queries. How can such requests be involved in a deadlock?
Thanks,
Thomas
Hi everyone,
A nice little issue that I've just encountered! :)
I need to provide a case-insensitive search on some string field, where
matching the beginning of the string should suggest the different results.
E.g. both 'thom' and 'Thom' should find 'Thomas'.
Initially, I used a full-text index and it worked
use the
skiplist to target the userId AND sort. Your comments would be very
appreciated here!
Thanks,
Thomas
On Sunday, April 9, 2017 at 8:19:21 PM UTC+8, Thomas Weiss wrote:
>
> Working on a social network project, I've recently realized that the way I
> was fetching users' feed was
ndex them with the skiplist index on
> ["userId", "time"].
> The query then becomes
>
> FOR s IN shares
>   FILTER s.userId == 'userId'
>   SORT s.time DESC
>   RETURN s
>
> That should also allow usage of the skiplist index.
> Best regards
> Jan
>
>
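As a companion to the quoted advice: the compound skiplist index Jan mentions can be created from arangosh. A sketch, assuming a `shares` collection as in the query above:

```js
// Compound skiplist index so the query can both filter on userId and
// sort on time using the same index.
db.shares.ensureIndex({ type: "skiplist", fields: ["userId", "time"] });
```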
s
> <https://docs.arangodb.com/3.1/Manual/Foxx/Scripts.html#deleting-a-job-from-the-queue>).
>
> This can also be done with a periodic job.
>
> Best
> Mark
>
> On Tuesday, April 25, 2017 at 04:05:39 UTC+2, Thomas Weiss wrote:
>>
>> Hi everyone,
>>
>>
Hi everyone,
I would like to get your comments concerning the very high memory usage I'm
seeing on my production server.
Some numbers:
- Running Ubuntu 16.04 and ArangoDB 3.1.15
- 5 document collections, 12 edge collections
- Around 200,000 documents
- A dump takes approximately 50MB of disk
Hi everyone,
Still working on my social network project (I swear I will blog about how
it uses Arango once things get less busy!), I'm now considering using Foxx
queues to asynchronously denormalize the users' feed. Some questions:
1. I guess that queues are stored in system collections, so by
(but again, steadily growing since then although no one uses the
DB).
<https://lh3.googleusercontent.com/-7LBfny5TIo8/WQftZM3_b-I/AJw/yKZQhS_A2iYf5PjkLQUdGu_O7Y394es5gCLcB/s1600/Screen%2BShot%2B2017-05-02%2Bat%2B10.15.51%2BAM.png>
On Thursday, April 27, 2017 at 6:02:11 PM UTC+8, Thomas
emove the graphs you see in the webinterface)
>
> --server.statistics false
>
>
> If you don't use foxx queues, you can also disable them:
> --foxx.queues false
>
> You should then see that this behaviour stops.
>
> Cheers,
> Willi
>
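For completeness, the two startup options quoted above can also be set in arangod.conf; a sketch, assuming the standard option-to-section mapping of `--server.statistics` and `--foxx.queues`:

```
[server]
statistics = false

[foxx]
queues = false
```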
> Since
> On Thursday,