The SparkContext reference is transient.
On Fri, Dec 26, 2014 at 6:11 PM, Alessandro Baretta
wrote:
> How, O how can this be? Doesn't the SQLContext hold a reference to the
> SparkContext?
>
> Alex
>
"Hari Shreedharan"
wrote:
> In general such discussions happen or is posted on the dev lists. Could
> you please post a summary? Thanks.
>
> Thanks,
> Hari
>
>
> On Wed, Dec 24, 2014 at 11:46 PM, Cody Koeninger
> wrote:
>
>> After a long talk with Patr
:
> yup, we at tresata do the idempotent store the same way. very simple
> approach.
>
> On Fri, Dec 19, 2014 at 5:32 PM, Cody Koeninger
> wrote:
>>
>> That KafkaRDD code is dead simple.
>>
>> Given a user specified map
>>
>> (topic1, parti
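The truncated snippet above describes building a KafkaRDD from a user-specified map of topic/partition to offset ranges. A minimal sketch of that bookkeeping, with illustrative names (`OffsetRange` here is a stand-in, not the actual Spark API):

```scala
// Hypothetical sketch: one RDD partition is defined by a topic, a Kafka
// partition id, and an inclusive-from / exclusive-until offset range.
case class OffsetRange(topic: String, partition: Int, fromOffset: Long, untilOffset: Long) {
  // Message count is known before the job runs -- this is what makes
  // the resulting RDD deterministic.
  def count: Long = untilOffset - fromOffset
}

object KafkaRddSketch {
  // User-specified map: (topic, partition) -> (fromOffset, untilOffset)
  def ranges(offsets: Map[(String, Int), (Long, Long)]): Seq[OffsetRange] =
    offsets.toSeq.map { case ((t, p), (from, until)) => OffsetRange(t, p, from, until) }
}
```

For example, `KafkaRddSketch.ranges(Map(("topic1", 0) -> (100L, 200L)))` yields one partition covering offsets 100 through 199.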
[
https://issues.apache.org/jira/browse/SPARK-4964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14258670#comment-14258670
]
Cody Koeninger commented on SPARK-4964:
---
Usage example of the dstream for
Cody Koeninger created SPARK-4964:
-
Summary: Exactly-once semantics for Kafka
Key: SPARK-4964
URL: https://issues.apache.org/jira/browse/SPARK-4964
Project: Spark
Issue Type: Improvement
Is there a reason not to go ahead and move the _cache and _lock files
created by Utils.fetchFiles into the work directory, so they can be cleaned
up more easily? I saw comments to that effect in the discussion of the PR
for 2713, but it doesn't look like it got done.
And no, I didn't just have a
it
> might help.
>
> Thanks,
> Hari
>
>
> On Fri, Dec 19, 2014 at 1:48 PM, Cody Koeninger
> wrote:
>
>>
>> The problems you guys are discussing come from trying to store state in
>> spark, so don't do that. Spark isn't a distributed database
[
https://issues.apache.org/jira/browse/SPARK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14254100#comment-14254100
]
Cody Koeninger commented on SPARK-3146:
---
This is a real problem for production
t implementation of ReliableKafkaReceiver cannot fully guarantee
> the
> >>> exact once semantics once failed, first is the ordering of data
> replaying
> >>> from last checkpoint, this is hard to guarantee when multiple
> partitions
> >>> are injected in; s
[
https://issues.apache.org/jira/browse/SPARK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252397#comment-14252397
]
Cody Koeninger commented on SPARK-3146:
---
((K, V), (topicAndPartition, offset)
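The `((K, V), (topicAndPartition, offset))` shape quoted above pairs each Kafka message with its topic/partition and offset. Carrying the offset alongside the record is what enables the idempotent store discussed elsewhere in these threads: writes keyed by (topicAndPartition, offset) can be replayed safely. A hedged sketch, with a plain mutable map standing in for a real database:

```scala
case class TopicAndPartition(topic: String, partition: Int)

object IdempotentStoreSketch {
  // Store keyed by (topicAndPartition, offset): replaying the same batch
  // overwrites the same rows instead of inserting duplicates.
  def storeAll[K, V](
      store: scala.collection.mutable.Map[(TopicAndPartition, Long), (K, V)],
      records: Seq[((K, V), (TopicAndPartition, Long))]): Unit =
    records.foreach { case (kv, key) => store(key) = kv }
}
```

Replaying the same batch after a failure leaves the store unchanged, which is the whole point of the approach.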
lly possible to guarantee - though I really would
> love to have that!
>
> Thanks,
> Hari
>
>
> On Thu, Dec 18, 2014 at 12:26 PM, Cody Koeninger
> wrote:
>
>> Thanks for the replies.
>>
>> Regarding skipping WAL, it's not just about optimization.
[
https://issues.apache.org/jira/browse/SPARK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252299#comment-14252299
]
Cody Koeninger commented on SPARK-3146:
---
Yes, for the specific case of k
[
https://issues.apache.org/jira/browse/SPARK-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252261#comment-14252261
]
Cody Koeninger commented on SPARK-4122:
---
+1 for the idea of making this write
[
https://issues.apache.org/jira/browse/SPARK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252210#comment-14252210
]
Cody Koeninger commented on SPARK-3146:
---
(1) is important bec
this recently so it's worth
>> revisiting given the developments in Kafka.
>>
>> Please do bring things up like this on the dev list if there are
>> blockers for your usage - thanks for pinging it.
>>
>> - Patrick
>>
>> On Thu, Dec 18, 2014 at 7:07 AM, C
[
https://issues.apache.org/jira/browse/SPARK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252146#comment-14252146
]
Cody Koeninger commented on SPARK-3146:
---
From my point of view, the inte
Now that 1.2 is finalized... who are the go-to people to get some
long-standing Kafka related issues resolved?
The existing API is neither sufficiently safe nor flexible for our production
use. I don't think we're alone in this viewpoint, because I've seen
several different patches and libraries to
Do you actually need spark streaming per se for your use case? If you're
just trying to read data out of kafka into hbase, would something like this
non-streaming rdd work for you:
https://github.com/koeninger/spark-1/tree/kafkaRdd/external/kafka/src/main/scala/org/apache/spark/rdd/kafka
Note th
For an alternative take on a similar idea, see
https://github.com/koeninger/spark-1/tree/kafkaRdd/external/kafka/src/main/scala/org/apache/spark/rdd/kafka
An advantage of the approach I'm taking is that the lower and upper offsets
of the RDD are known in advance, so it's deterministic.
I haven't
I'm wondering why
https://issues.apache.org/jira/browse/SPARK-3638
only updated the version of http client for the kinesis-asl profile and
left the base dependencies unchanged.
Spark built without that profile still has the same
java.lang.NoSuchMethodError:
org.apache.http.impl.conn.DefaultClie
My 2 cents:
Spark since pre-Apache days has been the most friendly and welcoming open
source project I've seen, and that's reflected in its success.
It seems pretty obvious to me that, for example, Michael should be looking
at major changes to the SQL codebase. I trust him to do that in a way
th
Opened
https://issues.apache.org/jira/browse/SPARK-4229
Sent a PR
https://github.com/apache/spark/pull/3102
On Tue, Nov 4, 2014 at 11:48 AM, Marcelo Vanzin wrote:
> On Tue, Nov 4, 2014 at 9:34 AM, Cody Koeninger wrote:
> > 2. Is there a reason StreamingContext.getOrCreate defaults t
[
https://issues.apache.org/jira/browse/SPARK-4229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Cody Koeninger updated SPARK-4229:
--
Description:
Some places use SparkHadoopUtil.get.conf, some create a new hadoop config
Cody Koeninger created SPARK-4229:
-
Summary: Create hadoop configuration in a consistent way
Key: SPARK-4229
URL: https://issues.apache.org/jira/browse/SPARK-4229
Project: Spark
Issue Type
[
https://issues.apache.org/jira/browse/SPARK-4196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14196641#comment-14196641
]
Cody Koeninger commented on SPARK-4196:
---
Have you tried repla
3 quick questions, then some background:
1. Is there a reason not to document the fact that spark.hadoop.* is
copied from spark config into hadoop config?
2. Is there a reason StreamingContext.getOrCreate defaults to a blank
hadoop configuration rather than
org.apache.spark.deploy.SparkHadoopUt
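Question 1 above concerns `spark.hadoop.*` keys being copied from the Spark config into the Hadoop configuration (the behavior behind `SparkHadoopUtil.get.conf`). A sketch of that prefix-stripping copy over plain maps, to make the behavior concrete (illustrative only, not the actual implementation):

```scala
object HadoopConfSketch {
  val Prefix = "spark.hadoop."

  // Every spark.hadoop.foo=bar entry in the Spark config becomes
  // foo=bar in the resulting Hadoop configuration; other keys are ignored.
  def copyHadoopKeys(sparkConf: Map[String, String]): Map[String, String] =
    sparkConf.collect {
      case (k, v) if k.startsWith(Prefix) => (k.stripPrefix(Prefix), v)
    }
}
```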
[
https://issues.apache.org/jira/browse/SPARK-3146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14195146#comment-14195146
]
Cody Koeninger commented on SPARK-3146:
---
I think this PR is an elegant way to s
[
https://issues.apache.org/jira/browse/MESOS-123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14173810#comment-14173810
]
Cody Koeninger commented on MESOS-123:
--
In the meantime, can we at least modify t
[
https://issues.apache.org/jira/browse/SPARK-3851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14164191#comment-14164191
]
Cody Koeninger commented on SPARK-3851:
---
So I have a couple of questions
1.
On Wed, Oct 8, 2014 at 3:19 PM, Michael Armbrust
wrote:
>
> I was proposing you manually convert each different format into one
> unified format (by adding literal nulls and such for missing columns) and
> then union these converted datasets. It would be weird to have union all
> try and do thi
hange the on-disk
>> sstable format, it will do a convert-on-read as you access the sstables,
>> or
>> you can run the upgradesstables command to convert them all at once
>> post-upgrade.
>>
>> Andrew
>>
>> On Fri, Oct 3, 2014 at 4:33 PM, Cody Koening
Wondering if anyone has thoughts on a path forward for parquet schema
migrations, especially for people (like us) that are using raw parquet
files rather than Hive.
So far we've gotten away with reading old files, converting, and writing to
new directories, but that obviously becomes problematic a
you tell us more about your configuration. In particular how much
> memory/cores do the executors have and what does the schema of your data
> look like?
>
> On Tue, Sep 23, 2014 at 7:39 AM, Cody Koeninger
> wrote:
>
>> So as a related question, is there any reason the settings in S
or
spark sql to ignore those.
On Mon, Sep 22, 2014 at 4:34 PM, Cody Koeninger wrote:
> After commit 8856c3d8 switched from gzip to snappy as default parquet
> compression codec, I'm seeing the following when trying to read parquet
> files saved using the new default (same schema and
After commit 8856c3d8 switched from gzip to snappy as default parquet
compression codec, I'm seeing the following when trying to read parquet
files saved using the new default (same schema and roughly same size as
files that were previously working):
java.lang.OutOfMemoryError: Direct buffer memor
Optional.class file
> from the Spark assembly you're using.
>
> On Mon, Sep 22, 2014 at 12:46 PM, Cody Koeninger
> wrote:
> > We're using Mesos, is there a reasonable expectation that
> > spark.files.userClassPathFirst will actually work?
> >
> >
hould then
> work.
>
> I'll investigate a way to fix it in Spark in the meantime.
>
>
> On Fri, Sep 19, 2014 at 10:30 PM, Cody Koeninger
> wrote:
> > After the recent spark project changes to guava shading, I'm seeing
> issues
> > with the d
file.
On Mon, Sep 22, 2014 at 10:54 AM, Sandy Ryza
wrote:
> Thanks for the heads up Cody. Any indication of what was going wrong?
>
> On Mon, Sep 22, 2014 at 7:16 AM, Cody Koeninger
> wrote:
>
>> Just as a heads up, we deployed 471e6a3a of master (in order to get some
>
Just as a heads up, we deployed 471e6a3a of master (in order to get some
sql fixes), and were seeing jobs fail until we set
spark.shuffle.manager=HASH
I'd be reluctant to change the default to sort for the 1.1.1 release
After the recent spark project changes to guava shading, I'm seeing issues
with the datastax spark cassandra connector (which depends on guava 15.0)
and the datastax cql driver (which depends on guava 16.0.1)
Building an assembly for a job (with spark marked as provided) that
includes either guava
I noticed that the release notes for 1.1.0 said that spark doesn't support
Hive buckets "yet". I didn't notice any jira issues related to adding
support.
Broadly speaking, what would be involved in supporting buckets, especially
the bucketmapjoin and sortedmerge optimizations?
Wed, Sep 10, 2014 at 9:31 AM, Cody Koeninger
> wrote:
>
>> Tested the patch against a cluster with some real data. Initial results
>> seem like going from one table to a union of 2 tables is now closer to a
>> doubling of query time as expected, instead of 5 to 10x.
>>
to do it. I'll see
> about testing performance against some actual data sets.
>
> On Tue, Sep 9, 2014 at 6:09 PM, Cody Koeninger wrote:
>
>> Ok, so looking at the optimizer code for the first time and trying the
>> simplest rule that could possibly work,
>>
>
[
https://issues.apache.org/jira/browse/SPARK-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128682#comment-14128682
]
Cody Koeninger commented on SPARK-3462:
---
Tested this on a cluster against union
ut
testing performance against some actual data sets.
On Tue, Sep 9, 2014 at 6:09 PM, Cody Koeninger wrote:
> Ok, so looking at the optimizer code for the first time and trying the
> simplest rule that could possibly work,
>
> object UnionPushdown extends Rule[LogicalPlan] {
> de
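The quoted `UnionPushdown` rule is cut off. As a toy illustration of the idea (a projection above a union is rewritten into a union of projections, so each Parquet scan prunes columns itself instead of the union's output being pruned afterwards), here is a sketch on a hypothetical mini plan algebra rather than Catalyst's actual `Rule[LogicalPlan]`:

```scala
sealed trait Plan
case class Scan(table: String, columns: Seq[String]) extends Plan
case class Project(columns: Seq[String], child: Plan) extends Plan
case class Union(left: Plan, right: Plan) extends Plan

object UnionPushdownSketch {
  // Rewrite Project(Union(l, r)) into Union(Project(l), Project(r)),
  // pushing the column pruning down to each side of the union.
  def apply(plan: Plan): Plan = plan match {
    case Project(cols, Union(l, r)) =>
      Union(Project(cols, apply(l)), Project(cols, apply(r)))
    case other => other
  }
}
```

The same shape of rewrite works for filters, which is what makes predicate pushdown through `unionAll` possible.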
[
https://issues.apache.org/jira/browse/SPARK-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128451#comment-14128451
]
Cody Koeninger commented on SPARK-3462:
---
Created a PR for feedback.
h
ix the case you are mentioning where a union is
>> used directly from within spark. But that's the context.
>>
>> - Patrick
>>
>> On Tue, Sep 9, 2014 at 12:01 PM, Cody Koeninger
>> wrote:
>> > Maybe I'm missing something, I thought parquet wa
this was not run into before. Do people not
>> segregate their data by day/week in the HDFS directory structure?
>>
>>
>> On Tue, Sep 9, 2014 at 2:08 PM, Michael Armbrust
>> wrote:
>>
>>> Thanks!
>>>
>>> On Tue, Sep 9, 2014 at 11:07 AM,
Opened
https://issues.apache.org/jira/browse/SPARK-3462
I'll take a look at ColumnPruning and see what I can do
On Tue, Sep 9, 2014 at 12:46 PM, Michael Armbrust
wrote:
> On Tue, Sep 9, 2014 at 10:17 AM, Cody Koeninger
> wrote:
>>
>> Is there a reason in general not
Cody Koeninger created SPARK-3462:
-
Summary: parquet pushdown for unionAll
Key: SPARK-3462
URL: https://issues.apache.org/jira/browse/SPARK-3462
Project: Spark
Issue Type: Improvement
I've been looking at performance differences between spark sql queries
against single parquet tables, vs a unionAll of two tables. It's a
significant difference, like 5 to 10x
Is there a reason in general not to push projections and predicates down
into the individual ParquetTableScans in a union
I definitely saw a case where
a. the only job running was a 256m shell
b. I started a 2g job
c. a little while later the same user as in a started another 256m shell
My job immediately stopped making progress. Once user a killed his shells,
it started again.
This is on nodes with ~15G of memory
job? I'd
> like to repro it.
>
> Tim
>
> > On Aug 20, 2014, at 12:39 PM, Cody Koeninger wrote:
> >
> > I'm seeing situations where starting e.g. a 4th spark job on Mesos
> results in none of the jobs making progress. This happens even with
> --execu
I'm seeing situations where starting e.g. a 4th spark job on Mesos results
in none of the jobs making progress. This happens even with
--executor-memory set to values that should not come close to exceeding the
availability per node, and even if the 4th job is doing something
completely trivial (e
So in 2.0, the signature of ColumnFamilyInputFormat changed from using
IColumn to Cell:
import org.apache.cassandra.db.Cell;
public class ColumnFamilyInputFormat extends
AbstractColumnFamilyInputFormat>
But Cell isn't included in cassandra-all, even though
ColumnFamilyInputFormat is:
object Ce
Just wanted to check in on this, see if I should file a bug report
regarding the mesos argument propagation.
On Thu, Jul 31, 2014 at 8:35 AM, Cody Koeninger wrote:
> 1. I've tried with and without escaping equals sign, it doesn't affect the
> results.
>
> 2. Yeah, expor
The stmt.isClosed just looks like stupidity on my part, no secret
motivation :) Thanks for noticing it.
As for the leaking in the case of malformed statements, isn't that
addressed by
context.addOnCompleteCallback{ () => closeIfNeeded() }
or am I misunderstanding?
On Tue, Aug 5, 2014 at 3:15
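The `addOnCompleteCallback { () => closeIfNeeded() }` pattern discussed above registers cleanup to run when the task finishes, so a statement gets closed even if its iterator is never fully consumed. A stripped-down sketch of the mechanism, with a callback registry standing in for Spark's `TaskContext` (all names here are illustrative):

```scala
import scala.collection.mutable.ArrayBuffer

// Stand-in for TaskContext: registered callbacks run when the task completes,
// whether the iterator was exhausted or abandoned partway through.
class CompletionCallbacks {
  private val callbacks = ArrayBuffer.empty[() => Unit]
  def addOnCompleteCallback(f: () => Unit): Unit = callbacks += f
  def markTaskCompleted(): Unit = callbacks.foreach(_())
}

// Stand-in for a JDBC statement that must not leak.
class StatementLike {
  var closed = false
  def closeIfNeeded(): Unit = if (!closed) closed = true
}
```

Registering `() => stmt.closeIfNeeded()` once at iterator construction covers both the normal and the malformed-statement paths.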
> for mesos). We should probably document this. In this case you need to
> > either use --driver-java-options or set SPARK_SUBMIT_OPTS.
> >
> > 3. Arguments aren't propagated on Mesos (this might be because of the
> > other issues, or a separate bug).
> >
>
stg,null),
(dn-01.mxstg,null), (dn-01.mxstg,null), (dn-02.mxstg,null),
(dn-02.mxstg,null), ...
Note that this is a mesos deployment, although I wouldn't expect that to
affect the availability of spark.driver.extraJavaOptions in a local spark
shell.
On Wed, Jul 30, 2014 at 4:18 PM, Cody Ko
't one already?
> >
> > For system properties SparkSubmit should be able to read those
> > settings and do the right thing, but that obviously won't work for
> > other JVM options... the current code should work fine in cluster mode
> > though, since the driver is a dif
We were previously using SPARK_JAVA_OPTS to set java system properties via
-D.
This was used for properties that varied on a per-deployment-environment
basis, but needed to be available in the spark shell and workers.
On upgrading to 1.0, we saw that SPARK_JAVA_OPTS had been deprecated, and
repla
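The migration being described looks roughly like this (the property name `com.example.env` is illustrative; the flags and config keys are the 1.0-era replacements mentioned elsewhere in the thread):

```shell
# Pre-1.0: one env var carried -D system properties for driver and
# executors alike (deprecated as of 1.0)
export SPARK_JAVA_OPTS="-Dcom.example.env=staging"

# 1.0+: set the options per role instead, e.g. via spark-submit flags...
spark-submit \
  --driver-java-options "-Dcom.example.env=staging" \
  --conf spark.executor.extraJavaOptions="-Dcom.example.env=staging" \
  ...

# ...or in conf/spark-defaults.conf:
# spark.driver.extraJavaOptions   -Dcom.example.env=staging
# spark.executor.extraJavaOptions -Dcom.example.env=staging
```

As the rest of the thread notes, propagation of these options on Mesos had open issues at the time, so this sketch shows the intended mechanism rather than a guaranteed-working setup.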
We tested that patch from aarondav's branch, and are no longer seeing that
deadlock. Seems to have solved the problem, at least for us.
On Mon, Jul 14, 2014 at 7:22 PM, Patrick Wendell wrote:
> Andrew and Gary,
>
> Would you guys be able to test
> https://github.com/apache/spark/pull/1409/file
:36 PM, Patrick Wendell
wrote:
> Cody - did you mean to send this to the spark dev list?
>
> On Tue, Jul 15, 2014 at 7:15 AM, Cody Koeninger
> wrote:
> > I'm going to be on a plane wed 23, return flight monday 28, so will miss
> > daily call those days. I'll be pu
I'm going to be on a plane wed 23, return flight monday 28, so will miss
daily call those days. I'll be pushing forward on projects as I can, but
skype availability may be limited, so email if you need something from me.
Hi all, just wanted to give a heads up that we're seeing a reproducible
deadlock with spark 1.0.1 with 2.3.0-mr1-cdh5.0.2
If jira is a better place for this, apologies in advance - figured talking
about it on the mailing list was friendlier than randomly (re)opening jira
tickets.
I know Gary had
If you're looking at consolidating build systems, I'd ask to consider ease
of cross-publishing for different Scala versions. My instinct is that sbt
will be less troublesome in that regard (although as I understand it, the
changes to the repl may present a problem).
We're needing to use 2.10 f
On Sep 10, 1:32 am, Mike Meyer wrote:
> I think that Java's strength is enterprise-level, highly scalable web
> servers make people assume that every problem must be a nail for that
> hammer.
I think that Unix's strength is small independent programs
communicating over standard I/O makes you as
On Mar 27, 11:55 pm, Mike Meyer wrote:
> But if
> you're serious about this, you need to talk to a real copyright
> lawyer.
This is the only correct answer to the OP's question.
Don't take legal advice from random people on a newsgroup.
--
You received this message because you are subscribed t
On Mar 23, 10:37 am, Sean Devlin wrote:
> Hey folks,
> I'm looking to add to my bookshelf. I was wondering what this groups
> experience with the Schemer series of books is?
>
> Sean
Little, seasoned, + the little MLer are awesome, only thing that comes
close in terms of pedagogical quality i
On Mar 6, 10:57 am, Marius wrote:
> Actually thinking more into it there is a good reason for -%> to not
> have a (NodeSeq) => NodeSeq support. -%> means that it preserves the
> attributes specified in the template to the resulting node.But having
> a bunch of attributes we can't apply them to a
On Mar 6, 7:28 pm, David Pollak wrote:
> Another failing of Rails is the community. The Rails community is a
> significant detractor to adoption outside of the young hip kids.
The rails community is a significant detractor to adoption even among
young hip kids. . . I hope I'm not the only one di
On Mar 6, 2:20 pm, Timothy Perrett wrote:
> > > Okay... sorry... but this is a gratuitous swipe. Ugly == Not Easy to Use.
> > > Nope. Sorry. I don't buy this.
>
> > Maven commands that wont copy and paste correctly == Not Easy To Use.
>
> Im not sure it is difficult to copy and paste:
> mvn a
On Mar 6, 11:35 am, David Pollak
wrote:
> Please allow me to rebut your thoughtful post.
I'm really glad to see David taking a more reasoned response to this
criticism compared to the early responders . . .
> Okay... sorry... but this is a gratuitous swipe. Ugly == Not Easy to Use.
> Nope. S
On Feb 10, 12:11 pm, Hugo Palma wrote:
> I'm not sure i understand your solution, so your build process find an
> index.html and replaces all the text there to all the languages and creates
> the appropriate index_.html file ?
Yes
May seem like a hack, but on the other hand I honestly don't
un
On Feb 8, 5:07 pm, Timothy Perrett wrote:
> Generally I find that to be only of use when needed specific adjustments to
> templates. For instance, english vs german... the german language is
> significantly more verbose so requires different div heights etc sometimes.
> Its not generally a stra
On Feb 2, 1:19 pm, Dave Angulo wrote:
> There is lots of work to get done and we're planning a kickoff event
> in Boulder, Feb 19-21, to get some
> momentumhttp://www.snapimpact.org/blog/?p=468.
> Outside of that, we'd love to figure out how to best leverage any
> interest from this community t
On Jan 21, 6:52 pm, Timothy Perrett wrote:
> The site is not perfect, we know that... we are trying to work on it but
> progress is slow for a variety of reasons.
If "lack of people to work on it" is one of those reasons, I'm willing
to help.
FWIW, I had no problem finding the getting starte
On Jan 20, 11:03 am, Stefan Koenig wrote:
> So basically my questions: Did I do something wrong?
Complete requests: 1
Failed requests: 9866
(Connect: 0, Receive: 0, Length: 9866, Exceptions: 0)
Unless you're intentionally returning a page with dynamic length,
which from a c
On Jan 2, 11:50 am, Mike Meyer wrote:
> There are definitely some good ideas there - and I agree with most of
> the goals. But Lift, like most other page-centric web frameworks,
> seems to break one of the fundamental rules of good API design: Simple
> things should be simple.
Having implemented
On Dec 30 2009, 3:16 pm, David Pollak
wrote:
> > Dreamweaver doesn't deal with head merge, it will automatically move
> > the head tags outside of the lift:surround. Not really a lift issue.
>
> Is there a way we could do something to make Lift stuff more DW friendly?
Not that I can see - shor
We recently implemented a small, mostly static content site (http://
www.goldenfrog.com/) using Lift. Thought I'd offer up a postmortem on
what worked well and what didn't, in case it's useful to anyone
considering Lift. We're a small team with essentially no prior Scala
experience, so take this
On Dec 8, 2:53 pm, Jeppe Nejsum Madsen wrote:
> Maybe I don't really understand what you mean when you say data
> consistency. A pessimistic lock is only useful within a single database
> transaction. In my experience, in a web app, a user transaction (such as
> loading a record, changing data,
On Dec 8, 2:19 am, Jeppe Nejsum Madsen wrote:
> record will not be unlocked until the session times out
I thought it was stated above that the transaction is scoped to the
request by default, not the session?
> A much better solution imo is to use optimistic locking.
I'm not going to di
On Dec 7, 1:54 pm, David Pollak wrote:
> Feel free to open a ticket. We prioritize work for production sites (or
> sites that are destined for production). If you meet that criteria, please
> add this to the ticket so we can decide on what the priority is.
Thanks, that's totally fair. If we
On Dec 6, 9:16 pm, Alex Boisvert wrote:
> Lift's mapper doesn't change the default isolation level of your
> connections, nor does it make explicit use of pessimistic concurrency
> control.
> Anything beyond that we can probably implement, we just need a good
> reason...
>
> alex
Isn't the pos
Do mapper or record provide any assistance for avoiding race
conditions caused by the database transaction isolation level? I
didn't notice anything in my initial skim of the lift book, and
grepping the code for obvious suspects like "for update" didn't return
anything.
If not, what are people wi
http://clojure.org/lisps
"All (global) Vars can be dynamically rebound without interfering with
lexical local bindings. No special declarations are necessary to
distinguish between dynamic and lexical bindings."
Other part of that explanation is whether x in a given piece of code
refers to a lexi
On Nov 18, 11:32 am, harryh wrote:
> - Don't use Lift with MySQL, they don't play nicely. Use PostgreSQL
>
Can you elaborate?
On Nov 13, 9:42 am, Sean Devlin wrote:
> In this case, you provide the docs for each method after parameters.
> Would the following be possible:
>
> (defprotocol AProtocol :on AnInterface
> "A doc string for AProtocol abstraction"
> (bar "bar docs" [a b] :on barMethod)
> (baz "baz docs" ([a
On Oct 31, 11:42 am, Richard Newman wrote:
> VimClojure relies on Nailgun, with a bunch of people on this list
> using it with Clojure every day.
My recollection from list and IRC was that (aside from random nailgun
issues + the project not being updated in 4 years) there was an issue
with d
On Oct 31, 5:22 am, alxtoth wrote:
> Why not use the OS task scheduler? On un*x there is good old cron or
> at. On windoze there is similar task scheduler.
>
Overhead from starting and stopping the JVM every couple of minutes
would probably be unacceptable. My understanding is that solutions
On Oct 4, 1:31 am, Meikel Brandmeyer wrote:
> Here we have the smell! You cannot define functions with a function.
> You have to use a macro!
I am not clear on what you mean by this. From a user's point of view,
what is the difference between defining a function, and interning a
var with a fn
On Sep 11, 10:56 am, Michael Teter wrote:
> What I would like to find now is some kind of guide or document to
> help me learn to design the functional way, instead of just writing
> Java in Clojure.
http://htdp.org/
On Aug 28, 12:16 am, ngocdaothanh wrote:
> Hi all,
>
> Is there an i18n library for Clojure? What Java i18n library should I
> use in a Clojure program (it suits Clojure syntax for example)? For
> Ruby and Erlang I prefer Gettext, but for Java it seems
> that .properties files are in major use.
On Aug 26, 5:29 am, Christian Vest Hansen
wrote:
> Another Scala downer: "Scala is very powerful, some developers might
> shoot themselves into the foot" - I don't see how this applies more to
> Scala than Clojure. If we want to talk about foot-shooting, we could
> talk about macros. There are
Assuming people aren't patching clojure ala dave griffith's external
transactions patch in the group files, what are people doing in
practice to durably store the state of refs?
Storing within a transaction and somehow ensuring your store operation
is idempotent (not to mention reversible)?
Sendi