Hi,

Try to use DataStreamer for fast cache loading [1]. If you need to use SQL, you can try bulk updates via the JDBC driver's streaming mode [2].
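For illustration, here is a minimal DataStreamer sketch based on the node, cache name and key format from the code in your message below (the sizes and names are placeholders to adapt):

    // Assumes an already started Ignite node ("igniteNode") and the existing
    // "igniteCache" cache from your example.
    try (IgniteDataStreamer<String, String> streamer = igniteNode.dataStreamer("igniteCache")) {
        streamer.allowOverwrite(true); // let repeated keys overwrite existing entries
        for (int i = 0; i < 100_000; i++)
            streamer.addData("key " + i, Integer.toString(i));
    } // close() flushes any remaining buffered entries to the cluster

And a rough sketch of the JDBC streaming mode described in [2]; the connection URL, the client config path and the cache name are assumptions you would need to adapt to your setup:

    // Requires java.sql.Connection, DriverManager and PreparedStatement.
    // streaming=true in the URL makes the JDBC client driver buffer and stream updates.
    // "/path/to/ignite-jdbc.xml" is a placeholder for your client node config.
    Class.forName("org.apache.ignite.IgniteJdbcDriver");
    try (Connection conn = DriverManager.getConnection(
             "jdbc:ignite:cfg://cache=personCache:streaming=true@file:///path/to/ignite-jdbc.xml");
         PreparedStatement ps = conn.prepareStatement(
             "INSERT INTO Person(_key, firstName, lastName) VALUES(?, ?, ?)")) {
        for (int i = 0; i < 100_000; i++) {
            ps.setString(1, Integer.toString(i));
            ps.setString(2, "key " + i);
            ps.setString(3, Integer.toString(i));
            ps.executeUpdate();
        }
    } // streamed inserts are flushed in the background and become visible shortly after close()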
Also, a COPY SQL command [3] will be available in the upcoming 2.5 release. The feature is already in master, so you can try a build from it. See the example [4].

[1] https://apacheignite.readme.io/docs/data-streamers
[2] https://apacheignite.readme.io/v2.0/docs/jdbc-driver#section-streaming-mode
[3] https://issues.apache.org/jira/browse/IGNITE-6917
[4] https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/sql/SqlJdbcCopyExample.java

On Thu, Mar 29, 2018 at 11:30 AM, <[email protected]> wrote:

> Dear all,
>
> I am trying to use the SqlFieldsQuery API to insert data into one cache on Ignite.
>
> I can insert one row into the cache at a time. However, I have no idea how to insert multiple rows into the cache at once. For example, I would like to insert 1000 rows in a single operation.
>
> Here is my code so everyone can reproduce my situation:
>
> public class IgniteCreateServer {
>
>     public static class Person {
>         @QuerySqlField
>         private String firstName;
>
>         @QuerySqlField
>         private String lastName;
>
>         public Person(String firstName, String lastName) {
>             this.firstName = firstName;
>             this.lastName = lastName;
>         }
>     }
>
>     public static void main(String[] args) {
>         IgniteConfiguration cfg = new IgniteConfiguration();
>
>         CacheConfiguration<String, String> cacheConf = new CacheConfiguration<>();
>         cacheConf.setName("igniteCache");
>         cacheConf.setIndexedTypes(String.class, String.class);
>         cacheConf.setCacheMode(CacheMode.REPLICATED);
>         cacheConf.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>         cfg.setCacheConfiguration(cacheConf);
>
>         Ignite igniteNode = Ignition.getOrStart(cfg);
>         IgniteCache<String, String> cacheKeyvalue = igniteNode.getOrCreateCache(cacheConf);
>
>         long starttime, endtime;
>         starttime = System.currentTimeMillis();
>         int datasize = 100000;
>         for (int i = 0; i < datasize; i++) {
>             cacheKeyvalue.put("key " + Integer.toString(i), Integer.toString(i));
>         }
>         endtime = System.currentTimeMillis();
>         System.out.println("write " + datasize + " pairkeyvalue data: spend "
>             + (endtime - starttime) + " milliseconds");
>
>         // ==========================================================
>
>         CacheConfiguration<String, Person> cacheCfg = new CacheConfiguration<>();
>         cacheCfg.setName("personCache");
>         cacheCfg.setIndexedTypes(String.class, Person.class);
>         cacheCfg.setCacheMode(CacheMode.REPLICATED);
>         cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>         IgniteCache<String, Person> cacheKeyTable = igniteNode.getOrCreateCache(cacheCfg);
>
>         long starttime1, endtime1;
>         starttime1 = System.currentTimeMillis();
>         for (int i = 0; i < datasize; i++) {
>             cacheKeyTable.query(new SqlFieldsQuery(
>                 "INSERT INTO Person(_key, firstName, lastName) VALUES(?, ?, ?)")
>                 .setArgs(i, "key " + Integer.toString(i), Integer.toString(i)));
>         }
>         endtime1 = System.currentTimeMillis();
>         System.out.println("write " + datasize + " table data: spend "
>             + (endtime1 - starttime1) + " milliseconds");
>     }
> }
>
> The output of my code is:
>
> "write 100000 pairkeyvalue data: spend 4734 milliseconds
> write 100000 table data: spend 2846 milliseconds"
>
> From the above result, it seems that using SQL to insert data into the cache is faster than using cache.put(). I am not sure whether that is correct.
>
> In addition, it is important for me to insert data into the cache via SQL, so I would like to insert multiple rows at once to accelerate it.
>
> If any further information is needed, I am glad to be informed and will provide it to you as soon as possible.
>
> Thanks,
>
> Rick

--
Best regards,
Andrey V. Mashenkov
