Thanks Ilya,
Could you share any guidelines for controlling GROUP BY? For example, dedicated client nodes
for connectivity from Tableau and SQL?
Thanks
Bhaskar
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Thanks Alex,
We have a large pool of developers who use TOAD, so I thought of making TOAD
connect to Ignite to give them a similar experience. We are using DBeaver right
now.
Thanks
Hello Ilya Kasnacheev,
I set IGNITE_SQL_FORCE_LAZY_RESULT_SET=true just below
ENABLE_ASSERTIONS="0" in ignite.sh, but I still get an out-of-memory
error when I run SELECT * FROM a table. Is this the right place to set this
parameter? Please confirm.
Thanks
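For reference, IGNITE_SQL_FORCE_LAZY_RESULT_SET is read as an environment variable or as a JVM system property, so one way to set it is near the top of bin/ignite.sh, before the JVM is launched. A minimal sketch (the placement and the JVM_OPTS variant are assumptions; check them against your ignite.sh version):

```shell
# Option 1 (assumed): export as an environment variable before ignite.sh starts the JVM
export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true

# Option 2 (assumed equivalent): pass it as a JVM system property instead
JVM_OPTS="${JVM_OPTS} -DIGNITE_SQL_FORCE_LAZY_RESULT_SET=true"
```

Note that lazy result sets reduce heap pressure on the server side for large SELECTs, but a huge unbounded result can still overwhelm the client.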
Great,
It works perfectly, thank you.
Bhaskar
Hello Ignite Team,
Is it possible to connect to Ignite from the TOAD tool for SQL querying?
Thanks
Hello Ignite team,
I am writing data from a Spark DataFrame to Ignite, and frequently one node goes
down. I don't see any error in the log file; below is the trace. If I restart
the node, it doesn't join the cluster unless I stop the Spark job that is
writing data to the Ignite cluster.
I have 4 nodes with 4 CPU / 16 GB RAM each.
Hello Ignite team,
We are using Apache Ignite as a SQL reporting cluster, with Ignite persistence
and authenticationEnabled. We need a read-only user role apart from the ignite
user; is there any role, or a way to create a user with read-only privileges?
Thanks
Hello Ilya Kasnacheev,
We are using Ignite 2.6.
SQL through Tableau over an ODBC connection gets an OOME on SELECT * FROM a
table without a LIMIT.
I have set export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true in ignite.sh.
What else should I configure to avoid OOME when using ODBC?
thanks
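As a hedged aside: the Ignite ODBC driver of that era also exposed a per-connection lazy option in the connection string (the option name, value syntax, and its availability in 2.6 are assumptions; verify against your driver's documentation). Something like:

```
DRIVER={Apache Ignite};ADDRESS=127.0.0.1:10800;SCHEMA=PUBLIC;LAZY=true;PAGE_SIZE=1024
```

Setting it on the client connection avoids relying solely on the server-side force flag.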
Thanks Denis,
When I submit a Spark job that connects to the Ignite cluster, it creates an
Ignite client. The Ignite client gets disconnected when I close the window
(Linux shell).
Regular Spark jobs run fine with & or nohup, but in the Spark/Ignite case the
clients are getting killed along with the Spark job.
Attaching logs of the two nodes that crash every time. I have 4 nodes, but the
other two nodes very rarely crash. All nodes (VMs) are 4 CPU / 16 GB RAM / 200 GB
HDD (shared storage).
node 3:
[16:35:21,938][INFO][main][IgniteKernal]
(Ignite ASCII startup banner, truncated)
Hello Ignite Team,
I have a Spark job that streams live data into an Ignite cache. The job gets
closed as soon as I close the window (Linux shell). The other Spark streaming
jobs I run with "&" at the end of the spark-submit command, and they run for a
very long time until I stop them or they crash due to other
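A common way to keep such a job alive after the shell closes is to fully detach it from the terminal so the embedded Ignite client does not receive SIGHUP when the shell exits. A minimal sketch (the class and jar names are placeholders, not from the original job):

```shell
# Detach the job from the terminal so closing the shell does not SIGHUP it.
# Class and jar names below are hypothetical placeholders.
nohup spark-submit --class com.example.StreamToIgnite my-streaming-app.jar \
  > stream.log 2>&1 &
disown   # also drop it from the shell's job table
```

If `&` alone was enough for the other jobs, the difference may be that the Ignite client reacts to the hangup signal, which `nohup` plus `disown` suppresses.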
Hi Ignite Team,
I have installed Ignite as a service using RPM, and it's running fine, but how
can I use ignitevisorcmd.sh to check the topology, etc.?
Thanks
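If Ignite was installed from the RPM, ignitevisorcmd.sh normally ships alongside the other scripts; the exact paths below are assumptions for a default RPM layout, so adjust them to your install. From Visor, `open` connects to the cluster and `top` prints the topology:

```shell
# Path assumed for the RPM install; adjust to your layout
/usr/share/apache-ignite/bin/ignitevisorcmd.sh
# Then, at the visor> prompt:
#   open -cpath=/etc/apache-ignite/default-config.xml   (connect using your config)
#   top                                                 (show cluster topology)
```

Visor must be able to reach the cluster with the same discovery configuration as the server nodes, hence pointing it at the same config file.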
Hi Team,
We are using persistent storage. Could you please answer the following:
1. What is the data format (binary?)?
2. Is it compressed on disk and in memory?
3. Is the data format in memory the same as on disk?
Thanks
Thanks Andrei,
I created the user, but I can't alter the user except to change the password.
The user is able to delete rows or truncate tables, which I don't want any user
other than ignite to be able to do.
Thanks
Hi Slava,
Sorry to jump into this thread; I have a similar problem controlling
long-running SQL queries. I want to time out SQL queries running for more than
500 ms.
Is there any way to set setLongQueryWarningTimeout() in the config file?
Appreciate your response.
Thanks
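For what it's worth, a Spring XML sketch of the long-query warning timeout (in some Ignite versions this property lives on IgniteConfiguration, in older ones on CacheConfiguration, so treat the placement as an assumption and check your version's javadoc). Note that this setting only logs a warning; to actually cancel a query after 500 ms you would set a timeout on the query itself, e.g. SqlFieldsQuery.setTimeout, which is a per-query API rather than a config-file option:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Warn about queries running longer than 500 ms (property placement assumed) -->
    <property name="longQueryWarningTimeout" value="500"/>
</bean>
```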
Hi Ilya,
I am using Ignite 2.5. The message was pasted from the "systemctl status
apache-ign...@default-config.xml" command. I did not run any other command.
full message:
]# systemctl status apache-ign...@default-config.xml
● apache-ign...@default-config.xml.service - Apache Ignite In-Memory
Computing
Hi,
/etc/systemd/system/apache-ignite@.service:
[Unit]
Description=Apache Ignite In-Memory Computing Platform Service
After=syslog.target network.target
[Service]
Type=forking
User=ignite
WorkingDirectory=/usr/share/apache-ignite/work
PermissionsStartOnly=true
ExecStartPre=-/usr/bin/mkdir
The service is running, but I can't access it; the full message is below.
[]# systemctl status apache-ign...@default-config.xml
● apache-ign...@default-config.xml.service - Apache Ignite In-Memory
Computing Platform Service
Loaded: loaded (/etc/systemd/system/apache-ignite@.service; enabled;
vendor preset:
Thanks Ilya,
Appreciate your help.
Is there any parameter in the config file to control the number of rows or the
amount of resources a client connection can use, and to disconnect it if it
exceeds them?
thanks
Bhaskar
Evgenii,
We use Ignite as an in-memory database for Tableau and SQL; we don't use Java.
We use Spark to load data into Ignite by streaming real-time data.
So if any user runs SELECT * FROM a table, the server nodes go OOME. We need to
control that behaviour; is there any way?
Thanks
Hi Ignite Team,
I am trying to set setLazy on SqlFieldsQuery to avoid OOME on server nodes. My
config file has the setting below,
but I am getting the error below
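As an aside, if the clients connect over the thin JDBC driver, lazy execution can also be requested per connection rather than through the server config; the parameter name and separator below are assumptions based on the 2.x thin driver and should be verified against its documentation:

```
jdbc:ignite:thin://127.0.0.1:10800;lazy=true
```

This keeps the server from materializing the whole result set for that connection, independent of what the SQL tool's user configures in a query object.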
Evgenii,
What happens if the user doesn't set that limit, or forgets to set it in the
client tool?
We set it, but someone is testing without lazy=true to prove that Apache
Ignite is not stable.
Thanks
Hi,
Is it possible to use Spark Structured Streaming with Ignite? I am getting a
"Data source ignite does not support streamed writing" error.
Log trace:
Exception in thread "main" java.lang.UnsupportedOperationException: Data
source ignite does not support streamed writing
at
Hi,
I am testing a large Ignite cache of 900 GB on a 4-node VM (96 GB RAM, 8 CPU and
500 GB SAN storage) Spark/Ignite cluster. It has happened two times: after
reaching 350 GB plus, one or two nodes stop processing the data load and the
data load stops. Please advise; the cluster, server and client logs
Hi Team,
We have a 6-node Ignite cluster with 72 CPU, 256 GB RAM and 5 TB storage. Data
is ingested using Spark Streaming into the Ignite cluster for SQL and Tableau
usage.
I have a couple of large tables, one with 200 million rows (200 GB) and one with
800 million rows (500 GB).
The insertion is taking more than 40 seconds
Thanks Andrei,
Hello Team,
Any update on the Spark Structured Streaming support in Ignite?
https://issues.apache.org/jira/browse/IGNITE-9357
Thanks
Hi Team,
Is Apache Ignite's atomicityMode "ATOMIC" by default, or do we need to include
it explicitly in default-config.xml? Please advise.
Thanks
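For reference, ATOMIC is the default atomicityMode, so nothing is required in default-config.xml unless you want TRANSACTIONAL. A sketch of setting it explicitly anyway (the cache name is a placeholder, not from the original config):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <!-- "myCache" is a placeholder name -->
            <property name="name" value="myCache"/>
            <!-- ATOMIC is already the default; shown only for explicitness -->
            <property name="atomicityMode" value="ATOMIC"/>
        </bean>
    </property>
</bean>
```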
Thanks Stan,
We are planning to move on to 2.7.
Thanks
Hi,
I am trying to save a Spark DataFrame to Ignite and am getting an "Unsupported
data type ArrayType(StringType,true)" error. The same code was working fine
before.
This is the code:
val qErrJson = spark.read.json(
  qErrErr.select("err")
    .filter(_.getStringOption("err").isDefined)
    .map(row => row.getString(0)))
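One workaround for unsupported array columns (a sketch, not confirmed against the Ignite Spark integration) is to serialize the array column to a JSON string before writing, since StringType is supported. The DataFrame and column names below are hypothetical:

```scala
import org.apache.spark.sql.functions.{col, to_json}

// Hypothetical: "tags" is an ArrayType(StringType) column the writer rejects;
// convert it to a JSON string column before writing the DataFrame to Ignite.
val writable = df.withColumn("tags", to_json(col("tags")))
```

The array can then be parsed back from JSON on read if the clients need it structured.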
Hi Team,
How is the data stored when persistence is enabled?
Does each Ignite node with persistence enabled store all of the data, or just
the data that belongs to that node?
Ex: I have a 4-node Ignite cluster and persistence is enabled on all 4 nodes
(activated after the 4 nodes come up). When the data is
Hi Ilya,
I am able to start the cluster and run SQL queries, but not able to write; this
error is thrown while loading data. Please try to write some data into any
dummy table with a couple of fields. I am using an affinity key and backups=1.
Thanks
Hi Team,
We have a 6-node Ignite cluster and load data with Spark. Recently we added a
"cacheConfiguration", and we get the error below when we try to recreate the
"cache" using the Spark data load.
Any hint would help, please?
The error:
Caused by: org.springframework.beans.factory.BeanCreationException: Error