Re: Issues about removed topics with KafkaSource

2023-11-02 Thread Hector Rios
Hi Emily

One workaround that might help is to leverage the state-processor-api[1].
You would have to do some upfront work to create a state-processor job to
wipe the state (offsets) of the topic you want to remove and use the newly
generated savepoint without the removed state of the topic or topics. It
could even be parameterized to be more generic and thus be reusable across
multiple jobs.

[1]
https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/libs/state_processor_api/#state-processor-api

Hope that helps
-Hector


On Thu, Nov 2, 2023 at 7:25 AM Emily Li via user 
wrote:

> Hey Martijn
>
> Thanks for the clarification. Now it makes sense.
>
> I saw that FLIP-246 is still a WIP with no release date yet, and it
> actually contains quite a few changes. We noticed there's
> a WIP PR for this change; just wondering if there's any plan for releasing
> this feature?
>
> For our current situation, we are subscribing to hundreds of topics, and
> we add/remove topics quite often (every few days). Adding topics
> seems to be okay at the moment, but with the current KafkaSource design,
> removing a topic means we need to change the kafka source id and restart
> with non-restored state. I assume this means we will lose the state of the
> other topics as well, and because we need to do this quite often, it seems
> quite inconvenient to keep restarting the application with non-restored state.
>
> We are thinking of introducing a temporary workaround while waiting for
> this dynamic adding/removing topics feature (probably by forking
> flink-connector-kafka and adding some custom logic there). Just wondering if
> there's any direction you can point us in if we are to do the workaround, or
> is there any pre-existing work that we could potentially re-use?
>
> On Thu, Nov 2, 2023 at 3:30 AM Martijn Visser 
> wrote:
>
>> Hi,
>>
>> That's by design: you can't dynamically add and remove topics from an
>> existing Flink job that is being restarted from a snapshot. The
>> feature you're looking for is being planned as part of FLIP-246 [1]
>>
>> Best regards,
>>
>> Martijn
>>
>> [1]
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=217389320
>>
>>
>> On Wed, Nov 1, 2023 at 7:29 AM Emily Li via user 
>> wrote:
>> >
>> > Hey
>> >
>> > We have a Flink app which is subscribing to multiple topics. We recently
>> upgraded our application from 1.13 to 1.15, where we started to use
>> KafkaSource instead of the deprecated FlinkKafkaConsumer.
>> >
>> > But we noticed a weird issue with KafkaSource after the upgrade. We
>> are setting the topics with the KafkaSource builder like this:
>> >
>> > ```
>> > KafkaSource
>> >   .builder[CustomEvent]
>> >   .setBootstrapServers(p.bootstrapServers)
>> >   .setGroupId(consumerGroupName)
>> >   .setDeserializer(deserializer)
>> >   .setTopics(topics)
>> > ```
>> >
>> > And we pass in a list of topics to subscribe to, but from time to time we
>> will add some new topics or remove some topics (stop consuming them). We
>> noticed that ever since we upgraded to 1.15, when we remove a topic from
>> the list, it is somehow still being consumed (offsets are committed to the
>> already unsubscribed topics, and we also have logs and metrics showing
>> that we are still consuming the removed topic), and from the
>> aws.kafka.sum_offset_lag metric we can also see the removed topic having
>> negative lag...
>> >
>> >
>> > And if we delete the topic in Kafka, the running Flink application will
>> crash and throw an error saying the partition cannot be found (because the
>> topic is already deleted from Kafka).
>> >
>> >
>> > We'd like to understand what could have caused this and if this is a
>> bug in KafkaSource?
>> >
>> >
>> > When we were in 1.13, this never occurred, we were able to remove
>> topics without any issues.
>> >
>> >
>> > We also tried to upgrade to flink 1.17, but the same issue occurred.
>>
>


[jira] [Updated] (FLINK-33010) NPE when using GREATEST() in Flink SQL

2023-08-31 Thread Hector Rios (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-33010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hector Rios updated FLINK-33010:

Description: 
Hi,

I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
and timestamps. Below is an example to help in reproducing the issue.
{code:java}
CREATE TEMPORARY VIEW Positions AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(1, 'USD', '2022-01-01'),
(2, 'GBP', '2022-02-02'),
(3, 'GBX', '2022-03-03'),
(4, 'GBX', '2022-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

CREATE TEMPORARY VIEW Benchmarks AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(3, 'USD', '2023-01-01'),
(4, 'GBP', '2023-02-02'),
(5, 'GBX', '2023-03-03'),
(6, 'GBX', '2023-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

SELECT *,
GREATEST(
IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
)
FROM Positions
FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
 

Using "IF" is a workaround at the moment instead of using "GREATEST"

  

  was:
Hi,

I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
and timestamps. Below is an example to help in reproducing the issue.
{code:java}
CREATE TEMPORARY VIEW Positions AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestampFROM (VALUES
(1, 'USD', '2022-01-01'),
(2, 'GBP', '2022-02-02'),
(3, 'GBX', '2022-03-03'),
(4, 'GBX', '2022-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

CREATE TEMPORARY VIEW Benchmarks AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestampFROM (VALUES
(3, 'USD', '2023-01-01'),
(4, 'GBP', '2023-02-02'),
(5, 'GBX', '2023-03-03'),
(6, 'GBX', '2023-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

SELECT *,
GREATEST(
IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
)
FROM Positions
FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
 

Using "IF" is a workaround at the moment instead of using "GREATEST"

  


> NPE when using GREATEST() in Flink SQL
> --
>
> Key: FLINK-33010
> URL: https://issues.apache.org/jira/browse/FLINK-33010
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.16.1, 1.16.2
>Reporter: Hector Rios
>Priority: Minor
>
> Hi,
> I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
> and timestamps. Below is an example to help in reproducing the issue.
> {code:java}
> CREATE TEMPORARY VIEW Positions AS
> SELECT
> SecurityId,
> ccy1,
> CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
> FROM (VALUES
> (1, 'USD', '2022-01-01'),
> (2, 'GBP', '2022-02-02'),
> (3, 'GBX', '2022-03-03'),
> (4, 'GBX', '2022-04-4'))
> AS ccy(SecurityId, ccy1, publishTimestamp);
> CREATE TEMPORARY VIEW Benchmarks AS
> SELECT
> SecurityId,
> ccy1,
> CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
> FROM (VALUES
> (3, 'USD', '2023-01-01'),
> (4, 'GBP', '2023-02-02'),
> (5, 'GBX', '2023-03-03'),
> (6, 'GBX', '2023-04-4'))
> AS ccy(SecurityId, ccy1, publishTimestamp);
> SELECT *,
> GREATEST(
> IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
> IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
> )
> FROM Positions
> FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
>  
> Using "IF" is a workaround at the moment instead of using "GREATEST"
>   



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (FLINK-33010) NPE when using GREATEST() in Flink SQL

2023-08-31 Thread Hector Rios (Jira)
Hector Rios created FLINK-33010:
---

 Summary: NPE when using GREATEST() in Flink SQL
 Key: FLINK-33010
 URL: https://issues.apache.org/jira/browse/FLINK-33010
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Table SQL / Planner
Affects Versions: 1.16.2, 1.16.1
Reporter: Hector Rios


Hi,

I see NPEs in flink 1.14 and flink 1.16 when running queries with GREATEST() 
and timestamps. Below is an example to help in reproducing the issue.
{code:java}
CREATE TEMPORARY VIEW Positions AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(1, 'USD', '2022-01-01'),
(2, 'GBP', '2022-02-02'),
(3, 'GBX', '2022-03-03'),
(4, 'GBX', '2022-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

CREATE TEMPORARY VIEW Benchmarks AS
SELECT
SecurityId,
ccy1,
CAST(publishTimestamp AS TIMESTAMP(3)) as publishTimestamp
FROM (VALUES
(3, 'USD', '2023-01-01'),
(4, 'GBP', '2023-02-02'),
(5, 'GBX', '2023-03-03'),
(6, 'GBX', '2023-04-4'))
AS ccy(SecurityId, ccy1, publishTimestamp);

SELECT *,
GREATEST(
IFNULL(Positions.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3))),
IFNULL(Benchmarks.publishTimestamp,CAST('1970-1-1' AS TIMESTAMP(3)))
)
FROM Positions
FULL JOIN Benchmarks ON Positions.SecurityId = Benchmarks.SecurityId {code}
 

Using "IF" is a workaround at the moment instead of using "GREATEST"

  






Re: [Issue] Repeatedly receiving same message from Kafka

2023-08-15 Thread Hector Rios
Hi there

It would be helpful if you could include the code for your pipeline. One
suggestion: can you disable the "EXACTLY_ONCE" semantic on the producer?
Using EXACTLY_ONCE will leverage Kafka transactions and thus add overhead.
I would disable it to see if you still get the same behavior.

Also, can you look in the Flink UI for this job and see if checkpoints are
in fact being taken?

Hope that helps
-Hector

On Tue, Aug 15, 2023 at 11:36 AM Dennis Jung  wrote:

> Sorry, I've forgot putting title, so sending again.
>
> On Tue, Aug 15, 2023 at 6:27 PM, Dennis Jung wrote:
>
>> (this is issue from Flink 1.14)
>>
>> Hello,
>>
>> I've set up following logic to consume messages from kafka, and produce
>> them to another kafka broker. For producer, I've configured
>> `Semantics.EXACTLY_ONCE` to send messages exactly once. (also setup
>> 'StreamExecutionEnvironment::enableCheckpointing' as
>> 'CheckpointingMode.EXACTLY_ONCE')
>>
>>
>> 
>> kafka A -> FlinkKafkaConsumer -> ... -> FlinkKafkaProducer -> kafka B
>>
>> 
>>
>> But though I've just produced only 1 message to 'kafka A', consumer
>> consumes the same message repeatedly.
>>
>> When I remove `FlinkKafkaProducer` part and make it 'read only', it does
>> not happen.
>>
>> Could someone suggest a way to debug or fix this?
>>
>> Thank you.
>>
>


[jira] [Commented] (FLINK-25920) Allow receiving updates of CommittableSummary

2023-06-05 Thread Hector Rios (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-25920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17729290#comment-17729290
 ] 

Hector Rios commented on FLINK-25920:
-

Hello all.

To add more context to this issue, I was working with a customer who was 
experiencing this error. I found the content in the following issue really 
helpful.

https://issues.apache.org/jira/browse/FLINK-30238

 

In the specific case of this customer, the issue was being caused by including 
--drain on their call to stop-with-savepoint. I was able to recreate the issue 
using a very simple job reading from a Kafka source and sinking back to Kafka. 
Unfortunately, it was not consistent across versions. I was able to reproduce 
it on 1.15.3 but not on 1.15.4. Granted, it was a quick test, and I wanted to 
do a more thorough test to reproduce the issue consistently.

One interesting wrinkle on this one is that it occurs in 1.15.x, but the same 
job deployed into 1.14.x does not produce the issue.

Thanks.

 

 

> Allow receiving updates of CommittableSummary
> -
>
> Key: FLINK-25920
> URL: https://issues.apache.org/jira/browse/FLINK-25920
> Project: Flink
>  Issue Type: Sub-task
>  Components: API / DataStream, Connectors / Common
>Affects Versions: 1.15.0, 1.16.0
>Reporter: Fabian Paul
>Priority: Major
>
> In the case of unaligned checkpoints, it might happen that the checkpoint 
> barrier overtakes the records and an empty committable summary is emitted 
> that needs to be correct at a later point when the records arrive.





Re: Non-Determinism in Table-API with Kafka and Event Time

2023-02-13 Thread Hector Rios
Hi Theo

In your initial email, you mentioned that you have "a bit of Data on it"
when referring to your topic with ten partitions. Correct me if I'm wrong,
but that sounds like the data in your topic is bounded and you are trying to
test a streaming use-case. What kind of parallelism do you have configured for
this job? Is there a configuration to set the number of slots per task
manager?

I've seen varying results based on the amount of parallelism configured on
a job. In the end, it usually boils down to the fact that events might be
ingested into Flink out of order. If the event time on an event is earlier
than the current watermark, then the event might be discarded unless you've
configured some level of out-of-orderedness. Even with out-of-orderedness
configured, if your data is bounded, you might have events with later event
times arriving earlier, which will remain in the state waiting for the
watermark to progress. As you can imagine, if there are no more events,
then your records are on hold.

As a test, after all your events have been ingested from the topic, try
producing one last event with an event time one or two hours later than your
latest event and see if the pending results show up.
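For reference, a minimal sketch of how that out-of-orderedness bound is declared in Table API DDL (illustrative only: the table name, schema, and connector options below are assumptions based on the query in this thread):

```sql
CREATE TABLE data_v1 (
  a STRING,
  b STRING,
  c STRING,
  timeStampData TIMESTAMP(3),
  -- tolerate events arriving up to 2 hours late before the watermark passes them
  WATERMARK FOR timeStampData AS timeStampData - INTERVAL '2' HOUR
) WITH (
  'connector' = 'kafka'
  -- topic, bootstrap servers, and format options omitted
);
```

A larger bound delays results but drops fewer late events; the probe event suggested above simply forces the watermark past the open windows.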

Hope it helps
-Hector

On Mon, Feb 13, 2023 at 8:47 AM Theodor Wübker 
wrote:

> Hey,
>
> so one more thing, the query looks like this:
>
> SELECT window_start, window_end, a, b, c, count(*) as x
> FROM TABLE(TUMBLE(TABLE data.v1, DESCRIPTOR(timeStampData), INTERVAL '1' HOUR))
> GROUP BY window_start, window_end, a, b, c
>
> When the non-determinism occurs, the topic is not keyed at all. When I key
> it by the attribute “a”, I get the incorrect, but deterministic results.
> Maybe in the second case, only 1 partition out of the 10 is consumed at
> once?
>
> Best,
> Theo
>
> On 13. Feb 2023, at 08:15, Theodor Wübker 
> wrote
>
> Hey Yuxia,
>
> thanks for your response. I figured too, that the events arrive in a
> (somewhat) random order and thus cause non-determinism. I used a
> Watermark like this: "timeStampData - INTERVAL '10' SECOND". Increasing
> the Watermark Interval does not solve the problem though, the results are
> still not deterministic. Instead I keyed the 10 partition topic. Now
> results are deterministic, but they are incorrect (way too few). Am I doing
> something fundamentally wrong? I just need the messages to be in somewhat
> in order (just so they don’t violate the watermark).
>
> Best,
> Theo
>
> (sent again, sorry, I previously only responded to you, not the Mailing
> list by accident)
>
> On 13. Feb 2023, at 08:14, Theodor Wübker 
> wrote:
>
> Hey Yuxia,
>
> thanks for your response. I figured too, that the events arrive in a
> (somewhat) random order and thus cause non-determinism. I used a
> Watermark like this: "timeStampData - INTERVAL '10' SECOND".
> Increasing the Watermark Interval does not solve the problem though, the
> results are still not deterministic. Instead I keyed the 10 partition
> topic. Now results are deterministic, but they are incorrect (way too few).
> Am I doing something fundamentally wrong? I just need the messages to be in
> somewhat in order (just so they don’t violate the watermark).
>
> Best,
> Theo
>
> On 13. Feb 2023, at 04:23, yuxia  wrote:
>
> HI, Theo.
> I'm wondering what the Event-Time-Windowed Query you are using looks like.
> For example, how do you define the watermark?
> Considering you read records from the 10 partitions, and it may well that
> the records will arrive the window process operator out of order.
> Is it possible that the records exceed the watermark, but there're still
> some records will arrive?
>
> If that's the case, every time, the records used to calculate result may
> well different and then result in non-determinism result.
>
> Best regards,
> Yuxia
>
> - Original Message -
> From: "Theodor Wübker" 
> To: "User" 
> Sent: Sunday, February 12, 2023 4:25:45 PM
> Subject: Non-Determinism in Table-API with Kafka and Event Time
>
> Hey everyone,
>
> I experience non-determinism in my Table API Program at the moment and (as
> a relatively unexperienced Flink and Kafka user) I can’t really explain to
> myself why it happens. So, I have a topic with 10 Partitions and a bit of
> Data on it. Now I run a simple SELECT * query on this, that moves some
> attributes around and writes everything on another topic with 10
> partitions. Then, on this topic I run a Event-Time-Windowed Query. Now I
> experience Non-Determinism: The results of the windowed query differ with
> every execution.
> I thought this might be, because the SELECT query wrote the data to the
> partitioned topic without keys. So I tried it again with the same key I
> used for the original topic. It resulted in the exact same topic structure.
> Now when I run the Event-Time-Windowed query, I get incorrect results (too
> few result-entries).
>
> I have already read a lot of the Docs on this and can’t seem to figure it
> out. I would much appreciate, if someone could shed a bit of light on this.
> Is 

[JSch-users] Jsch Databases

2013-01-15 Thread Hector Rios
Hello! I have been playing around with Jsch to establish an SSH
connection to my remote server. I have followed numerous tutorials but
seem to be banging my head against the wall.

I am using Eclipse Juno & Windows 7 64-bit.

I am trying to connect to my web server, and access MySQL through an
SSH tunnel. The MySQL application lives in the web-server.

Web Server: web1.fisherdigital.org
Web Server Port Used for SSH: 22
MySQL Port in the web-server: 3306 (default mysql port)
My IP: 173.X.X.X
My machine name is: webdev02

My Code looks as follows:
Connection conn = null;

JSch jsc = new JSch();

java.util.Properties config = new java.util.Properties();
config.put("StrictHostKeyChecking", "no");
String user = sshUserName;
String rhost = "webdev02";
String host = "web1.fisherdigital.org";

int sshPort = 22, lport = 2122, rport = 3306;
Session session = jsc.getSession(user, host, sshPort);

session.setConfig(config); session.setHost(host);
session.setPassword(sshUserPassword);
session.connect();
int assigned_port = session.setPortForwardingL(lport, rhost, rport);

Class.forName("com.mysql.jdbc.Driver").newInstance();
System.out.println("Attempting to connect");
conn = DriverManager.getConnection("jdbc:mysql://127.0.0.1:" +
        assigned_port + "/schemaName", dbUser, dbUserPassword);
System.out.println("CONNECTED!");
conn.close();
System.exit(1);

The issue is that whenever I use DriverManager.getConnection, it
outputs "Attempting to connect" then hangs.

Any pointers would be appreciated.
- Elm

--
Master SQL Server Development, Administration, T-SQL, SSAS, SSIS, SSRS
and more. Get SQL Server skills now (including 2012) with LearnDevNow -
200+ hours of step-by-step video tutorials by Microsoft MVPs and experts.
SALE $99.99 this month only - learn more at:
http://p.sf.net/sfu/learnmore_122512
___
JSch-users mailing list
JSch-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/jsch-users


Re: [JSch-users] Jsch Databases [SOLVED]

2013-01-15 Thread Hector Rios
Hi Heiner!

Thank you for the advice! Shortly after e-mailing the distribution group, I
fiddled around with my hostnames.

One thing that I changed was the String rhost to be “127.0.0.1” instead
of “webdev02”. This seemed to have done the trick.

I appreciate the great and prompt response,

Hector Rios

 *From:* Heiner Westphal westp...@verit.de
*Sent:* ‎January‎ ‎15‎, ‎2013 ‎5‎:‎08‎ ‎PM
*To:* Hector Rios labeledlo...@gmail.com
*CC:* jsch-users@lists.sourceforge.net
*Subject:* Re: [JSch-users] Jsch & Databases

Hello Hector,

the long wait sounds to me like a firewall dropping the packets.
The Java code looks OK, AFAICS.

There are some obstacles to circumvent in the setup outside of jsch/ssh (I
assume the webserver is running linux):
- mysqld must allow tcp connections
check with e.g.
  sudo netstat -nlp | grep 3306
   - should give s.t. like
tcp 0 0 0.0.0.0:3306 0.0.0.0:* LISTEN 1234/mysqld
or
  sudo lsof -i:3306|grep LISTEN
   - should return something similar to
mysqld 1234 mysql 10u IPv4 4649 0t0 TCP *:mysql (LISTEN)

- no firewall on web1.fisherdigital.org should block port 3306
try
  sudo iptables --list
to see the firewall rules.

On windows use
netstat -n | find "3306"
to see if mysql listens on a tcp port (maybe mysql does it always on
windows, I'm not sure).

Best regards,

Heiner

Am 15.01.2013 23:27, schrieb Hector Rios:

 Hello! I have been playing around with Jsch to establish an SSH
 connection to my remote server. I have followed numerous tutorials but
 seem to be banging my head against the wall.

 I am using Eclipse Juno & Windows 7 64-bit.

 I am trying to connect to my web server, and access MySQL through an
 SSH tunnel. The MySQL application lives in the web-server.

 Web Server: web1.fisherdigital.org
 Web Server Port Used for SSH: 22
 MySQL Port in the web-server: 3306 (default mysql port)
 My IP: 173.X.X.X
 My machine name is: webdev02

 My Code looks as follows:
 Connection conn = null;

 JSch jsc = new JSch();

 java.util.Properties config = new java.util.Properties();
 config.put("StrictHostKeyChecking", "no");
 String user = sshUserName;
 String rhost = "webdev02";
 String host = "web1.fisherdigital.org";

 int sshPort = 22, lport = 2122, rport = 3306;
 Session session = jsc.getSession(user, host, sshPort);

 session.setConfig(config); session.setHost(host);
 session.setPassword(sshUserPassword);
 session.connect();
 int assigned_port = session.setPortForwardingL(lport,
 rhost, rport);

 Class.forName("com.mysql.jdbc.Driver").newInstance();
 System.out.println("Attempting to connect");
 conn = DriverManager.getConnection("jdbc:mysql://127.0.0.1:" +
 assigned_port + "/schemaName", dbUser, dbUserPassword);
 System.out.println("CONNECTED!");
 conn.close();
 System.exit(1);

 The issue is that whenever I use DriverManager.getConnection, it
 outputs "Attempting to connect" then hangs.

 Any pointers would be appreciated.
 - Elm




-- 
Dipl.Inform.
Heiner Westphal
Senior Software Engineer

verit Informationssysteme GmbH
Europaallee 10
67657 Kaiserslautern

E-Mail: heiner.westp...@verit.de
Telefon: +49 631 520 840 00
Telefax: +49 631 520 840 01
Web: http://www.verit.de/

Registergericht: Amtsgericht Kaiserslautern
Registernummer: HRB 3751
Geschäftsleitung: Claudia Könnecke, Torsten Stolpmann
--
Master Java SE, Java EE, Eclipse, Spring, Hibernate, JavaScript, jQuery
and much more. Keep your Java skills current with LearnJavaNow -
200+ hours of step-by-step video tutorials by Java experts.
SALE $49.99 this month only -- learn more at:
http://p.sf.net/sfu/learnmore_122612 ___
JSch-users mailing list
JSch-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/jsch-users


Re: RES: [firebird-support] Re: firebird.conf improve for lot of memory

2012-10-29 Thread HECTOR RIOS


 


 From: Fabiano fabianoas...@gmail.com
To: firebird-support@yahoogroups.com 
Sent: Monday, October 29, 2012 12:52 PM
Subject: RES: [firebird-support] Re: firebird.conf improve for lot of memory
  
 
   
 
Performance is not only memory; it is a combination of network speed, hard
drive speed, memory size and speed, and finally processor(s) speed.

You do not need to do that. Maybe if you do not know exactly what you are
doing, you can get worse performance.




Re: [firebird-support] How to make queries to a temporary result set

2012-09-22 Thread HECTOR RIOS
Hi Mercea,
Currently, I´m using the Developer Express grid control and can do as you suggest, 
but I´m more interested in knowing whether Firebird supports temporary result sets.
Regards,
Hector Rios
 


 From: Mercea Paul paul.mer...@almexa.ro
To: firebird-support@yahoogroups.com 
Sent: Saturday, September 22, 2012 3:48 AM
Subject: Re: [firebird-support] How to make queries to a temporary result set
  

 
   
 
On 2012.09.21 10:12 PM, HECTOR RIOS wrote:

 Hi,
 I´m trying to save some sql parsing in a Delphi application by using 
 temporary result sets. Is there such thing in Firebird?
 For example: some user makes a query with some filters and then wants 
 to make another query over results from previous query.
 It would be easy if added the filters over the result set without 
 having to mix previous filters plus last query´s filters.
 Anybody has an easy alternative for this?

You can run the query with the new filter or if you have a dataset 
populated from first query you can filter data from dataset with filter 
condition without interrogating again the server, this is from UI controls.
I don't know the equivalent in Delphi, but from C# I can do:

private void tbFilter_KeyUp(object sender, KeyEventArgs e)
{
    if (tbFilter.Text.Length > 0)
    {
        bsProducts.Filter = "PRODUCTNAME LIKE '%" + tbFilter.Text + "%'";
    }
    else
    {
        bsProducts.Filter = "";
    }
}
bsProducts is populated running the main query.

HTH,
Paul MERCEA




Re: RES: [firebird-support] How to make queries to a temporary result set

2012-09-22 Thread HECTOR RIOS
Hi Andrea,
 
But would you have to create the table?
Regards,
 
Hector Rios
 


 From: Andrea Raimondi andrea.raimo...@gmail.com
To: firebird-support@yahoogroups.com 
Sent: Saturday, September 22, 2012 12:06 AM
Subject: Re: RES: [firebird-support] How to make queries to a temporary result 
set
  

 
   
 
I am not 100% sure about this, but I think Firebird supports temporary
tables. You might try with that...
On Sep 22, 2012 1:41 AM, HECTOR RIOS mailto:scadcam2004%40yahoo.com wrote:

 Hi Fabiano,

 I was thinking more like this:

 1. User wants bills from client x and program makes a

 select * from clients where name= x

 2. Program displays x bills in some grid.

 3. Later on, user wants x bills from march and I would like to make a
query in program like this

 select * from lastResultSet where month(bill_date) = 'march'

 of course, in this particular example, I could have made a query like

 select * from clients where name= x and month(bill_date)= 'march'

 but sometimes the second query is not added so easily, you would have to
make some parsing to mix both queries.
 It would be a lot easier if there were some temporary result set and
apply filters to this temporary.

 Regards,

 Hector Rios


 
  From: Fabiano mailto:fabianoaspro%40gmail.com
 To: mailto:firebird-support%40yahoogroups.com
 Sent: Friday, September 21, 2012 1:26 PM
 Subject: RES: [firebird-support] How to make queries to a temporary
result set





 Maybe:

 Select * from

 (

 select * from table where field1 = 1

 ) as FILTER

 Where FILTER.field2 = 2

 Fabiano.




 

 ++

 Visit http://www.firebirdsql.org/ and click the Resources item
 on the main (top) menu.  Try Knowledgebase and FAQ links !

 Also search the knowledgebases at http://www.ibphoenix.com/

 ++
 Yahoo! Groups Links







Re: RES: [firebird-support] How to make queries to a temporary result set

2012-09-22 Thread HECTOR RIOS
Hi,
Seems like the temporary result set does not exist, or at least nobody seems to 
have heard about it.
On second thought, I think Fabiano´s suggestion is workable for me because, in 
the application, the previous query could be saved (the one inside the parentheses) 
and then the outer select (the subsequent query) added. This solution would not 
require parsing.
Also, I found something about CTE and derived queries in firebird documentation 
that I have to check.
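For illustration, the CTE form (supported since Firebird 2.1) would look something like this; an untested sketch reusing the table and column names from the earlier example, and using EXTRACT since Firebird has no month() function:

```sql
WITH last_result AS (
  SELECT * FROM clients WHERE name = 'x'
)
SELECT *
FROM last_result
WHERE EXTRACT(MONTH FROM bill_date) = 3  -- March
```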
 
Thanks,
 
Hector Rios
 


 From: Andrea Raimondi andrea.raimo...@gmail.com
To: firebird-support@yahoogroups.com 
Sent: Saturday, September 22, 2012 12:06 AM
Subject: Re: RES: [firebird-support] How to make queries to a temporary result 
set
  

 
   
 
I am not 100% sure about this, but I think Firebird supports temporary
tables. You might try with that...
On Sep 22, 2012 1:41 AM, HECTOR RIOS mailto:scadcam2004%40yahoo.com wrote:

 Hi Fabiano,

 I was thinking more like this:

 1. User wants bills from client x and program makes a

 select * from clients where name= x

 2. Program displays x bills in some grid.

 3. Later on, user wants x bills from march and I would like to make a
query in program like this

 select * from lastResultSet where moth(bill_date) = 'march'

 of course, in this particular example, I could have made a query like

 select * from clients where name= x and month(bill_date)= 'march'

 but sometimes the second query is not added so easily, you would have to
make some parsing to mix both queries.
 It would be a lot easier if there were some temporary result set and
apply filters to this temporary.

 Regards,

 Hector Rios


 
  From: Fabiano mailto:fabianoaspro%40gmail.com
 To: mailto:firebird-support%40yahoogroups.com
 Sent: Friday, September 21, 2012 1:26 PM
 Subject: RES: [firebird-support] How to make queries to a temporary
result set





 Maybe:

 Select * from

 (

 select * from table where field1 = 1

 ) as FILTER

 Where FILTER.field2 = 2

 Fabiano.




 

 ++

 Visit http://www.firebirdsql.org/ and click the Resources item
 on the main (top) menu.  Try Knowledgebase and FAQ links !

 Also search the knowledgebases at http://www.ibphoenix.com/

 ++
 Yahoo! Groups Links







[firebird-support] How to make queries to a temporary result set

2012-09-21 Thread HECTOR RIOS
Hi,
I'm trying to avoid some SQL parsing in a Delphi application by using temporary 
result sets. Is there such a thing in Firebird?
For example: a user makes a query with some filters and then wants to make 
another query over the results of the previous query.
It would be easy if the new filters could be added over the result set without 
having to merge the previous filters with the new query's filters.
Does anybody have an easy alternative for this?
 
Regards,
Hector Rios




Re: RES: [firebird-support] How to make queries to a temporary result set

2012-09-21 Thread HECTOR RIOS
Hi Fabiano,
 
I was thinking more like this:
 
1. User wants bills from client x and program makes a 
 
select * from clients where name= x
 
2. Program displays x bills in some grid.
 
3. Later on, user wants x bills from march and I would like to make a query in 
program like this
 
select * from lastResultSet where month(bill_date) = 'march'
 
of course, in this particular example, I could have made a query like
 
select * from clients where name= x and month(bill_date)= 'march'
 
but sometimes the second query is not added so easily; you would have to do 
some parsing to merge both queries.
It would be a lot easier if there were some temporary result set to which 
filters could be applied.
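As an aside (editor's note, not from the original mail): standard Firebird SQL has no month() function returning names like 'march'; the usual way to filter on a month is EXTRACT, with months as numbers. A sketch using the thread's hypothetical table and columns:

```sql
SELECT *
FROM clients
WHERE name = 'x'
  AND EXTRACT(MONTH FROM bill_date) = 3;  -- 3 = March
```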
 
Regards,
 
Hector Rios
 


 From: Fabiano fabianoas...@gmail.com
To: firebird-support@yahoogroups.com 
Sent: Friday, September 21, 2012 1:26 PM
Subject: RES: [firebird-support] How to make queries to a temporary result set
  

 
   
 
Maybe:

Select * from

(

select * from table where field1 = 1

) as FILTER

Where FILTER.field2 = 2

Fabiano.




Re: [firebird-support] hostname:databasename

2012-09-16 Thread HECTOR RIOS
Hi Sean, Helen, Mark

Finally got it to work. It is easy, once you discover endpoints in Azure.
I just had to add an endpoint for my virtual machine in the Windows Azure 
Dashboard, specifying an endpoint with public port 3050 and private port 3050. 
I suppose it is some kind of proxy.
With that, the virtual machine behaves like a normal remote host and you can 
use the regular  
connect scadcam.cloudapp.net:employees
Thanks for your interest.
Hector Rios




 From: HECTOR RIOS scadcam2...@yahoo.com
To: firebird-support@yahoogroups.com firebird-support@yahoogroups.com 
Sent: Saturday, September 8, 2012 3:53 PM
Subject: Re: [firebird-support] hostname:databasename
 

  
Hi Sean,
 
I'm just beginning with this cloud thing. It would seem to me like a hassle to 
do that if the Azure guys wanted to make it easy. I expected it to behave like a 
normal remote host, but maybe you are right about having to log in / open a 
connection to access cloud resources, even a database server.
At first I thought I could get by with just hostname:databasename, but the 
syntax is not helping. I have to research whether the Microsoft SQL Server guys 
have to log in, to know if it is a real syntax problem.
Not sure how to do the proxy part, but it sounds logical. 
 
Regards,
Hector Rios



From: Leyne, Sean s...@broadviewsoftware.com
To: firebird-support@yahoogroups.com firebird-support@yahoogroups.com 
Sent: Saturday, September 8, 2012 3:09 PM
Subject: RE: [firebird-support] hostname:databasename


  

Hector,

 Sounds nice but still cannot connect with
 scadcam.cloudapp.net/51160:employees
 Is this / to indicate the port firebird uses or for the port to get access to 
 the
 host?

Is scadcam.cloudapp.net:51160 how you connect to the remote cloud?

Doesn't a client need to login/open a connection to the cloud before you can 
access cloud resources?

How is port 51160 being mapped/assigned to your cloud Firebird instance?

I suspect that you need to create a redirector which would be installed on 
the remote client systems.  It would login/establish a connection to the cloud 
and then listen on port 3050 and proxy the TCP messages between the client 
and the cloud Firebird instance.

Sean






Re: [firebird-support] hostname:databasename

2012-09-08 Thread HECTOR RIOS
Hi Mark,
 
Sounds nice, but I still cannot connect with 
scadcam.cloudapp.net/51160:employees
Is the / there to indicate the port Firebird uses, or the port used to get 
access to the host?
If I run isql inside the virtual machine and try the following:
a.  connect employees    (works fine)
b.  connect localhost:employees  (works fine)
c.  connect localhost/3050:employees (works fine)
d.  connect localhost/3060:employees (fails with sqlstate=08006)
 
I have turned off the firewall on both sides, but it still refuses the 
connection with sqlstate=08004 
Because of point c above, I think / only indicates the port Firebird uses, 
not the port used to get access to the virtual machine. Is there 
something like a cascading of ports?  From within Windows Azure, 
scadcam.cloudapp.net gets you to your environment (I'm still not familiar with 
it), but you still need to give the port that takes you to one of possibly 
several virtual machines, and then specify the Firebird port (not really needed 
here because it is using the default Firebird port).
Does that sound rational?
 
Regards,
Hector Rios
 


 From: Mark Rotteveel m...@lawinegevaar.nl
To: firebird-support@yahoogroups.com 
Sent: Saturday, September 8, 2012 3:23 AM
Subject: Re: [firebird-support] hostname:databasename
  

 
   
 
On 7-9-2012 22:35, scadcam2004 wrote:
 Hi,
 Hope somebody can help me.
 Is it possible to use other than : to separate hostname and database name?
 My problem is this:
 I just started a project using a Microsoft Azure virtual machine in the cloud 
 that will host a firebird database.
 To access the virtual machine you need to give a url plus port number. Like 
 this:
 scadcam.cloudapp.net:51160 This would be the hostname.
 But when trying to access the database with 
 scadcam.cloudapp.net:51160:employees, firebird reads up to the first : 
 leaving out the port number needed to access the host.
 Is there a workaround?
 Regards,
 Héctor Ríos Mendoza
 scadcam@gmail.com
 614 4155821, 4160317

For the Firebird library itself you simply need to use 
scadcam.cloudapp.net/51160:employees

Microsoft writes scadcam.cloudapp.net:51160 because almost 
all applications use ':' as the separator between hostname and 
port. Firebird doesn't; it uses '/' as the separator. At the TCP/IP 
level, where the actual connection is created, that distinction doesn't 
even exist.
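To make the separator rule concrete, these are the isql connection forms discussed in this thread (host and port values are the poster's examples, not verified here):

```sql
CONNECT 'employees';                             -- local, no network layer
CONNECT 'localhost:employees';                   -- TCP/IP, default port 3050
CONNECT 'localhost/3050:employees';              -- port given explicitly with '/'
CONNECT 'scadcam.cloudapp.net/51160:employees';  -- remote host, non-standard port
```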

Mark
-- 
Mark Rotteveel
   
  




Re: [firebird-support] hostname:databasename

2012-09-08 Thread HECTOR RIOS
Hi Helen,
 
The 51160 port is not the Firebird port; it is a port used to connect to a 
virtual machine. If you had several virtual machines, you'd use a different 
port to communicate with each one.
For example, I connect to the virtual machine (not the database) with Remote 
Desktop using   scadcam.cloudapp.net:51165, and then give the Windows 
account name and password. With that you have a virtual Windows Server 2008.
Then I go to a cmd shell, change to the Firebird folder, and start    isql
Once inside isql, I can connect with   
connect employees    or with   
connect localhost:employees   or with 
connect localhost/3050:employees   (I'm using the default Firebird port).
 
That's why, I think,  
 
scadcam.cloudapp.net/51160:employees   
 
does not work in this case.
 
Regards,
Hector
 


 From: Helen Borrie hele...@iinet.net.au
To: firebird-support@yahoogroups.com 
Sent: Friday, September 7, 2012 2:55 PM
Subject: Re: [firebird-support] hostname:databasename
  

 
   
 
At 08:35 AM 8/09/2012, scadcam2004 wrote:
Hi,
Hope somebody can help me.
Is it possible to use other than : to separate hostname and database name?
My problem is this:
I just started a project using a Microsoft Azure virtual machine in the cloud 
that will host a firebird database.
To access the virtual machine you need to give a url plus port number. Like 
this: 
scadcam.cloudapp.net:51160 This would be the hostname.
But when trying to access the database with 
scadcam.cloudapp.net:51160:employees, firebird reads up to the first : 
leaving out the port number needed to access the host.
Is there a workaround?

scadcam.cloudapp.net/51160:employees is the correct TCP/IP syntax when using a 
non-standard port.  This isn't a workaround, it's the external standard for 
TCP/IP connection. 
Also locate the parameter 
#RemoteServicePort = 3050
in firebird.conf on the server, delete the '#' marker and change the port 
number to 51160.
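The resulting entry would look like this (a sketch; 51160 is the port from this thread):

```
# firebird.conf on the server: uncomment the parameter and change the port
RemoteServicePort = 51160
```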

I don't know whether your Microsoft Azure thing recognises external standards, 
though.  If this doesn't work, your best bet would be to contact Microsoft 
support about it.

If you are using events, you will also need to open a specific port for event 
traffic and set the RemoteAuxPort parameter in firebird.conf.

./hb

Regards,
Héctor Ríos Mendoza
scadcam@gmail.com
614 4155821, 4160317








   
  




Re: [firebird-support] hostname:databasename

2012-09-08 Thread HECTOR RIOS
Hi Sean,
 
I'm just beginning with this cloud thing. It would seem to me like a hassle to 
do that if the Azure guys wanted to make it easy. I expected it to behave like a 
normal remote host, but maybe you are right about having to log in / open a 
connection to access cloud resources, even a database server.
At first I thought I could get by with just hostname:databasename, but the 
syntax is not helping. I have to research whether the Microsoft SQL Server guys 
have to log in, to know if it is a real syntax problem.
Not sure how to do the proxy part, but it sounds logical. 
 
Regards,
Hector Rios
 


 From: Leyne, Sean s...@broadviewsoftware.com
To: firebird-support@yahoogroups.com firebird-support@yahoogroups.com 
Sent: Saturday, September 8, 2012 3:09 PM
Subject: RE: [firebird-support] hostname:databasename
  

 
   
 
Hector,

 Sounds nice but still cannot connect with
 scadcam.cloudapp.net/51160:employees
 Is this / to indicate the port firebird uses or for the port to get access to 
 the
 host?

Is scadcam.cloudapp.net:51160 how you connect to the remote cloud?

Doesn't a client need to login/open a connection to the cloud before you can 
access cloud resources?

How is port 51160 being mapped/assigned to your cloud Firebird instance?

I suspect that you need to create a redirector which would be installed on 
the remote client systems.  It would login/establish a connection to the cloud 
and then listen on port 3050 and proxy the TCP messages between the client 
and the cloud Firebird instance.


Sean
   
  




[Bug 661815] [NEW] package tzdata 2010l-1 failed to install/upgrade: unable to install new version of `/usr/sbin/tzconfig': Read-only file system

2010-10-16 Thread Hector Rios
Public bug reported:

Binary package hint: tzdata

ubu...@ubuntu:~$ lsb_release -rd
Description:Ubuntu 10.10
Release:10.10

I'm trying to install Ubuntu 10.10 on a clean hard drive in a MacBook
Pro (Late 2007 model), and every time it gets to downloading updates from
the internet, it crashes.

ProblemType: Package
DistroRelease: Ubuntu 10.10
Package: tzdata 2010l-1
ProcVersionSignature: Ubuntu 2.6.35-22.33-generic 2.6.35.4
Uname: Linux 2.6.35-22-generic i686
Architecture: i386
Date: Sat Oct 16 12:13:04 2010
ErrorMessage: unable to install new version of `/usr/sbin/tzconfig': Read-only 
file system
LiveMediaBuild: Ubuntu 10.10 Maverick Meerkat - Release i386 (20101007)
PackageArchitecture: all
SourcePackage: tzdata
Title: package tzdata 2010l-1 failed to install/upgrade: unable to install new 
version of `/usr/sbin/tzconfig': Read-only file system

** Affects: tzdata (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: apport-package i386 maverick

-- 
package tzdata 2010l-1 failed to install/upgrade: unable to install new version 
of `/usr/sbin/tzconfig': Read-only file system
https://bugs.launchpad.net/bugs/661815
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 661815] Re: package tzdata 2010l-1 failed to install/upgrade: unable to install new version of `/usr/sbin/tzconfig': Read-only file system

2010-10-16 Thread Hector Rios




Fwd: Low price on prescription drugs Without a Prescription t ooaaugt oah v

2003-12-01 Thread Hector Rios
Title: Absolutely No Doctor's Prescription Needed
SAVE LOTS ON HUNDREDS OF MEDICATIONS. Absolutely No Doctor's Prescription 
Needed! Upon approval, our US licensed physicians will write an FDA approved 
prescription for you and the product will be filled and shipped by a US licensed 
pharmacist overnight to your door, immediately and discreetly. Order by 2 pm EST 
and you get your meds tomorrow. Get Your Meds Here and Save!


Problem getting Netscape to forward requests to Tomcat.

2001-07-25 Thread Hector Rios

Hello,


I've been trying to configure Netscape Enterprise Server 3.6 and Tomcat 3.2.3 to work together, but with no luck.

I went through the steps outlined in the "Tomcat Netscape HowTo" document and everything appears to work. The server starts OK and the settings show that it is indeed using nsapi_redirect.dll, but when I try to access http://localhost:80/examples I get an error saying the page could not be found in the Netscape docs folder.

The server does not appear to be redirecting the requests at all. One thing to note is that I removed all the entries concerning ajp13 configuration from the workers.properties file; until I did that, the server would not start.

Is there anything I'm not doing right? I'm sure it's something simple, but I would appreciate any help that anyone could offer.

Thank you in advance

Hector


-- below you'll find my obj.conf file and workers.properties file.


[attachments: workers.properties, obj.conf]