[slurm-users] Inconsistencies in CPU time Reporting by sreport and sacct Tools

2024-04-17 Thread KK via slurm-users
I wish to ascertain the CPU core time utilized by users dj1 and dj. I have
tested with sreport cluster UserUtilizationByAccount, sreport job
SizesByAccount, and sacct. It appears that sreport cluster
UserUtilizationByAccount displays the total core hours used by the entire
account, rather than the individual user's CPU time. Here are the specifics:

Users dj and dj1 are both under the account mehpc.

Between 2024-04-12 and 2024-04-15, dj1 used approximately 10 minutes of core
time, while dj used about 4 minutes. However, "sreport Cluster
UserUtilizationByAccount user=dj1 start=2024-04-12 end=2024-04-15" shows 14
minutes of usage. Similarly, "sreport job SizesByAccount Users=dj
start=2024-04-12 end=2024-04-15" shows about 14 minutes.
Using "sreport job SizesByAccount Users=dj1 start=2024-04-12
end=2024-04-15" or "sacct -u dj1 -S 2024-04-12 -E 2024-04-15 -o
"jobid,partition,account,user,alloccpus,cputimeraw,state,workdir%60" -X
|awk 'BEGIN{total=0}{total+=$6}END{print total}'" yields the accurate
values, which are around 10 minutes for dj1.
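
For a per-user comparison over the same window in one pass, something along
these lines should also work (a sketch only; it sums CPUTimeRAW per user for
the mehpc account and converts seconds to minutes):

sacct -a -A mehpc -X -S 2024-04-12 -E 2024-04-15 -P --noheader -o user,cputimeraw \
  | awk -F'|' '{sum[$1]+=$2} END {for (u in sum) printf "%s: %.1f min\n", u, sum[u]/60}'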

Attached are the details.


detail_results
Description: Binary data

-- 
slurm-users mailing list -- slurm-users@lists.schedmd.com
To unsubscribe send an email to slurm-users-le...@lists.schedmd.com


[slurm-users] Fwd: sreport cluster UserUtilizationByaccount Used result versus sreport job SizesByAccount or sacct: inconsistencies

2024-04-15 Thread KK via slurm-users
-- Forwarded message -
From: KK
Date: Mon, 15 Apr 2024, 13:25
Subject: sreport cluster UserUtilizationByaccount Used result versus
sreport job SizesByAccount or sacct: inconsistencies
To: 


I wish to ascertain the CPU core hours utilized by users dj1 and dj. I have
tested with sreport cluster UserUtilizationByAccount, sreport job
SizesByAccount, and sacct. It appears that sreport cluster
UserUtilizationByAccount displays the total core hours used by the entire
account, rather than the individual user's CPU time. Here are the specifics:

Users dj and dj1 are both under the account mehpc.

Between 2024-04-12 and 2024-04-15, dj1 used approximately 10 minutes of core
time, while dj used about 4 minutes. However, "*sreport Cluster
UserUtilizationByAccount user=dj1 start=2024-04-12 end=2024-04-15*" shows
14 minutes of usage. Similarly, "*sreport job SizesByAccount Users=dj
start=2024-04-12 end=2024-04-15*" shows about 14 minutes.
Using "*sreport job SizesByAccount Users=dj1 start=2024-04-12
end=2024-04-15*" or "*sacct -u dj1 -S 2024-04-12 -E 2024-04-15 -o
"jobid,partition,account,user,alloccpus,cputimeraw,state,workdir%60" -X
|awk 'BEGIN{total=0}{total+=$6}END{print total}'*" yields the accurate
values, which are around 10 minutes for dj1. Here are the details:

[root@ood-master ~]# sacctmgr list assoc format=cluster,user,account,qos
   Cluster       User    Account        QOS
---------- ---------- ---------- ----------
     mehpc                  root     normal
     mehpc       root       root     normal
     mehpc                 mehpc     normal
     mehpc         dj      mehpc     normal
     mehpc        dj1      mehpc     normal


[root@ood-master ~]# sacct -X -u dj1 -S 2024-04-12 -E 2024-04-15 -o
jobid,ncpus,elapsedraw,cputimeraw
JobID         NCPUS ElapsedRaw CPUTimeRAW
------------ ------ ---------- ----------
4                 1         60         60
5                 2        120        240
6                 1         61         61
8                 2        120        240
9                 0          0          0

[root@ood-master ~]# sacct -X -u dj -S 2024-04-12 -E 2024-04-15 -o
jobid,ncpus,elapsedraw,cputimeraw
JobID         NCPUS ElapsedRaw CPUTimeRAW
------------ ------ ---------- ----------
7                 2        120        240


[root@ood-master ~]# sreport job SizesByAccount Users=dj1 start=2024-04-12
end=2024-04-15

Job Sizes 2024-04-12T00:00:00 - 2024-04-14T23:59:59 (259200 secs)
Time reported in Minutes

  Cluster   Account     0-49 CPUs   50-249 CPUs  250-499 CPUs  500-999 CPUs  >= 1000 CPUs  % of cluster
--------- --------- ------------- ------------- ------------- ------------- ------------- -------------
    mehpc      root            10             0             0             0             0       100.00%


[root@ood-master ~]# sreport job SizesByAccount Users=dj start=2024-04-12
end=2024-04-15

Job Sizes 2024-04-12T00:00:00 - 2024-04-14T23:59:59 (259200 secs)
Time reported in Minutes

  Cluster   Account     0-49 CPUs   50-249 CPUs  250-499 CPUs  500-999 CPUs  >= 1000 CPUs  % of cluster
--------- --------- ------------- ------------- ------------- ------------- ------------- -------------
    mehpc      root             4             0             0             0             0       100.00%


[root@ood-master ~]# sreport Cluster UserUtilizationByAccount user=dj1
start=2024-04-12 end=2024-04-15

Cluster/User/Account Utilization 2024-04-12T00:00:00 - 2024-04-14T23:59:59
(259200 secs)
Usage reported in CPU Minutes

  Cluster      Login  Proper Name    Account       Used     Energy
--------- ---------- ------------ ---------- ---------- ----------
    mehpc        dj1      dj1 dj1      mehpc         14          0



[root@ood-master ~]# sreport Cluster UserUtilizationByAccount user=dj
start=2024-04-12 end=2024-04-15

Cluster/User/Account Utilization 2024-04-12T00:00:00 - 2024-04-14T23:59:59
(259200 secs)
Usage reported in CPU Minutes

  Cluster      Login  Proper Name    Account       Used     Energy
--------- ---------- ------------ ---------- ---------- ----------
    mehpc         dj        dj dj      mehpc         14          0
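
Summing the CPUTimeRAW values from the sacct output above gives
60 + 240 + 61 + 240 + 0 = 601 s (roughly 10 minutes) for dj1 and 240 s
(4 minutes) for dj, while 601 + 240 = 841 s is roughly 14 minutes, the figure
sreport Cluster UserUtilizationByAccount reports for each user, i.e. the whole
mehpc account. A quick way to reproduce a single user's sum in minutes, using
the same sacct options as above (a sketch):

sacct -X -u dj1 -S 2024-04-12 -E 2024-04-15 -P --noheader -o cputimeraw \
  | awk '{s+=$1} END {printf "%.1f min\n", s/60}'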


[root@ood-master ~]# sacct -u dj1 -S 2

[kde] [Bug 484388] New: threshold temperature warning is wrong

2024-03-24 Thread kk
https://bugs.kde.org/show_bug.cgi?id=484388

Bug ID: 484388
   Summary: threshold temperature warning is wrong
Classification: I don't know
   Product: kde
   Version: unspecified
  Platform: Arch Linux
OS: Linux
Status: REPORTED
  Severity: normal
  Priority: NOR
 Component: general
  Assignee: unassigned-b...@kde.org
  Reporter: k...@orly.at
  Target Milestone: ---

SUMMARY
***
NOTE: If you are reporting a crash, please try to attach a backtrace with debug
symbols.
See
https://community.kde.org/Guidelines_and_HOWTOs/Debugging/How_to_create_useful_crash_reports
***
Miniprogram : Thermal Monitor 0.1.4

STEPS TO REPRODUCE
1. Open Settings
2. GoTo: Appearance
3.  Enable danger color




OBSERVED RESULT

Even when the sensor temperature is below the warning threshold, the font color
is red (i.e. warning).

EXPECTED RESULT
If the temperature is below the warning threshold, the font color should be the
same as the descriptor color (for example: black).

SOFTWARE/OS VERSIONS
Windows: 
macOS: 
Linux/KDE Plasma: Arch 6.8.1-arch-1(64bit)
(available in About System)
KDE Plasma Version: 6.0.2
KDE Frameworks Version:  6.0.0
Qt Version: 6.6.2

ADDITIONAL INFORMATION
Platform Wayland

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: STM32H7 serial TX DMA issues

2024-03-08 Thread Kian Karas (KK)
Hi,

here is the pull request:
https://github.com/apache/nuttx/pull/11871

My initial comments (and "fix") for uart_xmitchars_dma() are no longer 
relevant. Hence, those changes are no longer included.

Regards
Kian

From: Sebastien Lorquet 
Sent: 08 March 2024 11:13
To: dev@nuttx.apache.org 
Subject: Re: STM32H7 serial TX DMA issues

Hello,

Yes, stm32h7 uart transmission has issues. You can easily test this in
nsh with just an echo command and a very long string, eg > 64 ascii
chars. At first I believed it was buffering problems.

This caused me some headaches 1.5 years ago, but the DMA serial driver
is too complex for me to debug. I have disabled CONFIG_UARTn_TXDMA on
relevant uarts of my board.

Please give the link to your PR when it's ready so I can follow this
closely.

Thank you,

Sebastien


On 08/03/2024 at 10:29, David Sidrane wrote:
> Hi Kian,
>
> The problem with the semaphore is that it causes blocking when the port
> is opened non-blocking.
>
> Please do a PR so we can review it.
>
> David
>
>
> -----Original Message-
> From: Kian Karas (KK) 
> Sent: Friday, March 8, 2024 4:18 AM
> To: dev@nuttx.apache.org
> Subject: STM32H7 serial TX DMA issues
>
> Hi community
>
> The STM32H7 serial driver TX DMA logic is no longer working properly.
>
> The issues started with commit 660ac63b. Subsequent attempts (f92a9068,
> 6c186b60) have failed to get it working again.
>
> I think the original idea of 660ac63b is right, it just failed to restart
> TX DMA upon TX DMA completion (if needed).
>
> I would suggest reverting the following commits: 6c186b60 58f2a7b1
> 69a8b5b5. Then add the following patch as an amendment:
>
> diff --git a/arch/arm/src/stm32h7/stm32_serial.c
> b/arch/arm/src/stm32h7/stm32_serial.c
> index 120ea0f3b5..fc90c5d521 100644
> --- a/arch/arm/src/stm32h7/stm32_serial.c
> +++ b/arch/arm/src/stm32h7/stm32_serial.c
> @@ -3780,11 +3780,20 @@ static void up_dma_txcallback(DMA_HANDLE handle,
> uint8_t status, void *arg)
>   }
>   }
>
> -  nxsem_post(&priv->txdmasem);
> -
> /* Adjust the pointers */
>
> uart_xmitchars_done(&priv->dev);
> +
> +  /* Initiate another transmit if data is ready */
> +
> +  if (priv->dev.xmit.tail != priv->dev.xmit.head)
> +{
> +  uart_xmitchars_dma(&priv->dev);
> +}
> +  else
> +{
> +  nxsem_post(&priv->txdmasem);
> +}
>   }
>   #endif
>
> @@ -3806,6 +3815,14 @@ static void up_dma_txavailable(struct uart_dev_s
> *dev)
> int rv = nxsem_trywait(&priv->txdmasem);
> if (rv == OK)
>   {
> +  if (dev->xmit.head == dev->xmit.tail)
> +{
> +  /* No data to transfer. Release semaphore. */
> +
> +  nxsem_post(&priv->txdmasem);
> +  return;
> +}
> +
> uart_xmitchars_dma(dev);
>   }
>   }
>
>
> However, uart_xmitchars_dma() is currently not safe to call from an
> interrupt service routine, so the following patch would also be required:
>
> diff --git a/drivers/serial/serial_dma.c b/drivers/serial/serial_dma.c
> index aa99e801ff..b2603953ad 100644
> --- a/drivers/serial/serial_dma.c
> +++ b/drivers/serial/serial_dma.c
> @@ -97,26 +97,29 @@ void uart_xmitchars_dma(FAR uart_dev_t *dev)  {
> FAR struct uart_dmaxfer_s *xfer = &dev->dmatx;
>
> -  if (dev->xmit.head == dev->xmit.tail)
> +  size_t head = dev->xmit.head;
> +  size_t tail = dev->xmit.tail;
> +
> +  if (head == tail)
>   {
> /* No data to transfer. */
>
> return;
>   }
>
> -  if (dev->xmit.tail < dev->xmit.head)
> +  if (tail < head)
>   {
> -  xfer->buffer  = &dev->xmit.buffer[dev->xmit.tail];
> -  xfer->length  = dev->xmit.head - dev->xmit.tail;
> +  xfer->buffer  = &dev->xmit.buffer[tail];
> +  xfer->length  = head - tail;
> xfer->nbuffer = NULL;
> xfer->nlength = 0;
>   }
> else
>   {
> -  xfer->buffer  = &dev->xmit.buffer[dev->xmit.tail];
> -  xfer->length  = dev->xmit.size - dev->xmit.tail;
> +  xfer->buffer  = &dev->xmit.buffer[tail];
> +  xfer->length  = dev->xmit.size - tail;
> xfer->nbuffer = dev->xmit.buffer;
> -  xfer->nlength = dev->xmit.head;
> +  xfer->nlength = head;
>   }
>
> dev->tx_count += xfer->length + xfer->nlength;
>
>
> Any thoughts?
>
> Regards
> Kian


STM32H7 serial TX DMA issues

2024-03-08 Thread Kian Karas (KK)
Hi community

The STM32H7 serial driver TX DMA logic is no longer working properly.

The issues started with commit 660ac63b. Subsequent attempts (f92a9068, 
6c186b60) have failed to get it working again.

I think the original idea of 660ac63b is right, it just failed to restart TX 
DMA upon TX DMA completion (if needed).

I would suggest reverting the following commits: 6c186b60 58f2a7b1 69a8b5b5. 
Then add the following patch as an amendment:

diff --git a/arch/arm/src/stm32h7/stm32_serial.c 
b/arch/arm/src/stm32h7/stm32_serial.c
index 120ea0f3b5..fc90c5d521 100644
--- a/arch/arm/src/stm32h7/stm32_serial.c
+++ b/arch/arm/src/stm32h7/stm32_serial.c
@@ -3780,11 +3780,20 @@ static void up_dma_txcallback(DMA_HANDLE handle, 
uint8_t status, void *arg)
 }
 }

-  nxsem_post(&priv->txdmasem);
-
   /* Adjust the pointers */

   uart_xmitchars_done(&priv->dev);
+
+  /* Initiate another transmit if data is ready */
+
+  if (priv->dev.xmit.tail != priv->dev.xmit.head)
+{
+  uart_xmitchars_dma(&priv->dev);
+}
+  else
+{
+  nxsem_post(&priv->txdmasem);
+}
 }
 #endif

@@ -3806,6 +3815,14 @@ static void up_dma_txavailable(struct uart_dev_s *dev)
   int rv = nxsem_trywait(&priv->txdmasem);
   if (rv == OK)
 {
+  if (dev->xmit.head == dev->xmit.tail)
+{
+  /* No data to transfer. Release semaphore. */
+
+  nxsem_post(&priv->txdmasem);
+  return;
+}
+
   uart_xmitchars_dma(dev);
 }
 }


However, uart_xmitchars_dma() is currently not safe to call from an interrupt 
service routine, so the following patch would also be required:

diff --git a/drivers/serial/serial_dma.c b/drivers/serial/serial_dma.c
index aa99e801ff..b2603953ad 100644
--- a/drivers/serial/serial_dma.c
+++ b/drivers/serial/serial_dma.c
@@ -97,26 +97,29 @@ void uart_xmitchars_dma(FAR uart_dev_t *dev)
 {
   FAR struct uart_dmaxfer_s *xfer = &dev->dmatx;

-  if (dev->xmit.head == dev->xmit.tail)
+  size_t head = dev->xmit.head;
+  size_t tail = dev->xmit.tail;
+
+  if (head == tail)
 {
   /* No data to transfer. */

   return;
 }

-  if (dev->xmit.tail < dev->xmit.head)
+  if (tail < head)
 {
-  xfer->buffer  = &dev->xmit.buffer[dev->xmit.tail];
-  xfer->length  = dev->xmit.head - dev->xmit.tail;
+  xfer->buffer  = &dev->xmit.buffer[tail];
+  xfer->length  = head - tail;
   xfer->nbuffer = NULL;
   xfer->nlength = 0;
 }
   else
 {
-  xfer->buffer  = >xmit.buffer[dev->xmit.tail];
-  xfer->length  = dev->xmit.size - dev->xmit.tail;
+  xfer->buffer  = >xmit.buffer[tail];
+  xfer->length  = dev->xmit.size - tail;
   xfer->nbuffer = dev->xmit.buffer;
-  xfer->nlength = dev->xmit.head;
+  xfer->nlength = head;
 }

   dev->tx_count += xfer->length + xfer->nlength;


Any thoughts?

Regards
Kian


Re: Addition of STM32H7 MCU's

2024-01-18 Thread Kian Karas (KK)
Hi Robert, Community

We have NuttX running on an STM32H723VE, but haven't tested all peripherals. We 
also did some initial work on an STM32H730, but this has hardly been tested.

What is the best way to share the STM32H723VE support with the community? It
needs some reviewing. I am concerned we could have broken stuff for other MCUs
in the family, but I can't test this.

@Robert: if you are in a hurry, send me an email directly and I'll respond with 
a patch.

Regards
Kian

From: Robert Turner 
Sent: 18 January 2024 03:30
To: dev@nuttx.apache.org 
Subject: Re: Addition of STM32H7 MCU's

Nah not internal cache. The SRAM sizes for H723/5 are different from any of
those defined in arch/arm/include/stm32h7/chip.h
Suspect we need to get these correct as other files use these defs also,
such as stm32_allocateheap.c
Is Jorge's PR the one merged on Jul 12 (8ceff0d)?
Thanks,
Robert

On Thu, Jan 18, 2024 at 2:56 PM Alan C. Assis  wrote:

> Hi Robert,
> Thank you for the explanation! Is it about internal cache?
>
> Looking at
> https://www.st.com/en/microcontrollers-microprocessors/stm32h7-series.html
> I can see that H723/5 shares mostly everything with H743/5.
> I only tested NuttX on STM32H743ZI and STM32H753BI (Jorge and I added
> support for these a few weeks ago).
>
> Please take a look at Jorge's PRs, probably if you fix the memory in the
> linker script and the clock tree for your board NuttX will work fine on it.
>
> BR,
>
> Alan
>
> On Wed, Jan 17, 2024 at 10:25 PM Robert Turner  wrote:
>
> > Apologies, I should have been more specific, I was referring to parts in
> > the family which are not currently covered, such as the STM32H723xx which
> > we use. The RAM sizes definitions in chip.h for
> > CONFIG_STM32H7_STM32H7X3XX/CONFIG_STM32H7_STM32H7X5XX are incorrect for
> > the  STM32H723xx and  STM32H725xx.
> > BR,
> > Robert
> >
> > On Thu, Jan 18, 2024 at 1:28 PM Alan C. Assis  wrote:
> >
> > > Robert,
> > > STM32H7 family is already supported.
> > >
> > > Look at arch/arm/src/stm32h7 and equivalent at boards/
> > >
> > > BR,
> > >
> > > Alan
> > >
> > > On Tuesday, January 16, 2024, Robert Turner  wrote:
> > >
> > > > Did anyone finish supporting the broader STM32H7xx family? If so, is
> it
> > > > close to being mergeable or sendable as a patch?
> > > >
> > > > Thanks,
> > > > Robert
> > > >
> > > > On Fri, Sep 8, 2023 at 10:33 PM raiden00pl 
> > wrote:
> > > >
> > > > > > You're right, but not entirely) For example, chips of different
> > > > subseries
> > > > > have different interrupt vector tables. That is, the
> stm32h7x3xx_irq.h
> > > file
> > > > > lists interrupt vectors for the RM0433, but not for the RM0455 or
> > > > > RM0468. Although
> > > > > some chips from all these series have 7x3 in the name.
> > > > >
> > > > > I think they are the same (not checked, intuition tells me so). But
> > > some
> > > > > peripherals are not available on some chips and then the
> > > > > corresponding interrupt line is marked RESERVED, or its the same
> > > > peripheral
> > > > > but with upgraded functionality (QSPI/OCTOSPI) or
> > > > > for some reason ST changed its name to confuse devs. There should
> be
> > no
> > > > > conflict between IRQ lines.
> > > > >
> > > > > > Even if it duplicates 90% of the file it is better than #ifdefing
> > the
> > > > > > stm32h7x3xx_irq.h file. AKA ifdef rash!
> > > > >
> > > > > One file approach can be done with only one level of #ifdefs, one
> > level
> > > > of
> > > > > #ifdefs doesn't have a negative impact on code quality (but
> > > > > it's probably a matter of individual feelings).
> > > > > For IRQ and memory map (and probably DMAMUX), the approach with
> > > separate
> > > > > files may make sense but for peripheral definitions
> > > > > I don't see any benefit in duplicating files.
> > > > >
> > > > > On Fri, 8 Sep 2023 at 12:01,
> > > wrote:
> > > > >
> > > > > > You're right, but not entirely) For example, chips of different
> > > > subseries
> > > > > > have different interrupt vector tables. That is, the
> > stm32h7x3xx_irq.h
> > > > file
> > > > > > lists interrupt vectors for the RM0433, but not for the RM0455 or
> > > > > RM0468. Although
> > > > > > some chips from all these series have 7x3 in the name.
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > *From:* "raiden00pl" 
> > > > > > *To:* "undefined" 
> > > > > > *Sent:* Friday, September 8, 2023, 12:52
> > > > > > *Subject:* Re: Addition of STM32H7 MCU's
> > > > > >
> > > > > > From what I'm familiar with STM32H7, all chips use the same
> > registers
> > > > and
> > > > > > bit definitions.
> > > > > > Therefore, keeping definitions for different chips in different
> > files
> > > > > > doesn't make sense in my opinion.
> > > > > > The only problem is that some chips support some peripherals
> while
> > > > others
> > > > > > do not. But this can be
> > > > > > solved using definitions from Kconfig, where we define the
> > 

Re: TUN device (PPP) issue?

2024-01-17 Thread Kian Karas (KK)
Hi Zhe

I am working on tag nuttx-12.2.1.

Your referenced commit did indeed fix the issue.

My apologies for not trying on master. I mistakenly thought the error was in the
TUN device driver, which I noticed had not changed since nuttx-12.2.1.

Thanks a lot!
Kian

From: Zhe Weng
Sent: 17 January 2024 04:55
To: Kian Karas (KK) 
Cc: dev@nuttx.apache.org 
Subject: Re: TUN device (PPP) issue?


Hi Kian,


Which version of NuttX are you working on? It behaves like a problem I've met 
before. Do you have this commit in your code? If not, maybe you could have a 
try: 
https://github.com/apache/nuttx/commit/e2c9aa65883780747ca00625a1452dddc6f8a138


Best regards

Zhe



From: Kian Karas (KK) 
Sent: Tuesday, January 16, 2024 11:53:06 PM
To: dev@nuttx.apache.org
Subject: TUN device (PPP) issue?

Hi community

I am experiencing an issue with PPP/TUN and reception of packets. The network 
stack reports different decoding errors in the received packets e.g.:
[   24.56] [  WARN] ppp: ipv4_in: WARNING: IP packet shorter than length in 
IP header

I can reproduce the issue by sending a number of packets (from my PC over PPP 
to the TUN device in NuttX),  which are all larger than can fit into one IOB 
*and* which are ignored (e.g. unsupported protocol or IP destination) - i.e. 
*not* triggering a response / TX packet. I then send a correct ICMP echo 
request from my PC to NuttX, which causes the above error to be reported.

The following PC commands will trigger the error message. My PC has IP 
172.29.4.1 and the NuttX ppp interface has 172.29.4.2. Note the first command 
sends to the *wrong* IP address so that NuttX ignores the ICMP messages. The 
second command uses the IP of NuttX and should result in a response. I run the
test after a fresh boot and with no other network traffic to/from NuttX.

$ ping -I ppp0 -W 0.2 -i 0.2 -c 13 172.29.4.3 -s 156
$ ping -I ppp0 -W 0.2 -c 1 172.29.4.2 -s 0

If I skip the first command, ping works fine.

I think the issue is caused by the IOB management in the TUN device driver 
(drivers/net/tun.c). I am new to NuttX, so I don't quite understand the correct 
use of IOB, so I am just guessing here. I think that when a packet is received 
by tun_write() and too large to fit into a single IOB *and* the packet is 
ignored, the IOB chain "lingers" and is not freed. Subsequent packets received 
by tun_write() do not end up at the beginning of the first IOB, and the
IP/TCP/UDP header may then be split across an IOB boundary. The network stack
assumes the protocol headers are not split across IOB boundaries, so the 
network stack ends up reading outside the IOB io_data[] array boundaries 
resulting in undefined behavior.

With CONFIG_IOB_DEBUG enabled, notice how the "avail" value decreases for each
ignored packet until the final/correct ICMP request (at time 24.54) is 
copied to the second IOB in the chain.

[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=0 
len=184 next=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 182 
bytes new len=182
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 added to the 
chain
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=0 len=2 
next=0
[   10.08] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 2 bytes 
new len=2
[   10.08] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.08] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=168 
len=184 next=0x24002a50
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 168 
bytes new len=168
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=2 
len=16 next=0
[   10.28] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 16 bytes 
new len=16
[   10.28] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.28] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.46] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=154 
len=184 next=0x24002a50
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 154 
bytes new len=154
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=16 
len=30 next=0
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 30 bytes 
new len=30
[   10.48] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.49] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
...
[   12.50] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
of

TUN device (PPP) issue?

2024-01-16 Thread Kian Karas (KK)
Hi community

I am experiencing an issue with PPP/TUN and reception of packets. The network 
stack reports different decoding errors in the received packets e.g.:
[   24.56] [  WARN] ppp: ipv4_in: WARNING: IP packet shorter than length in 
IP header

I can reproduce the issue by sending a number of packets (from my PC over PPP 
to the TUN device in NuttX),  which are all larger than can fit into one IOB 
*and* which are ignored (e.g. unsupported protocol or IP destination) - i.e. 
*not* triggering a response / TX packet. I then send a correct ICMP echo 
request from my PC to NuttX, which causes the above error to be reported.

The following PC commands will trigger the error message. My PC has IP 
172.29.4.1 and the NuttX ppp interface has 172.29.4.2. Note the first command 
sends to the *wrong* IP address so that NuttX ignores the ICMP messages. The 
second command uses the IP of NuttX and should result in a response. I run the
test after a fresh boot and with no other network traffic to/from NuttX.

$ ping -I ppp0 -W 0.2 -i 0.2 -c 13 172.29.4.3 -s 156
$ ping -I ppp0 -W 0.2 -c 1 172.29.4.2 -s 0

If I skip the first command, ping works fine.

I think the issue is caused by the IOB management in the TUN device driver 
(drivers/net/tun.c). I am new to NuttX, so I don't quite understand the correct 
use of IOB, so I am just guessing here. I think that when a packet is received 
by tun_write() and too large to fit into a single IOB *and* the packet is 
ignored, the IOB chain "lingers" and is not freed. Subsequent packets received 
by tun_write() do not end up at the beginning of the first IOB, and the
IP/TCP/UDP header may then be split across an IOB boundary. The network stack
assumes the protocol headers are not split across IOB boundaries, so the 
network stack ends up reading outside the IOB io_data[] array boundaries 
resulting in undefined behavior.

With CONFIG_IOB_DEBUG enabled, notice how the "avail" value decreases for each
ignored packet until the final/correct ICMP request (at time 24.54) is 
copied to the second IOB in the chain.

[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=0 
len=184 next=0
[   10.06] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 182 
bytes new len=182
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 added to the 
chain
[   10.07] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=0 len=2 
next=0
[   10.08] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 2 bytes 
new len=2
[   10.08] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.08] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.26] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=168 
len=184 next=0x24002a50
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 168 
bytes new len=168
[   10.27] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=2 
len=16 next=0
[   10.28] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 16 bytes 
new len=16
[   10.28] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.28] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   10.46] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=154 
len=184 next=0x24002a50
[   10.47] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 154 
bytes new len=154
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=16 
len=30 next=0
[   10.48] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 30 bytes 
new len=30
[   10.48] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   10.49] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
...
[   12.50] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=184 
offset=0
[   12.51] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=14 
len=184 next=0x24002a50
[   12.51] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 14 bytes 
new len=14
[   12.52] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=156 
len=170 next=0
[   12.52] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 Copy 170 
bytes new len=170
[   12.52] [  INFO] ppp0: tun_net_receive_tun: IPv4 frame
[   12.53] [  INFO] ppp0: ipv4_in: WARNING: Not destined for us; not 
forwardable... Dropping!
[   24.54] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 len=28 
offset=0
[   24.54] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 avail=0 
len=28 next=0x24002a50
[   24.55] [  INFO] ppp0: iob_copyin_internal: iob=0x24002b20 Copy 0 bytes 
new len=0
[   24.55] [  INFO] ppp0: iob_copyin_internal: iob=0x24002a50 avail=170 

Re: pgBackRest on old installation

2023-11-20 Thread KK CHN
Thank you. It worked out well. But a basic doubt: is storing the DB
superuser password in .pgpass advisable? What other options do we have?
#su postgres
bash-4.2$ cd

bash-4.2$ cat .pgpass
*:*:*:postgres:your_password
bash-4.2$
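
As a general note, libpq-based tools (pgBackRest included) only honour
~/.pgpass when the file is readable by its owner alone, and the entry can be
scoped to the socket directory and port of the instance instead of wildcards,
as in the quoted advice below. A minimal sketch, run as the postgres OS user,
with the password as a placeholder:

# format: hostname:port:database:username:password
echo '/tmp:5444:*:postgres:your_password' > ~/.pgpass
chmod 0600 ~/.pgpass    # libpq ignores the file if it is more permissive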


On Mon, Nov 20, 2023 at 4:16 PM Achilleas Mantzios - cloud <
a.mantz...@cloud.gatewaynet.com> wrote:

>
> On 11/20/23 12:31, KK CHN wrote:
>
> list,
>
> I am trying pgBackRest on RHEL 7.6 with an old EDB 10 database cluster (a
> legacy application).
>
> I have installed pgbackrest through a package install on RHEL 7.6,
> but I am unable to get the basic stanza creation working; it throws an error.
>
>
> * /etc/pgbackrest.conf  as follows..*
> 
> [demo]
> pg1-path=/app/edb/as10/data
> pg1-port = 5444
> pg1-socket-path=/tmp
>
> [global]
>
> repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw
>
> repo1-cipher-type=aes-256-cbc
> repo1-path=/var/lib/pgbackrest
> repo1-retention-full=2
> backup-user=postgres
>
>
> [global:archive-push]
> compress-level=3
> #
>
>
>
> [root@dbs ~]# pgbackrest version
> pgBackRest 2.48
> [root@dbs ~]#
> #
>
> *Postgres conf as follows... *
>
> listen_addresses = '*'
> port = 5444
> unix_socket_directories = '/tmp'
>
> archive_command = 'pgbackrest --stanza=demo archive-push %p'
> archive_mode = on
> log_filename = 'postgresql.log'
> max_wal_senders = 3
> wal_level = replica
>
> #
>
>
> *The ERROR I am getting is as follows. What went wrong here?*
>
>
>  [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
> --log-level-console=info stanza-create
> 2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
> --exec-id=29527-bf5e2f80 --log-level-console=info
> --pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
> --repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
> --repo1-path=/var/lib/pgbackrest --stanza=demo
> WARN: unable to check pg1: [DbConnectError] unable to connect to
> 'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
> "/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
> ERROR: [056]: unable to find primary cluster - cannot proceed
>HINT: are all available clusters in recovery?
> 2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted
> with exception [056]
> [root@dbs ~]#
>
> It complains about the password. I followed the tutorial link below, but there
> is no mention of a password setting (where to supply the password, and in
> which parameter?): https://pgbackrest.org/user-guide-rhel.html
>
> This is about the user connecting to the DB; in general, pgBackRest has to
> connect like any other app/user. So, change your .pgpass to contain something
> like the line below at the top of the file:
>
> /tmp:5444:*:postgres:your_whatever_pgsql_password
>
> and retry
>
>
>
> Any hints welcome..  What am I missing here ??
>
> Best,
> Krishane
>
>
>
>
>
>
>
>


pgBackRest on old installation

2023-11-20 Thread KK CHN
list,

I am trying pgBackRest on RHEL 7.6 with an old EDB 10 database cluster (a
legacy application).

I have installed pgbackrest through a package install on RHEL 7.6,
but I am unable to get the basic stanza creation working; it throws an error.


* /etc/pgbackrest.conf  as follows..*

[demo]
pg1-path=/app/edb/as10/data
pg1-port = 5444
pg1-socket-path=/tmp

[global]
repo1-cipher-pass=sUAeceWoDffSz9Q/d8sWREHe+wte3uOO9lggn5/5mTkQEempvBxQk5UbxsrDzHbw

repo1-cipher-type=aes-256-cbc
repo1-path=/var/lib/pgbackrest
repo1-retention-full=2
backup-user=postgres


[global:archive-push]
compress-level=3
#



[root@dbs ~]# pgbackrest version
pgBackRest 2.48
[root@dbs ~]#
#

*Postgres conf as follows... *

listen_addresses = '*'
port = 5444
unix_socket_directories = '/tmp'

archive_command = 'pgbackrest --stanza=demo archive-push %p'
archive_mode = on
log_filename = 'postgresql.log'
max_wal_senders = 3
wal_level = replica

#


*The ERROR I am getting is as follows. What went wrong here?*


 [root@dbs ~]# sudo -u postgres pgbackrest --stanza=demo
--log-level-console=info stanza-create
2023-11-20 21:04:05.223 P00   INFO: stanza-create command begin 2.48:
--exec-id=29527-bf5e2f80 --log-level-console=info
--pg1-path=/app/edb/as10/data --pg1-port=5444 --pg1-socket-path=/tmp
--repo1-cipher-pass= --repo1-cipher-type=aes-256-cbc
--repo1-path=/var/lib/pgbackrest --stanza=demo
WARN: unable to check pg1: [DbConnectError] unable to connect to
'dbname='postgres' port=5444 host='/tmp'': connection to server on socket
"/tmp/.s.PGSQL.5444" failed: fe_sendauth: no password supplied
ERROR: [056]: unable to find primary cluster - cannot proceed
   HINT: are all available clusters in recovery?
2023-11-20 21:04:05.224 P00   INFO: stanza-create command end: aborted with
exception [056]
[root@dbs ~]#

It complains about the password. I followed the tutorial link below, but there
is no mention of a password setting (where to supply the password, and in
which parameter?): https://pgbackrest.org/user-guide-rhel.html


Any hints welcome..  What am I missing here ??

Best,
Krishane


capacity planning question

2023-10-30 Thread KK CHN
Hi,



I am in need of an infrastructure setup for a data analytics / live video
stream analytics application using big data and analytics technology.


The data is currently stored as structured data (no video streaming) in a
Postgres database. (It is an emergency call handling solution; the database
stores caller info (address, mobile number, location coordinates), emergency
category metadata and dispatch information regarding rescue vehicles, plus
rescue vehicle location updates (lat/long) every 30 seconds; all of this is
stored in the Postgres database.)



Input 1: I have to do analytics on these data (say 600 GB, the size it has
grown to from the initial setup over the last 2 years) and develop an
analytical application (using Python and data analytics libraries, displaying
the results and analytical predictions through a dashboard application).


Query 1. How much compute (GPU/CPU cores) and memory is required for this
analytical application? And is any specific type of storage (in-memory, like
Redis) required, etc., which I would have to provision for this kind of
application processing? Any hints are most welcome. If any more input is
required, let me know and I can provide it if available.


Input 2

In addition to the above, I have to do video analytics on footage from
body-worn cameras used by police personnel, drone surveillance videos from
emergency sites, and patrol vehicle live streams (from a mobile tablet device
over 5G) of incident locations for a few minutes (say 3 to 5 minutes of live
streaming for each incident). There are 50 drones, 500 emergency rescue
service vehicles and 300 body-worn camera personnel, and roughly 5000
emergency incidents per day, of which at least 1000 incidents need video
streaming for a duration of 4 to 5 minutes each.


Query 2. What kind of computing resources (GPUs/CPUs, and how many cores),
how much RAM, and which storage solutions do I have to deploy? In-memory
(Redis or similar) or any other specific data storage mechanisms? Any hints
are much appreciated.


Best,

Krishane


Re: pgBackRest for a 50 TB database

2023-10-03 Thread KK CHN
Greetings,
Happy to hear you successfully performed a pgBackRest backup of a 50 TB DB.
Out of curiosity, I would like to know your infrastructure settings.

1. What connectivity protocol and bandwidth did you use for your backend
storage? Is it iSCSI, FC, FCoE or GbE? What is the exact reason for the 26
hours it took in the best case? What factors may reduce the 26 hours to much
less, say 10 hours or so, for a 50 TB DB to reach the backup destination?
What should be fine-tuned or deployed for better performance?

2. It has been said that you are running the DB on a 2-socket, 18-core
processor = 36 physical cores. Is it dedicated server hardware, entirely
dedicated to the 50 TB database alone?
The reason I ask is that nowadays we mostly run DB servers on VMs in
virtualized environments. So I would like to know whether all 36 physical
cores and the associated RAM are utilized by your 50 TB database server, or
whether there are vacant CPU cores / free RAM on those server machines.

3. What kind of connectivity/bandwidth did you establish between the DB
server and the storage backend? (I want to know the server NIC card details,
the connectivity channel protocol/bandwidth, and the connecting switch spec
from the DB server to the storage backend (NAS in this case, right?).)

Could you share the recommendations / details from your case? I also need to
perform such a pgBackRest trial from a production DB to a suitable storage
device (most likely DELL Unity unified storage).

Any inputs are most welcome.

Thanks,
Krishane

On Tue, Oct 3, 2023 at 12:14 PM Abhishek Bhola <
abhishek.bh...@japannext.co.jp> wrote:

> Hello,
>
> As said above, I tested pgBackRest on my bigger DB and here are the
> results.
> Server on which this is running has the following config:
> Architecture:  x86_64
> CPU op-mode(s):32-bit, 64-bit
> Byte Order:Little Endian
> CPU(s):36
> On-line CPU(s) list:   0-35
> Thread(s) per core:1
> Core(s) per socket:18
> Socket(s): 2
> NUMA node(s):  2
>
> Data folder size: 52 TB (has some duplicate files since it is restored
> from tapes)
> Backup is being written on to DELL Storage, mounted on the server.
>
> pgbackrest.conf with following options enabled
> repo1-block=y
> repo1-bundle=y
> start-fast=y
>
>
> 1. *Using process-max: 30, Time taken: ~26 hours*
> full backup: 20230926-092555F
> timestamp start/stop: 2023-09-26 09:25:55+09 / 2023-09-27
> 11:07:18+09
> wal start/stop: 00010001AC0E0044 /
> 00010001AC0E0044
> database size: 38248.9GB, database backup size: 38248.9GB
> repo1: backup size: 6222.0GB
>
> 2. *Using process-max: 10, Time taken: ~37 hours*
>  full backup: 20230930-190002F
> timestamp start/stop: 2023-09-30 19:00:02+09 / 2023-10-02
> 08:01:20+09
> wal start/stop: 00010001AC0E004E /
> 00010001AC0E004E
> database size: 38248.9GB, database backup size: 38248.9GB
> repo1: backup size: 6222.0GB
>
> Hope it helps someone to use these numbers as some reference.
>
> Thanks
>
>
> On Mon, Aug 28, 2023 at 12:30 AM Abhishek Bhola <
> abhishek.bh...@japannext.co.jp> wrote:
>
>> Hi Stephen
>>
>> Thank you for the prompt response.
>> Hearing it from you makes me more confident about rolling it to PROD.
>> I will have a discussion with the network team once about and hear what
>> they have to say and make an estimate accordingly.
>>
>> If you happen to know anyone using it with that size and having published
>> their numbers, that would be great, but if not, I will post them once I set
>> it up.
>>
>> Thanks for your help.
>>
>> Cheers,
>> Abhishek
>>
>> On Mon, Aug 28, 2023 at 12:22 AM Stephen Frost 
>> wrote:
>>
>>> Greetings,
>>>
>>> * Abhishek Bhola (abhishek.bh...@japannext.co.jp) wrote:
>>> > I am trying to use pgBackRest for all my Postgres servers. I have
>>> tested it
>>> > on a sample database and it works fine. But my concern is for some of
>>> the
>>> > bigger DB clusters, the largest one being 50TB and growing by about
>>> > 200-300GB a day.
>>>
>>> Glad pgBackRest has been working well for you.
>>>
>>> > I plan to mount NAS storage on my DB server to store my backup. The
>>> server
>>> > with 50 TB data is using DELL Storage underneath to store this data
>>> and has
>>> > 36 18-core CPUs.
>>>
>>> How much free CPU capacity does the system have?
>>>
>>> > As I understand, pgBackRest recommends having 2 full backups and then
>>> > having incremental or differential backups as per requirement. Does
>>> anyone
>>> > have any reference numbers on how much time a backup for such a DB
>>> would
>>> > usually take, just for reference. If I take a full backup every Sunday
>>> and
>>> > then incremental backups for the rest of the week, I believe the
>>> > incremental backups should not be a problem, but the full backup every
>>> > Sunday might not finish in time.
>>>
>>> pgBackRest scales extremely well- what's going to matter here is how
>>> much you can 

Re: [webkit-gtk] Fix CVE-2023-32435 for webkitgtk 2.38.6

2023-09-06 Thread 不会弹吉他的KK
On Wed, Sep 6, 2023 at 9:46 PM Michael Catanzaro 
wrote:

> On Wed, Sep 6 2023 at 04:23:17 PM +0800, 不会弹吉他的KK
>  wrote:
> > My question is
> > 1. Is webkitgtk 2.38.6 vulnerable to CVE-2023-32435?
>
> No clue, sorry.
>
> > 2. If YES, how should the patches for the 2 new files be handled? If we just
> > ignore them and only patch the file
> > Source/JavaScriptCore/wasm/WasmSectionParser.cpp, could
> > CVE-2023-32435 be fixed for 2.38.6, please?
>
> Patching just that one file is what I would do if tasked with
> backporting this fix.

OK.
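
For the record, one way such a single-file backport is often carried in a
Yocto recipe is sketched below; the paths and recipe variable are examples,
and the extracted patch should still be build-tested against 2.38.6:

# in a WebKit checkout that contains the upstream fix
git show 50c7aaec2f53ab3b960f1b299aad5009df6f1967 \
    -- Source/JavaScriptCore/wasm/WasmSectionParser.cpp > CVE-2023-32435.patch
# then ship the file with the webkitgtk recipe, e.g.
# SRC_URI += "file://CVE-2023-32435.patch"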

> That said, keep in mind that only 10-20% of our
> security vulnerabilities receive CVEs, so just patching CVEs is not
> sufficient to provide a secure version of WebKitGTK. The 2.38 branch is
> no longer secure and you should try upgrading to 2.42. (I would skip
> 2.40 at this point, since that branch will end next week when 2.42.0 is
> released.)
>
For the Yocto project, which I am working on, packages (recipes) can NOT be
updated with a major version upgrade on Yocto released products/branches. So we
still have to fix such CVEs. But for the master branch, webkitgtk will be
upgraded as soon as it is released.

Thanks a lot.
Kai

>
> Michael
>
>
>


[webkit-gtk] Fix CVE-2023-32435 for webkitgtk 2.38.6

2023-09-06 Thread 不会弹吉他的KK
Hi All,
CVE-2023-32435 has been fixed in webkitgtk 2.40.0. According to
https://bugs.webkit.org/show_bug.cgi?id=251890, the commit is at
https://github.com/WebKit/WebKit/commit/50c7aaec2f53ab3b960f1b299aad5009df6f1967
.
It patches 3 files, but 2 of them are created/added in 2.40.0 and do NOT
exist in 2.38.6:
* Source/JavaScriptCore/wasm/WasmAirIRGenerator64.cpp
* Source/JavaScriptCore/wasm/WasmAirIRGeneratorBase.h

My question is
1. Is webkitgtk 2.38.6 vulnerable to CVE-2023-32435?
2. If YES, how should the patches for the 2 new files be handled? If we just
ignore them and only patch the file Source/JavaScriptCore/wasm/WasmSectionParser.cpp,
could CVE-2023-32435 be fixed for 2.38.6, please?

Regards,
Kai


Re: [webkit-gtk] Webkit bugzilla ID access

2023-08-31 Thread 不会弹吉他的KK
Hi Michael,

Thanks a lot!.

Kai

On Wed, Aug 30, 2023 at 11:42 PM Michael Catanzaro 
wrote:

>
> Hi, see: https://commits.webkit.org/260455@main
>
>
>


Re: [webkit-gtk] Webkit bugzilla ID access

2023-08-29 Thread 不会弹吉他的KK
Hi Michael,

Would you like to share the fix commit of CVE-2023-23529, please? It is
handled by https://bugs.webkit.org/show_bug.cgi?id=251944, which is still
not public.

Sorry for the duplicate email; the previous one was rejected by the mailing list.

Thanks,
Kai

On Wed, May 31, 2023 at 10:17 PM Michael Catanzaro 
wrote:

>
> Hi, the bugs are private. I can give you the mappings between bug ID
> and fix commit, though:
>
> 248266 - https://commits.webkit.org/258113@main
> 245521 - https://commits.webkit.org/256215@main
> 245466 - https://commits.webkit.org/255368@main
> 247420 - https://commits.webkit.org/256519@main
> 246669 - https://commits.webkit.org/255960@main
> 248615 - https://commits.webkit.org/262352@main
> 250837 - https://commits.webkit.org/260006@main
>
> That said, I don't generally recommend backporting fixes yourself
> because (a) it can become pretty difficult as time goes on, and (b)
> only a tiny fraction of security fixes receive CVE identifiers (maybe
> around 5%). So I highly recommend upgrading to WebKitGTK 2.40.2.
> WebKitGTK maintains API and ABI stability to the greatest extent
> possible in order to encourage safe updates.
>
> Michael
>
>


Re: DB Server slow down & hang during Peak hours of Usage

2023-08-08 Thread KK CHN
On Tue, Aug 8, 2023 at 5:49 PM Marc Millas  wrote:

> also,
> checkpoint setup are all default values
>
> you may try to
> checkpoint_completion_target = 0.9
> checkpoint_timeout = 15min
> max_wal_size = 5GB
>
> and, as said in the previous mail, check the checkpoint logs
>
> Also, all vacuum and autovacuum values are defaults
> so, as autovacuum_work_mem = -1
> the autovacuum processes will use the 4 GB setuped by maintenance_work_mem
> = 4096MB
> as there are 3 launched at the same time, its 12 GB "eaten"
> which doesn't look like a good idea, so set
> autovacuum_work_mem = 128MB
>
> also pls read the autovacuum doc for your version (which is ?) here for
> postgres 12:
> https://www.postgresql.org/docs/12/runtime-config-autovacuum.html
>
>
>
> Marc MILLAS
> Senior Architect
> +33607850334
> www.mokadb.com
>
>
>
> On Tue, Aug 8, 2023 at 1:59 PM Marc Millas  wrote:
>
>> Hello,
>> in the postgresql.conf joined, 2 things (at least) look strange:
>> 1) the values for background writer are the default values, fit for a
>> server with a limited writes throughput.
>> you may want to increase those, like:
>> bgwriter_delay = 50ms
>> bgwriter_lru_maxpages = 400
>> bgwriter_lru_multiplier = 4.0
>> and check the checkpoint log to see if there are still backend processes
>> writes.
>>
>> 2) work_mem is set to 2 GB.
>> so, if 50 simultaneous requests use at least one buffer for sorting,
>> joining, ..., you will consume 100 GB of RAM
>> this value seems huge for the kind of config/usage you describe.
>> You may try to set work_mem to 100 MB and check what's happening.
>>
>> Also check the logs, postgres tells his life there...
>>
>>
>>
>>
>>
>> Marc MILLAS
>> Senior Architect
>> +33607850334
>> www.mokadb.com
>>
>>
>>
Thank you all for your time and the valuable inputs to fix the issue. Let
me tune the conf parameters as advised, and I will get back with the results
and log outputs.

Krishane
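
For reference, a minimal way to surface the checkpoint information suggested
above (a sketch, assuming superuser access via psql; adjust the log path to
whatever log_directory/log_filename are set to on this server):

psql -U postgres -c "ALTER SYSTEM SET log_checkpoints = on;"
psql -U postgres -c "SELECT pg_reload_conf();"
# then look for "checkpoint complete" lines in the server log, e.g.:
grep -i checkpoint "$PGDATA"/log/*.log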

>
>> On Mon, Aug 7, 2023 at 3:36 PM KK CHN  wrote:
>>
>>> List ,
>>>
>>> *Description:*
>>>
>>> Maintaining a DB Server Postgres and with a lot of read writes to this
>>> Server( virtual machine running on  ESXi 7 with CentOS 7) .
>>>
>>> ( I am not sure how to get the read / write counts or required IOPS or
>>> any other parameters for you. If  you point our  I can execute those
>>> commands and get the data. )
>>>
>>> Peak hours  say 19:00 Hrs to 21:00 hrs it hangs ( The application is an
>>> Emergency call response system  writing many  Emergency Response vehicles
>>> locations coordinates to the DB every 30 Seconds and every emergency call
>>> metadata (username, phone number, location info and address of the caller
>>> to the DB for each call)
>>>
>>> During these hours  the system hangs and the  Application ( which shows
>>> the location of the vehicles on a  GIS map hangs ) and the CAD machines
>>> which connects to the system hangs as those machines can't  connect to the
>>> DB and get data for displaying the caller information to the call taking
>>> persons working on them. )
>>>
>>> *Issue : *
>>> How to trace out what makes this DB  hangs and make it slow  and how to
>>> fix it..
>>>
>>> *Resource poured on the system :*
>>>
>>> *64 vCPUs  allocate ( Out of a host machine comprised of 2 processor
>>> slots of 20 cores each with Hyper Threading, intel xeon 2nd Gen, CPU usage
>>> show 50 % in vCentre Console), and RAM 64 GB allocated ( buy usage always
>>> showing around 33 GB only ) *
>>>
>>> *Query :*
>>>
>>> How to rectify the issues that makes the DB server underperforming and
>>> find a permanent fix for this slow down issue*. *
>>>
>>> *Attached the  Postgres.conf file here for reference .*
>>>
>>> *Any more information required I can share for analysis to fix the
>>> issue. *
>>>
>>>
>>> *Krishane *
>>>
>>


Re: My 1st TABLESPACE

2023-08-08 Thread KK CHN
On Mon, Aug 7, 2023 at 5:47 PM Amn Ojee Uw  wrote:

> Thanks Negora.
>
> Makes sense, I will check it out.
>
> On 8/7/23 1:48 a.m., negora wrote:
>
> Hi:
>
> Although the "postgres" user owns the "data" directory, Has he access to
> the whole branch of directories? Maybe the problem is that he can't reach
> the "data" directory.
>
> Regards.
>
>
> On 07/08/2023 07:43, Amn Ojee Uw wrote:
>
> I'd like to create a TABLESPACE, so, following this web page
> ,  I
> have done the following :
>
> *mkdir
> /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data*
>
> *sudo chown postgres:postgres
> /home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data*
>
> *sudo -u postgres psql*
>
> *\du*
> * arbolone| Cannot login  | {}*
> * chispa
> || {prosafe}*
> * workerbee | Superuser, Create DB| {arbolone}*
> * jme
> || {arbolone}*
> * postgres| Superuser, Create role, Create DB, Replication, Bypass RLS
> | {}*
> * prosafe  | Cannot login  | {}*
>
> *CREATE TABLESPACE jmetablespace OWNER jme LOCATION
> '/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data';*
>
>
Here the owner is jme, and the data dir you created must have owner jme.

> The *CREATE **TABLESPACE* schema throws this error message :
>
> *ERROR:  could not set permissions on directory
> "/home/my_debian_account/Documents/NetbeansWorkSpace/JavaSE/Jme/database/postgresql/data":
> Permission denied*
>
> I have followed the web page to the best of my abilities, and AFAIK, the
> postgres user owns the folder '*data*'.
>
> I know that something is missing, where did I go wrong and how can I
> resolve this issue?
>
>
> Thanks in advance.
>
>
>


DB Server slow down & hang during Peak hours of Usage

2023-08-07 Thread KK CHN
List ,

*Description:*

I am maintaining a Postgres DB server with a lot of reads and writes to this
server (a virtual machine running on ESXi 7 with CentOS 7).

(I am not sure how to get the read/write counts, required IOPS or any other
parameters for you. If you point out the commands, I can execute them and get
the data.)

During peak hours, say 19:00 to 21:00, it hangs. (The application is an
emergency call response system writing many emergency response vehicles'
location coordinates to the DB every 30 seconds, plus metadata for every
emergency call (username, phone number, location info and address of the
caller) to the DB for each call.)

During these hours the system hangs: the application (which shows the
location of the vehicles on a GIS map) hangs, and the CAD machines which
connect to the system hang, as those machines can't connect to the DB and
get data for displaying the caller information to the call-taking persons
working on them.

*Issue : *
How do I trace out what makes this DB hang and become slow, and how do I fix
it?

*Resources poured on the system :*

*64 vCPUs allocated (out of a host machine comprising 2 processor sockets of
20 cores each with Hyper-Threading, Intel Xeon 2nd Gen; CPU usage shows 50 %
in the vCenter console), and 64 GB RAM allocated (but usage always shows
around 33 GB only).*

*Query :*

How do we rectify the issues that make the DB server underperform, and find
a permanent fix for this slowdown issue*?*

*The postgresql.conf file is attached here for reference.*

*If any more information is required, I can share it for analysis to fix the
issue.*


*Krishane *


postgresql(1).conf
Description: Binary data


Re: Backup Copy of a Production server.

2023-08-07 Thread KK CHN
On Mon, Aug 7, 2023 at 10:49 AM Ron  wrote:

> On 8/7/23 00:02, KK CHN wrote:
>
> List,
>
> I am in need to copy a production PostgreSQL server  data( 1 TB)  to  an
> external storage( Say USB Hard Drive) and need to set up a backup server
> with this data dir.
>
> What is the trivial method to achieve this ??
>
> 1. Is Sqldump an option at a production server ?? (  Will this affect the
> server performance  and possible slowdown of the production server ? This
> server has a high IOPS). This much size 1.2 TB will the Sqldump support ?
> Any bottlenecks ?
>
>
> Whether or not there will be bottlenecks depends on how busy (CPU and disk
> load) the current server is.
>
>
> 2. Is copying the data directory from the production server to an external
> storage and replace the data dir  at a  backup server with same postgres
> version and replace it's data directory with this data dir copy is a viable
> option ?
>
>
> # cp  -r   ./data  /media/mydb_backup  ( Does this affect the Production
> database server performance ??)   due to the copy command overhead ?
>
>
> OR  doing a WAL Replication Configuration to a standby is the right method
> to achieve this ??
>
>
> But you say you can't establish a network connection outside the DC.  (I
> can't for a remote machine, but I can do WAL replication to another
> host on the same network inside the DC. That way, if I do a sqldump or a
> copy of the data dir of the standby server, it won't affect the production
> server. Does this sound good?)
>
>
>  This is to take out the database backup outside the Datacenter and our DC
> policy won't allow us to establish a network connection outside the DC to a
> remote location for WAL replication .
>
>
> If you're unsure of what Linux distro & version and Postgresql version
> that you'll be restoring the database to, then the solution is:
> DB=the_database_you_want_to_backup
> THREADS=
> cd $PGDATA
> cp -v pg_hba.conf postgresql.conf /media/mydb_backup
> cd /media/mydb_backup
> pg_dumpall --globals-only > globals.sql
>

What is the relevance of --globals-only, and what will this produce: ${DB}.log,
or is it ${DB}.sql?

pg_dump --format=d --verbose --jobs=$THREADS $DB &> ${DB}.log   // the .log part:
I couldn't get an idea of what it means
>
> If you're 100% positive that the system you might someday restore to is
> *exactly* the same distro & version, and Postgresql major version, then
> I'd use PgBackRest.
>
> --
> Born in Arizona, moved to Babylonia.
>


Backup Copy of a Production server.

2023-08-06 Thread KK CHN
List,

I need to copy a production PostgreSQL server's data (1 TB) to external
storage (say, a USB hard drive) and need to set up a backup server with this
data dir.

What is the simplest method to achieve this?

1. Is a SQL dump an option on a production server? (Will this affect the
server performance and possibly slow down the production server? This server
has high IOPS.) Will a SQL dump handle this size, 1.2 TB? Any bottlenecks?

2. Is copying the data directory from the production server to external
storage, and then replacing the data directory of a backup server running the
same Postgres version with this copy, a viable option?


# cp -r ./data /media/mydb_backup   (Does this affect the production
database server's performance due to the copy command overhead?)


OR is configuring WAL replication to a standby the right method to achieve
this?

 This is to take the database backup outside the datacenter, and our DC
policy won't allow us to establish a network connection from the DC to a
remote location for WAL replication.
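
For what it's worth, the usual way to get a consistent file-level copy while
the server keeps running is pg_basebackup rather than a plain cp of a live
data directory (a cp of a running cluster is generally not consistent). A
minimal sketch; the role name and mount point are placeholders, and it needs
a replication-capable role plus a matching pg_hba.conf entry:

pg_basebackup -h localhost -p 5432 -U replicator \
    -D /media/mydb_backup/data --wal-method=stream --checkpoint=fast -P

The resulting directory can then be used on a backup server running the same
major version.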

Any hints most welcome ..

Thank you
Krishane


EDB to Postgres Migration

2023-07-13 Thread KK CHN
List,

Recently I have come to manage a few EDB instances running on the EDB 10
version.

I am looking for an option for migrating all these EDB instances to the
Postgres Community edition.

1. What major steps / actions are involved (in a bird's-eye view) for a
successful migration to the Postgres community edition (from EDB 10 to
Postgres 14)?

2. What major challenges are involved?  (or any hurdles ?)


Please enlighten me with your experience..

Any reference  links most welcome ..

PS: The EDB instances are live and in production. I can get a downtime of 5
to 15 minutes maximum. Or is live porting and upgrading to Postgres 14
possible with minimal downtime?

Request your  guidance,
Krishane.


BI Reports and Postgres

2023-07-11 Thread KK CHN
List,
1. For generating BI reports, which databases are more suitable (RDBMS like
Postgres, or NoSQL like MongoDB)? Which is best, and why?

2. In which scenarios and application contexts are NoSQL DBs like MongoDB et
al. useful? Or are NoSQL DBs losing their initial hype?

3. Could someone point out which BI report tools (open-source / free software
tools) are available for generating BI reports from Postgres? What does the
community use?

4. For generating BI reports, does it make sense to keep your data in an
RDBMS, or do we need to port the data to MongoDB or similar NoSQL DBs?

Any hints are much appreciated.
Krishane


PostgreSQL Server Hang

2023-06-21 Thread KK CHN
*Description of System: *
1. We are running a Postgres server (version 12, on CentOS 6) for an
emergency call attending and vehicle tracking system; the vehicles are fitted
with mobile devices running navigation apps for the emergency service.

2. Every 30 seconds the vehicles send location coordinates (lat/long), which
get stored in the DB server at the emergency call center cum control room.

*Issue: *
We are facing an issue where the database hangs and becomes unresponsive for
applications that try to connect to the DB. So eventually the applications
are also crawling on their knees.


*Mitigation done so far : *What mitigation we have done is increasing the
resources,CPU(vCPUs)  from  32 to 64  ( Not sure is this the right
approach / may be too dumb idea, but the hanging issue for the time being
rectified. )..

RAM 32 GB increased to 48 GB,  but  it observed that RAM usage was always
below 32GB only ( So foolishly increased the RAM !!!)

*Question: *
How to optimize and fine tune this database performance issue ?  Definitely
pouring the resources like the above is not a solution.

What to check the root cause and find for the performance bottle neck
reasons  ?
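As a starting point for root-cause hunting, a minimal diagnostic sketch,
assuming psycopg2 is installed and the DSN is adjusted to your environment;
pg_stat_activity is available on PostgreSQL 12.

#!/usr/bin/env python3
# Hedged sketch: list the busiest / longest-running sessions on a PG 12 server.
import psycopg2

# Assumption: adjust the DSN to your environment.
conn = psycopg2.connect("host=localhost dbname=postgres user=postgres")

query = """
SELECT pid,
       state,
       wait_event_type,
       wait_event,
       now() - query_start AS runtime,
       left(query, 80)     AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST
LIMIT 20;
"""

with conn, conn.cursor() as cur:
    cur.execute(query)
    for row in cur.fetchall():
        print(row)

Sessions piling up behind Lock waits, or dozens of copies of the same slow
statement, usually point at the real bottleneck much faster than adding CPUs
or RAM.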

Thank you,
Krishane


*Additional inputs, if required:*

The DB machine runs on CentOS 6, with only a single database instance,
running as a virtual machine.

The database server also stores call-center data: call arrival and dispatch
timestamps, short messages to and from around 300 desktop application
operators, and data from the mobile tablets fitted in vehicles running the
VTS app. The vehicle locations are stored into the database continuously,
every 30 seconds.

Voice recordings from emergency callers (about 3 MB each) are not stored in
the database; they are kept as files in an NFS-mounted folder, and the
database stores only references to those files for future use (call volume
is around 1 lakh, i.e. 100,000, calls per day). Only metadata related to the
calls is stored in the DB: caller name, caller number, caller lat/long, and
a short description of the caller's situation of fewer than 200 characters,
up to 3 messages per call.

This database is also used for daily reports on the actions taken by call
takers/dispatchers, vehicle tracking reports, etc. Around 2000 vehicles in
the fleet are fitted with mobile tablets running the emergency navigation
app.

The database grows by roughly 1 GB per day.





Re: Doris build error

2023-05-05 Thread zy-kk
If your compiled code is the Doris 1.2 branch, please use the 1.2 version of 
the docker development image: apache/doris:build-env-for-1.2

> On 2023-05-05 at 19:25, 郑高峰 wrote:
> 
> Environment: CentOS,
> build environment: latest image pulled via Docker,
> source code: apache-doris-1.2.4.1-src
> 
> The error message is:
> /root/apache-doris-1.2.4.1-src/be/src/vec/core/field.h: In member function 
> 'doris::Status doris::ScrollParser::fill_columns(const 
> doris::TupleDescriptor*, 
> std::vector  >&, doris::MemPool*, bool*, const std::map std::__cxx11::basic_string >&)':
> /root/apache-doris-1.2.4.1-src/be/src/vec/core/field.h:633:9: error: 'val' 
> may be used uninitialized in this function [-Werror=maybe-uninitialized]
> 633 | new () StorageType(std::forward(x));
> | ^~
> /root/apache-doris-1.2.4.1-src/be/src/exec/es/es_scroll_parser.cpp:627:30: 
> note: 'val' was declared here
> 627 | __int128 val;
> 
> 
> 
>   
> 郑高峰
> zp...@163.com
>  
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@doris.apache.org
> For additional commands, e-mail: dev-h...@doris.apache.org



Re: [webkit-gtk] How to fix CVEs of webkitgtk 2.36.x

2023-03-27 Thread 不会弹吉他的KK
On Wed, Mar 22, 2023 at 7:01 PM Michael Catanzaro 
wrote:

> On Wed, Mar 22 2023 at 11:26:56 AM +0200, Adrian Perez de Castro
>  wrote:
> > Recently advisories published by Apple include the Bugzilla issue
> > numbers
> > (e.g. [1]), so with some work you can find out which commits
> > correspond to
> > the fixes.
>
> It finally occurs to me that since Apple now publishes the bug
> information, we could start publishing revision information. We'd want
> to fix [1] first.
>

Hi  Adrián and Michael,

Thanks. I'll try to do more searching for the existing CVEs.


> > WebKitGTK 2.38.x is backwards compatible with 2.36.x, you can safely
> > update
> > without needing to change applications. In general, we always keep
> > the API and
> > ABI backwards compatible.
>
> For avoidance of doubt, WebKitGTK 2.40.x is backwards-compatible as
> well and that will remain true indefinitely, as long as you continue to
> build the same API version [2]. Adrian might be planning one last
> 2.38.x release, but it's really time to move on to 2.40.
>
> On rare occasions, an upgrade might affect the behavior of particular
> API functionality within the same API version, but this is unusual and
> is avoided whenever possible. I don't think any APIs broke between 2.36
> and 2.40, so that shouldn't be a problem for you this time. The goal is
> for upgrades to be as safe as possible.
>

Great. Your comments will be strong supporting evidence for upgrading
webkitgtk in the Yocto LTS release.

Thanks a lot.
Kai


> Michael
>
> [1] https://bugs.webkit.org/show_bug.cgi?id=249672
> [2]
>
> https://blogs.gnome.org/mcatanzaro/2023/03/21/webkitgtk-api-for-gtk-4-is-now-stable/
>
>
>
___
webkit-gtk mailing list
webkit-gtk@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-gtk


[webkit-gtk] How to fix CVEs of webkitgtk 2.36.x

2023-03-21 Thread 不会弹吉他的KK
Hi All,

I am working on the Yocto project. In the last LTS Yocto release the version
of webkitgtk is 2.36.8, and there are more than 15 CVEs against 2.36.8 so
far. From the git log and the "WebKitGTK and WPE WebKit Security Advisory"
pages I could only find out which CVE has been fixed in which version of
webkitgtk, not which commit(s) fixed it. Is there somewhere, or some web
page, where I can get the specific fix/patch for a given CVE?

My second question: is webkitgtk 2.38.x backward compatible with 2.36.8? I
compared the header files between 2.36.8 and 2.38.4, and it seems no
functions were deleted and no interfaces of existing functions changed; only
some functions are marked deprecated and some new functions were added. Does
that mean upgrading webkitgtk from 2.36.8 to 2.38.4 will not break
applications that depend on it?

Thanks a lot.
Kai
___
webkit-gtk mailing list
webkit-gtk@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-gtk


Re: [vpp-dev] Help:VPP-DPDK #acl_plugin #dpdk

2023-02-07 Thread kk
I read the official examples provided by DPDK. The two functions
rte_ring_create and rte_mempool_create should indeed be called after
rte_eal_init(). What is strange is that I have no problem calling these two
functions from within the vpp dpdk plugin itself.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22565): https://lists.fd.io/g/vpp-dev/message/22565
Mute This Topic: https://lists.fd.io/mt/96804295/21656
Mute #acl_plugin:https://lists.fd.io/g/vpp-dev/mutehashtag/acl_plugin
Mute #dpdk:https://lists.fd.io/g/vpp-dev/mutehashtag/dpdk
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



[vpp-dev] Help:VPP-DPDK #acl_plugin #dpdk

2023-02-07 Thread kk
Hello everyone. I wrote a VPP plugin myself and called the DPDK functions
rte_ring_create and rte_mempool_create from it, but then it prints:
"MEMPOOL: Cannot allocate tailq entry!
Problem getting send ring
RING: Cannot reserve memory for tailq
RING: Cannot reserve memory for tailq"
Does anyone know what's going on?
I also found that I have no problem calling these functions from within the
vpp-dpdk plugin itself.

-=-=-=-=-=-=-=-=-=-=-=-
Links: You receive all messages sent to this group.
View/Reply Online (#22557): https://lists.fd.io/g/vpp-dev/message/22557
Mute This Topic: https://lists.fd.io/mt/96804295/21656
Mute #acl_plugin:https://lists.fd.io/g/vpp-dev/mutehashtag/acl_plugin
Mute #dpdk:https://lists.fd.io/g/vpp-dev/mutehashtag/dpdk
Group Owner: vpp-dev+ow...@lists.fd.io
Unsubscribe: https://lists.fd.io/g/vpp-dev/leave/1480452/21656/631435203/xyzzy 
[arch...@mail-archive.com]
-=-=-=-=-=-=-=-=-=-=-=-



Re: NEO6 GPS with Py PICO with micropython

2022-11-30 Thread KK CHN
List,

I just commented out the extra gpsModule.readline() call at the top of the
while loop (see the code at
https://microcontrollerslab.com/neo-6m-gps-module-raspberry-pi-pico-micropython/
):

while True:
    # gpsModule.readline()   <- this line commented out, and the "GPS data
    #                           not found" message disappeared
    buff = str(gpsModule.readline())
    parts = buff.split(',')

The "GPS data not found" error, which used to appear intermittently in the
Python console (printing "GPS data not found" for, say, 7 to 8 seconds at a
time), has now disappeared.

Any thoughts? Why did commenting out that line make the "GPS data not found"
output vanish? (A small parsing sketch follows, along the lines rbowman
suggests below.)

Krishane

On Wed, Nov 30, 2022 at 3:58 AM rbowman  wrote:

> On Tue, 29 Nov 2022 17:23:31 +0530, KK CHN wrote:
>
>
> > When I ran the program I am able to see the output of  latitude and
> > longitude in the console of thony IDE.  But  between certain intervals
> > of a few seconds  I am getting the latitude and longitude data ( its
> > printing GPS data not found ?? ) in the python console.
>
> I would guess the 8 seconds in
>
> timeout = time.time() + 8
>
> is too short. Most GPS receivers repeat a sequence on NMEA sentences and
> the code is specifically looking for $GPGGA. Add
>
> print(buff)
>
> to see the sentences being received. I use the $GPRMC since I'm interested
> in the position, speed, and heading. It's a different format but if you
> only want lat/lon you could decode it in a similar fashion as the $GPGGA.
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


NEO6 GPS with Py PICO with micropython

2022-11-29 Thread KK CHN
List,
I am following this tutorial to get latitude and longitude data from a NEO-6
GPS module connected to a Raspberry Pi Pico, using the code specified here:
https://microcontrollerslab.com/neo-6m-gps-module-raspberry-pi-pico-micropython/

I installed the Thonny IDE on my desktop (Windows PC) and ran the code after
connecting all the devices, with the Pico attached to the PC by USB cable.

When I run the program I can see latitude and longitude printed in the
Thonny console. But at certain intervals of a few seconds, instead of the
latitude and longitude, it prints "GPS data not found" in the Python
console.

The satellite count from the $GPGGA output shows 03, and the "GPS data not
found" message keeps repeating at random intervals of a few seconds. Any
hints as to why it is (randomly) missing the GPS data?

PS: The GPS device is placed outside my window and connected to the PC with
a USB cable from the Pico module. The NEO-6 GPS module's red LED keeps
blinking even while the "GPS data not found" messages appear in the Python
console.

Any hints are most welcome.

Yours,
Krishane
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to elegantly handle large amounts of nested if-else logic with Flink SQL

2022-11-28 Thread macia kk
I would go with a UDF plus a config file: put the config file on HDFS and have the UDF read it. Whenever the config file on HDFS is updated, just restart the job.
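A language-agnostic sketch of that idea (plain Python rather than Flink's
UDF API; the JSON rule format and field names are assumptions): the function
evaluates whatever rules the config currently holds, so changing the nested
if-else only means updating the rule file.

# Hedged sketch of "UDF + config file": the nested if-else logic lives in a
# rule file, not in the job code. The JSON layout below is an assumption.
import json

RULES = json.loads("""
[
  {"when": {"channel": "app",  "amount_gt": 1000}, "set": "HIGH"},
  {"when": {"channel": "app"},                     "set": "NORMAL"},
  {"when": {},                                     "set": "UNKNOWN"}
]
""")

def classify(row):
    """Return the label of the first rule whose conditions all match."""
    for rule in RULES:
        cond = rule["when"]
        if "channel" in cond and row.get("channel") != cond["channel"]:
            continue
        if "amount_gt" in cond and not row.get("amount", 0) > cond["amount_gt"]:
            continue
        return rule["set"]
    return "UNKNOWN"

print(classify({"channel": "app", "amount": 1500}))   # -> HIGH

In the real job this function body would sit inside a scalar UDF, and RULES
would be loaded from the HDFS path when the job starts.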

casel.chen wrote on Thu, 24 Nov 2022 at 12:01:

> I have a Flink SQL job that needs to set one field's value based on
> whether various other fields satisfy different conditions, plus some
> nested if-else logic. This logic is not fixed; the business side adjusts
> it every so often. How can I handle nested if-else logic elegantly with
> Flink SQL? I have thought of using the Drools rule engine invoked through
> a UDF, but is there a better way?
>
>


Re: Can tumble windows cause backpressure?

2022-10-20 Thread macia kk
Hi yidan,

What I mean is: suppose the upstream is processing data during minutes 1-10
and then, at minute 11, sends a large batch of data to the sink, while the
upstream moves on to processing minutes 10-20. At that point the sink is
blocked by the large volume of data, which produces backpressure that is fed
back to the upstream, so the upstream slows down. But if there were no
backpressure mechanism, the sink could actually finish writing at its own
pace during minutes 10-20; the only difference is that it sends a
backpressure signal, which slows the upstream processing down. I'm not sure
whether I understand this correctly.

The reason for emitting every 10 minutes is that the upstream has too much
data, so I pre-aggregate it with a window first; the traffic is currently
close to 800 MB per second.



Shammon FY wrote on Thu, 20 Oct 2022 at 11:48:
>
> If 10 minutes is a hard requirement but the keys are fairly spread out,
> you could try adding resources and increasing the parallelism, to reduce
> the amount of data each task sends out.
>
> On Thu, Oct 20, 2022 at 9:49 AM yidan zhao  wrote:
>
> > This description contradicts itself: if the write-out speed cannot keep
> > up and causes backpressure, then throttling the write-out speed would be
> > an even bigger problem. But you don't need to worry about that, because
> > you cannot control the write-out speed, only the write-out timing.
> >
> > The write-out timing is determined by the window end time and the
> > watermark, so if you really want to solve this, arrange the windowing so
> > that it does not always fire exactly on the fixed 10-minute boundary.
> >
> > macia kk wrote on Thu, 20 Oct 2022 at 00:57:
> > >
> > > I aggregate for 10 minutes and then emit; at the 10-minute mark, so
> > > much data has accumulated that the write-out cannot keep up, which
> > > causes backpressure, and then the upstream consumption slows down.
> > >
> > > Would it be better to throttle the write-out speed and let it write
> > > slowly?
> >
>


Can tumble windows cause backpressure?

2022-10-19 Thread macia kk
I aggregate for 10 minutes and then emit. At the 10-minute mark, because so
much data has accumulated, the write-out cannot keep up, which causes
backpressure, and then the upstream consumption slows down.

Would it be better to throttle the write-out speed and let it write slowly?


Large Hive dimension tables in Flink

2022-09-21 Thread macia kk
Hi,
  Flink's Hive dimension tables are held in memory. Can they be put into
state instead, so that using RocksDB would reduce the memory usage a bit?


Python code: brief

2022-07-26 Thread KK CHN
List ,

I am having difficulty understanding the code in this file; I cannot work
out exactly what the code is doing:

https://raw.githubusercontent.com/CODARcode/MDTrAnal/master/lib/codar/oas/MDTrSampler.py

I am new to this kind of scientific computing code, and it was written by
someone else. Because of a requirement I need to understand exactly what
these lines of code do. If someone could explain what the code blocks do, it
would be a great help.

Thanks in advance
Krish
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [users@httpd] site compromised and httpd log analysis

2022-07-06 Thread KK CHN
On Wed, Jul 6, 2022 at 8:33 AM Yehuda Katz  wrote:

> Your log doesn't start early enough. Someone uploaded a web shell (or
> found an existing web shell) to your server, possibly using an upload for
> that doesn't validate the input, then used that shell to run commands on
> your server.
>

Yes, that log did not go back far enough.

Here is an older log paste:
https://zerobin.net/?a4d9f5b146676594#hkpTU0ljaG5W0GUNVEsaYqvffQilrXavBmbK+V9mzUw=

This log starts earlier than the previous ones, which may help the
investigation.

I would consider your entire server to be compromised at this point since
> you have no record of what else the attacker could have done once they had
> a shell.
>
Yes, we took the server down and recreated the VM from an old backup. We
also informed the developer/maintainer about this simple.shell execution and
about the need for regular patching of the PHP 7 stack and the WordPress
framework they used for hosting.

I would like to know what other details/analysis we need to go through to
find out how the attacker got access, when the backdoor was installed, and
which vulnerability was exploited.

I request your tips for investigating further, finding the root cause of
this kind of attack, and preventing it in the future. (A small log-scanning
sketch follows.)
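As one concrete first step, a small hedged sketch that scans the combined
access log for the patterns that usually accompany an uploaded web shell
(requests to PHP files under upload/cache/tmp style paths); the log path and
the regex are assumptions to adapt.

#!/usr/bin/env python3
# Hedged sketch: flag access-log lines that often accompany an uploaded web shell.
import re
from collections import Counter

LOG = "/var/log/httpd/access_log"        # assumption: adjust to the vhost's log
SUSPECT = re.compile(
    r'"(POST|GET) (\S*(upload|cache|tmp|wp-content)\S*\.php)', re.IGNORECASE)

hits = Counter()
with open(LOG, errors="replace") as fh:
    for line in fh:
        m = SUSPECT.search(line)
        if m:
            client_ip = line.split()[0]
            hits[(client_ip, m.group(2))] += 1

# The (client IP, script path) pairs with the most hits are the first things
# to compare against file timestamps on disk.
for (client_ip, path), count in hits.most_common(20):
    print(f"{count:6d}  {client_ip:15s}  {path}")

Correlating those paths with the mtime/ctime of files in the upload
directories usually narrows down when the shell landed and through which
form it came in.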



Make sure that you do not allow users to upload files and then execute
> those files.
>
> - Y
>
> On Tue, Jul 5, 2022 at 9:53 PM KK CHN  wrote:
>
>> https://pastebin.com/YspPiWif
>>
>> One of the websites hosted  by a customer on our Cloud infrastructure was
>> compromised, and the attackers were able to replace the home page with
>> their banner html page.
>>
>> The log files output I have pasted above.
>>
>> The site compromised was PHP 7 with MySQL.
>>
>> From the above log, can someone point out what exactly happened and how
>> they are able to deface the home page.
>>
>> How to prevent these attacks ? What is the root cause of this
>> vulnerability  and how the attackers got access ?
>>
>> Any other logs or command line outputs required to trace back kindly let
>> me know what other details  I have to produce ?
>>
>> Kindly shed your expertise in dealing with these kind of attacks and
>> trace the root cause and prevention measures to block this.
>>
>> Regards,
>> Krish
>>
>>
>>


[users@httpd] site compromised and httpd log analysis

2022-07-05 Thread KK CHN
https://pastebin.com/YspPiWif

One of the websites hosted  by a customer on our Cloud infrastructure was
compromised, and the attackers were able to replace the home page with
their banner html page.

The log files output I have pasted above.

The site compromised was PHP 7 with MySQL.

From the above log, can someone point out what exactly happened and how
they were able to deface the home page?

How to prevent these attacks ? What is the root cause of this
vulnerability  and how the attackers got access ?

Any other logs or command line outputs required to trace back kindly let me
know what other details  I have to produce ?

Kindly shed your expertise in dealing with these kind of attacks and trace
the root cause and prevention measures to block this.

Regards,
Krish


[users@httpd] Defaced Website : Few forensic tips and help

2022-07-04 Thread KK CHN
List ,

https://pastebin.com/YspPiWif

One of our PHP websites was hacked on 3rd July 2022. I am attaching the
httpd access log contents in the above pastebin. I have hidden the original
URL of the website due to an SLA policy.

Can anybody point out from the logs what exactly allowed the attacker to
bring the site down?

Did the attacker use this PHP site itself to carry out the attack?

If any other logs or command-line outputs are needed, let me know and I will
share the required files. I am new to this area of forensic analysis for
finding the root cause of an attack.

Kindly share some tips on finding where the vulnerability is and how to
prevent it in the future.

If any more inputs/details are required, keep me informed and I can share
those too.

Regards,
Krish


[users@httpd] Slow web site response..PHP-8/CSS/Apache/

2022-06-23 Thread KK CHN
List,

I am facing slow response times for a hosted PHP 8 website: it takes 30
seconds to load the site fully. The application and the database
(PostgreSQL) run separately on two virtual machines in an OpenStack cloud,
on the 10.184.x.221 and 10.184.y.221 networks respectively.



When I use tools like GTmetrix and WebPageTest.org, they report
render-blocking resources:

"Resources are blocking the first paint of your page. Consider delivering
critical JS/CSS inline and deferring all non-critical JS/styles."

Resources that *may* be contributing to render-blocking include:
URL                                     Transfer Size   Download Time
xxx.mysite.com/css/bootstrap.min.css    152 KB          6.6 s
xxx.mysite.com/css/style.css            14.2 KB         5.9 s
xxx.mysite.com/css/font/font.css        3.33 KB         5.7 s

Here, bootstrap.min.css alone has a TTFB of about 6 seconds, and full
loading of the website takes almost 24 more seconds, i.e. about 30 seconds
in total to render:

https://pastebin.mozilla.org/SX3Cyhpg


The GTmetrix report also shows this issue:

"The Critical Request Chains below show you what resources are loaded with a
high priority. Consider reducing the length of chains, reducing the download
size of resources, or deferring the download of unnecessary resources to
improve page load."

Maximum critical path latency: *24.9s*



How can I overcome this issue? Is it a VM performance issue, a PHP issue, an
Apache issue, or a problem with the PHP application's connection to the
database backend? (A small timing sketch follows.)
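One way to narrow it down is to time each render-blocking asset both from
the application VM itself and from outside it; a hedged sketch (the URLs are
the placeholders from the report above):

#!/usr/bin/env python3
# Hedged sketch: rough TTFB and full-download timing for the blocking assets.
import time
import urllib.request

URLS = [                                   # placeholders from the GTmetrix report
    "https://xxx.mysite.com/css/bootstrap.min.css",
    "https://xxx.mysite.com/css/style.css",
    "https://xxx.mysite.com/css/font/font.css",
]

for url in URLS:
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=60) as resp:
        ttfb = time.perf_counter() - start   # headers received, roughly the TTFB
        resp.read()                          # then pull the full body
    total = time.perf_counter() - start
    print(f"{url}: TTFB {ttfb:.2f}s, total {total:.2f}s")

If the numbers are small when run on the app VM but large from outside, look
at the network path and TLS handshake; if they are large everywhere, the
static files themselves are being served slowly (for example no caching or
compression, or Apache workers tied up by slow PHP requests to the remote
database).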

Excuse me if this is an off-topic post for the httpd list. I hope many
people here have experience to share on how to troubleshoot this, or on what
the root cause of such a slow site response might be.

Kindly shed some light here. Any hints on where to start are most welcome.

If any more data is needed, please let me know and I can share it.

Thanks in advance,
Krish.


[jira] [Commented] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-15 Thread kk (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17492824#comment-17492824
 ] 

kk commented on SPARK-38115:


Is there any config as such to stop using FileOutputCommitter? We didn't set
any conf explicitly to choose a committer.

Moreover, when overwriting on s3:// I don't have the _temporary problem; the
problem only appears when the path uses s3a://.

I am just looking for a conf/option that would let me use a separate
temporary/staging location while keeping the target path as the primary
output.

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Minor
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-15 Thread kk (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17492785#comment-17492785
 ] 

kk commented on SPARK-38115:


Hello [~hyukjin.kwon], did you get a chance to look into this?

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Minor
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-07 Thread kk (Jira)


[ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17488445#comment-17488445
 ] 

kk commented on SPARK-38115:


Thanks [~hyukjin.kwon] for responding.

Basically I am trying to write data to S3 from a Spark dataframe, and Spark
uses FileOutputCommitter for this.

[https://stackoverflow.com/questions/46665299/spark-avoid-creating-temporary-directory-in-s3]

My requirement is either to redirect the '_temporary' path to a different S3
bucket and have the output copied to the original bucket, via some Spark
conf or a parameter on the write step,

or

to stop creating '_temporary' altogether when writing to S3.

Because the bucket is version-enabled, the _temporary objects are retained
as versions even though they are no longer physically present.

Below is the write step:

df.coalesce(1).write.format('parquet').mode('overwrite').save('s3a://outpath')
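For reference, a hedged PySpark sketch of the settings usually tried for
this, not an official fix for this ticket: the S3A committer lines assume
the hadoop-aws / spark-hadoop-cloud artifacts are on the classpath, and the
exact behaviour depends on the Spark/Hadoop versions in use.

# Hedged sketch: committer-related configs commonly used to avoid the classic
# _temporary rename path on s3a://. Assumes hadoop-aws / spark-hadoop-cloud
# jars are available; behaviour varies by version.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("s3a-committer-sketch")
    # v2 of the FileOutputCommitter renames task output directly into the
    # destination, shrinking (not removing) the _temporary window.
    .config("spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version", "2")
    # The S3A "magic" committer avoids relying on directory rename under the
    # destination prefix.
    .config("spark.hadoop.fs.s3a.committer.name", "magic")
    .config("spark.hadoop.fs.s3a.committer.magic.enabled", "true")
    .config("spark.sql.sources.commitProtocolClass",
            "org.apache.spark.internal.io.cloud.PathOutputCommitProtocol")
    .config("spark.sql.parquet.output.committer.class",
            "org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter")
    .getOrCreate()
)

df = spark.range(10)
df.coalesce(1).write.format("parquet").mode("overwrite").save("s3a://bucket/outpath")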

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: Spark Core
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Minor
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-38115) No spark conf to control the path of _temporary when writing to target filesystem

2022-02-04 Thread kk (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-38115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kk updated SPARK-38115:
---
Description: No default spark conf or param to control the '_temporary' 
path when writing to filesystem.  (was: There is default spark conf or param to 
control the '_temporary' path when writing to filesystem.)

> No spark conf to control the path of _temporary when writing to target 
> filesystem
> -
>
> Key: SPARK-38115
> URL: https://issues.apache.org/jira/browse/SPARK-38115
> Project: Spark
>  Issue Type: Improvement
>  Components: PySpark, Spark Core, Spark Shell, Spark Submit
>Affects Versions: 2.4.8, 3.2.1
>Reporter: kk
>Priority: Major
>  Labels: spark, spark-conf, spark-sql, spark-submit
>
> No default spark conf or param to control the '_temporary' path when writing 
> to filesystem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



Re: [blink-dev] Re: Intent to extend the origin trial: WebTransport over HTTP/3

2022-01-20 Thread kk as
Hi,
  Can you please let me know what transport protocol the Streams API uses in
WebTransport over HTTP/3 (QUIC)? I am assuming the Datagram API uses UDP for
transport. Can you also please tell me what the difference in latency is
between sending data with the Streams API and with the Datagram API?


thanks

On Wednesday, October 27, 2021 at 10:34:56 PM UTC-7 Yutaka Hirano wrote:

> On Thu, Oct 28, 2021 at 2:38 AM Joe Medley  wrote:
>
>> Hi,
>>
>> Can I get some clarification?
>>
>> So this extends the origin trial through 96, but you don't know yet 
>> whether it will ship in 97? Is this correct?
>>
> We're shipping WebTransport over HTTP/3 in 97.
>
>
>> Joe
>> Joe Medley | Technical Writer, Chrome DevRel | jme...@google.com | 
>> 816-678-7195 <(816)%20678-7195>
>> *If an API's not documented it doesn't exist.*
>>
>>
>> On Mon, Oct 25, 2021 at 1:00 AM Mike West  wrote:
>>
>>> LGTM3.
>>>
>>> -mike
>>>
>>>
>>> On Thu, Oct 21, 2021 at 9:58 PM Daniel Bratell  
>>> wrote:
>>>
 For a gapless origin trial->shipping it is important to be sure we 
 don't overlook any feedback in the race to shipping. The normal process 
 has 
 gaps built in which form natural points to do that final polish based on 
 received feedback and that will be missing here.

 It does sound like the feedback has been positive though and that there 
 are no known problems that can't be fixed after shipping, and with that in 
 mind:

 LGTM2
 On 2021-10-21 21:53, Yoav Weiss wrote:

 Discussing amongst the API owners (Alex, Daniel, Rego and myself), this 
 is essentially a request for a gapless OT, only that the would-be-gap is 
 slightly longer than usual. Given the evidence 
 
  of 
 developer feedback presented in the I2S, that seems like a reasonable 
 request. 

 LGTM1 (as gapless OT requests require 3 LGTMs)

 On Monday, October 18, 2021 at 10:39:14 AM UTC+2 Yutaka Hirano wrote:

> Contact emails
>
> yhi...@chromium.org,vas...@chromium.org
>
> Explainer
>
> https://github.com/w3c/webtransport/blob/main/explainer.md
>
> Design docs/spec
>
> Specification: https://w3c.github.io/webtransport/#web-transport
>
>
> https://docs.google.com/document/d/1UgviRBnZkMUq4OKcsAJvIQFX6UCXeCbOtX_wMgwD_es/edit
>
> TAG review
>
> https://github.com/w3ctag/design-reviews/issues/669
>
>
> Summary
>
> WebTransport is an interface representing a set of reliable/unreliable 
> streams to a server. The interface potentially supports multiple 
> protocols, 
> but based on discussions on the IETF webtrans working group, we are 
> developing WebTransport over HTTP/3 which uses HTTP3 as the underlying 
> protocol.
>
> Note that we were developing QuicTransport a.k.a. WebTransport over 
> QUIC and we ran an origin trial M84 through M90. It uses the same 
> interface 
> WebTransport, but because of the protocol difference ("quic-transport" 
> vs. 
> "https") it is difficult for web developers to be confused by them.
>
> new WebTransport("quic-transport://example.com:9922")
>
> represents a WebTransport over QUIC connection, and
>
> new WebTransport("https://example.com:9922;)
>
> represents a WebTransport over HTTP/3 connection.
>
> Goals for experimentation
>
> We're shipping the API in M97 
> .
>  
> Twitch, one of our partners, wants to continue their experiment until the 
> API is fully shipped. I think this is a reasonable request given we 
> originally aimed to ship the feature in M96 but we missed the branch 
> point.
>
> The original goals follow:
>
> To see whether the API (and the implementation) is useful in various 
> circumstances.
>
> Our partners want to evaluate this API on various network 
> circumstances (i.e., lab environments are not enough) to see its 
> effectiveness.
>
> We also expect feedback for performance.
>
> Experimental timeline
>
> M95 and M96
>
> Ongoing technical constraints
>
> None
>
> Debuggability
>
> The devtools support is under development.
>
> Just like with regular HTTP/3 traffic, the detailed information about 
> the connection can be obtained via chrome://net-export interface.
>
> Will this feature be supported on all six Blink platforms (Windows, 
> Mac, Linux,
>
> Chrome OS, Android, and Android WebView)?
>
> Yes
>
> Is this feature fully tested by web-platform-tests 
> 
> ?

[s2putty-developers] RE:

2022-01-07 Thread kk




2021 is almost over. Are you worried about having no customers and no orders?
Have you considered using software to improve your work efficiency and
develop overseas customers?

Global search-engine data, customs data, decision-maker analysis, social
platform search, WhatsApp customer phone search and other customer-acquisition
modes help you solve the problem of having no customers and improve your work
efficiency and quality.

Email marketing plus WhatsApp marketing and other marketing methods improve
work efficiency, quickly bring in prospective customers, and speed up
conversion.

QQ: 1203046899
WeChat: 18617145735
Enquiries welcome.

2021 is almost over. Are you worried about having no customers and no orders?
Have you considered using software to improve your work efficiency and
develop overseas customers?

Global search-engine data, customs data, decision-maker analysis, social
platform search, WhatsApp customer phone search and other customer-acquisition
modes help you solve the problem of having no customers and improve your work
efficiency and quality.

Email marketing plus WhatsApp marketing and other marketing methods improve
work efficiency, quickly bring in prospective customers, and speed up
conversion.

QQ: 2890057524
WeChat: 13247602337 (also the mobile number)
Enquiries welcome.






___
s2putty-developers mailing list
s2putty-developers@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/s2putty-developers


unsubscribe

2021-11-15 Thread kk
unsubscribe



Re: [squid-dev] request for change handling hostStrictVerify

2021-11-01 Thread kk

On Saturday, October 30, 2021 01:14 GMT, Alex Rousskov 
 wrote:
 On 10/29/21 8:37 PM, Amos Jeffries wrote:
> On 30/10/21 11:09, Alex Rousskov wrote:
>> On 10/26/21 5:46 PM, k...@sudo-i.net wrote:
>>
>>> - Squid enforces the Client to use SNI
>>> - Squid lookup IP for SNI (DNS resolution).
>>> - Squid forces the client to go to the resolved IP
>>
>> AFAICT, the above strategy is in conflict with the "SECURITY NOTE"
>> paragraph in host_verify_strict documentation: If Squid strays from the
>> intended IP using client-supplied destination info, then malicious
>> applets will escape browser IP-based protections. Also, SNI obfuscation
>> or encryption may make this strategy ineffective or short-lived.
>>
>> AFAICT, in the majority of deployments, the mismatch between the
>> intended IP address and the SNI/Host header can be correctly handled
>> automatically and without creating serious problems for the user. Squid
>> already does the right thing in some cases. Somebody should carefully
>> expand that coverage to intercepted traffic. Frankly, I am somewhat
>> surprised nobody has done that yet given the number of complaints!

> IIRC the "right thing" as defined by TLS for SNI verification is that it
> be the same as the host/domain name from the wrapper protocol (i.e. the
> Host header / URL domain from HTTPS messages). Since Squid uses the SNI
> at step2 as Host value it already gets checked against the intercepted IP


Just to avoid misunderstanding, my email was _not_ about SNI
verification. I was talking about solving the problem this thread is
devoted to (and a specific solution proposed in the opening email on the
thread).

Alex.
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev

Thanks Alex & Amos.

Not sure what you mean by "Somebody should carefully expand that coverage
to intercepted traffic"?
> then malicious applets will escape browser IP-based protections.
IP-based protection should be performed at the browser (client) level and
should therefore not depend on traversing Squid.



-- 
Kevin Klopfenstein
Bellevuestrasse 103
3095 Spiegel, CH
sudo-i.net


smime.p7s
Description: S/MIME cryptographic signature
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


[ovs-dev] unknown OpenFlow message (version 4, type 18, stat 21)

2021-10-31 Thread kk Yoon
To add a wireless parameter request message to openvswitch, I went through
the following process.

1. Wireless parameter message definition
enum ofptype {
OFPTYPE_WPARAMS_REQUEST, /* OFPRAW_OFPST13_WPARAMS_REQUEST. */
OFPTYPE_WPARAMS_REPLY, /* OFPRAW_OFPST13_WPARAMS_REPLY. */
}

enum ofpraw {
/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REQUEST,

/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REPLY
}

2. Definition of the processing function,
static enum ofperr
handle_wparams_request(struct ofconn* ofconn, const struct ofp_header* oh)
{
VLOG_WARN("handle_wparams_request() called\n");
struct ofpbuf* buf;

buf = ofpraw_alloc_reply(OFPRAW_OFPST13_WPARAMS_REPLY, oh, 0);
ofconn_send_reply(ofconn, buf);
return 0;
}

static enum ofperr
handle_single_part_openflow(struct ofconn *ofconn, const struct ofp_header
*oh,
enum ofptype type)
OVS_EXCLUDED(ofproto_mutex)
{
// VLOG_INFO("type : %d vs %d", type, OFPTYPE_GET_TXPOWER_REQUEST);

switch (type) {
case OFPTYPE_WPARAMS_REQUEST:
return handle_wparams_request(ofconn, oh);
}

But /var/log/openvswitch/ovs-vswitchd.log prints
2021-09-10T08:18:32.850Z|18277|ofp_msgs|WARN|unknown OpenFlow message
(version 4, type 18, stat 21)
What is the problem?
Thank you.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[ovs-dev] unknown OpenFlow message (version 4, type 18, stat 21)

2021-10-29 Thread kk Yoon
To add a wireless parameter request message to the openvswitch, we went
through the following process.

1. wireless parameter message definition
enum ofptype {
OFPTYPE_WPARAMS_REQUEST, /* OFPRAW_OFPST13_WPARAMS_REQUEST. */
OFPTYPE_WPARAMS_REPLY, /* OFPRAW_OFPST13_WPARAMS_REPLY. */
}

enum ofpraw {
/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REQUEST,

/* OFPST 1.3+ (21): void. */
OFPRAW_OFPST13_WPARAMS_REPLY
}

2. Definition of processing function,
static enum ofperr
handle_wparams_request(struct ofconn* ofconn, const struct ofp_header* oh)
{
VLOG_WARN("handle_wparams_request() called\n");
struct ofpbuf* buf;

buf = ofpraw_alloc_reply(OFPRAW_OFPST13_WPARAMS_REPLY, oh, 0);
ofconn_send_reply(ofconn, buf);
return 0;
}

static enum ofperr
handle_single_part_openflow(struct ofconn *ofconn, const struct ofp_header
*oh,
enum ofptype type)
OVS_EXCLUDED(ofproto_mutex)
{
// VLOG_INFO("type : %d vs %d", type, OFPTYPE_GET_TXPOWER_REQUEST);

switch (type) {
case OFPTYPE_WPARAMS_REQUEST:
return handle_wparams_request(ofconn, oh);
}

but /var/log/openvswitch/ovs-vswitchd.log prints
2021-09-10T08:18:32.850Z|18277|ofp_msgs|WARN|unknown OpenFlow message
(version 4, type 18, stat 21)
What's the problem?
Thank you.
___
dev mailing list
d...@openvswitch.org
https://mail.openvswitch.org/mailman/listinfo/ovs-dev


[squid-dev] request for change handling hostStrictVerify

2021-10-26 Thread kk

Hi Guys!
Sorry, I was unsure whether this is the correct point of contact regarding
hostStrictVerify.

I think I am not the only one having issues with hostStrictVerify in
scenarios where you simply intercept (TLS) traffic and Squid checks whether
the IP address the client connected to matches what Squid resolves for the
SNI. The major issue with that approach is that many services today change
their DNS records at a very high frequency, so it is almost impossible to
ensure that the client and Squid have the same A record cached.

My proposal to resolve this issue would be the following (a small sketch of
the logic follows the list):
- Squid requires the client to send an SNI. (Currently this is not enforced,
  which can be considered a security issue, because it lets you bypass any
  hostname rules.)
- Squid looks up the IP for the SNI (DNS resolution).
- Squid forces the client's connection to go to the resolved IP (thus
  ignoring the destination IP taken from the client's L3 information).
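To make the proposal concrete, a hedged sketch of the logic outside Squid
(this is not Squid code): resolve the client-supplied SNI on the proxy side
and connect to one of those addresses, instead of verifying against the IP
the client happened to have cached.

# Hedged sketch of the proposed logic, illustrated outside Squid.
import socket

def resolved_addrs(sni, port=443):
    """Addresses the proxy itself resolves for the client's SNI."""
    infos = socket.getaddrinfo(sni, port, proto=socket.IPPROTO_TCP)
    return {info[4][0] for info in infos}

client_dst_ip = "93.184.216.34"   # destination IP taken from the intercepted flow
sni = "example.com"

addrs = resolved_addrs(sni)
if client_dst_ip in addrs:
    upstream = client_dst_ip      # strict verification would have passed anyway
else:
    # With fast-rotating DNS this mismatch happens for honest clients too,
    # which is why strict verification against the client's IP keeps failing.
    upstream = sorted(addrs)[0]
print("connect upstream to", upstream)

Whether ignoring the client's original destination IP is acceptable is a
separate security question (see the follow-up discussion on the list); this
only shows the mechanics.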

Any thoughts?


many thanks & have a nice day,

Kevin

-- 
Kevin Klopfenstein
sudo-i.net


smime.p7s
Description: S/MIME cryptographic signature
___
squid-dev mailing list
squid-dev@lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-dev


Re: Combine multiple wasm files

2021-09-01 Thread Mehaboob kk
Sorry, I missed this post. Thank you for answering my question. What I am
doing now is building multiple .a files and combining them into one wasm
file at link time.
I tried to reverse engineer the wasm and .a files back to C source, and what
I get is still a wat-like format rather than the original C/C++ code I
started with. However, someone intimately familiar with wasm may be able to
figure out the source, just as an assembly language expert can follow the
logic. So now I am trying to apply some source-level obfuscation, since
binary obfuscation tools are not available for wasm.
Do you think my understanding is correct? Are there any tools out there that
can reproduce the exact C code from a wasm binary?




On Sat, Jun 19, 2021 at 8:23 PM 'Sam Clegg' via emscripten-discuss <
emscripten-discuss@googlegroups.com> wrote:

> What is the current mechanism for loading the wasm file you are
> supplying?  Are you using emscripten's dynamic linking capability (i.e.
> MAIN_MODULE + SIDE_MODULE?).
>
> If that answer is yes, and you are asking about linking a SIDE_MODULE into
> the MAIN_MODULE ahead of time, its not something that is supported no.
> Also, shared libraries (side moudles) are are only slightly more obfuscated
> than object files and `.a` archives.  They are all in the wasm format which
> is fairly easy to disassembly.  If you want to try to prevent decompilation
> or disassembly you would need to do more than just ship as a shared library
> (side module) you would also need to perform some kind of obfuscation,
> which by its nature (and the nature of WebAssemlby in particular) is always
> going to have limits.
>
> On Sat, Jun 19, 2021 at 3:45 PM Mehaboob kk  wrote:
>
>> Hello,
>>
>> Is it possible to combine multiple .wasm files to one single .wasm file?
>>
>> Scenario:
>> I want to share a library(SDK) to an end customer who is building the
>> .wasm/JS application. Customer concerned that loading multiple wasm files
>> is not efficient. So we wanted to combine two wasm files. Although I can
>> share static lib generated using Emscripten, its not preferred because .a
>> file is easy to reverse compared to wasm right?
>>
>> Any inputs on this please?
>>
>> Thank you
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "emscripten-discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to emscripten-discuss+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcCQC45LU%3DNMFev5PYjn%2BmzOOgDfhXx0ahDXq8GqKRVjJQ%40mail.gmail.com
>> <https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcCQC45LU%3DNMFev5PYjn%2BmzOOgDfhXx0ahDXq8GqKRVjJQ%40mail.gmail.com?utm_medium=email_source=footer>
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "emscripten-discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to emscripten-discuss+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/emscripten-discuss/CAL_va29zxMEFFi8gHEP5C0pBJ0idQh8qeqT%2B1S5v%3D9XO5BMKNA%40mail.gmail.com
> <https://groups.google.com/d/msgid/emscripten-discuss/CAL_va29zxMEFFi8gHEP5C0pBJ0idQh8qeqT%2B1S5v%3D9XO5BMKNA%40mail.gmail.com?utm_medium=email_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcA7%3DS0SD%2Bas_BZGvwKD95DT-%2B32%2Bk3zn_bDfNL77LhnOg%40mail.gmail.com.


[ovirt-users] Injecting VirtIO drivers : Query

2021-08-25 Thread KK CHN
Hi,

I am in the process of importing multi-disk Windows VMs from a Hyper-V
environment to my OpenStack setup (Ussuri version, Glance and QEMU-KVM).

I am referring to the online documents linked below, but is it still
relevant to inject VirtIO drivers into the Windows VMs (the articles date
back to 2015)? Somewhere it mentions that this is necessary when you perform
a P2V migration.

Is VirtIO driver injection necessary in my case? I am exporting from Hyper-V
and importing into OpenStack.

1. Kindly advise me on the relevance of VirtIO injection and whether it
applies to my requirements.

2. Is there any up-to-date reference material for importing multi-disk
Windows VMs into OpenStack (Ussuri, Glance and KVM)? Or am I attempting
something impossible and beating around the bush?

These are the links I referred to, but they are quite old; is their content
still applicable? (The Windows VMs I need to import to OpenStack are Windows
Server 2012, 2008 and 2003.)

https://superuser.openstack.org/articles/how-to-migrate-from-vmware-and-hyper-v-to-openstack/

https://portal.nutanix.com/page/documents/kbs/details?targetId=kA00e00kAWeCAM

Kris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YXAGSMCIE44FBN4GXJSZEG2XHTTM5NFU/


[ovirt-users] Automigration of VMs from other hypervisors

2021-08-11 Thread KK CHN
Hi list,

I am in the process of migrating 150+ VMs running on RHV-M 4.1 to a
KVM-based OpenStack installation (Ussuri, with KVM and Glance as image
storage).

What I am doing now: manually shut down each VM through the RHV-M GUI,
export it to the export domain, scp the image files of each VM to our
OpenStack controller node, upload them to Glance, and create each VM
manually.

Query 1:
Is there a better way to automate this migration with some utility or
script? Has anyone done this kind of automated migration before, and what
was your approach? What would be a better approach than doing the migration
manually?

Or do I have to repeat the process manually for all 150+ virtual machines?
(The guest VMs are CentOS 7 and Red Hat Linux 7, with LVM data partitions
attached.) A rough automation sketch follows.
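Not a polished tool, just a hedged sketch of the loop people usually end up
scripting: pull each disk out of RHV with the SDK example script, then push
it into Glance. It assumes the oVirt SDK examples are installed
(download_disk.py with an ovirt.conf containing an [engine-dev] section),
the openstack CLI is configured on the same host, and a disks.txt with
"vm_name disk_id" pairs has been gathered from the API beforehand.

#!/usr/bin/env python3
# Hedged sketch: bulk-download RHV disks and upload them to Glance.
import subprocess

DOWNLOAD = "/usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py"

with open("disks.txt") as fh:              # assumption: "vm_name disk_id" per line
    for line in fh:
        if not line.strip():
            continue
        vm_name, disk_id = line.split()
        image = f"/var/tmp/{vm_name}-{disk_id}.raw"

        # 1. Pull the disk out of RHV, collapsing snapshots into a raw image.
        subprocess.run(["python3", DOWNLOAD, "-c", "engine-dev", disk_id, image],
                       check=True)

        # 2. Push the raw image into Glance.
        subprocess.run(["openstack", "image", "create",
                        "--disk-format", "raw", "--container-format", "bare",
                        "--file", image, f"{vm_name}-{disk_id}"],
                       check=True)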

Kindly share your thoughts..

Query 2:

Besides these 150+ Red Hat Linux 7 and CentOS VMs on RHV-M 4.1, I have to
migrate 50+ VMs hosted on Hyper-V.

What is the method/approach for exporting from Hyper-V and importing into
OpenStack Ussuri with Glance and the KVM hypervisor? (This is the first time
I will be working with Hyper-V, and I have little idea about exporting from
Hyper-V and importing into KVM.)

Can the images exported from Hyper-V (VHDX disk images, from VMs with a
single disk or multiple disks, at most 3) be imported into KVM directly?
Does KVM support this, or do the VHDX disk images need to be converted to
another format? What would be the best approach for the Hyper-V hosted VMs
(Windows 2012 and Linux guests) to be imported into the KVM-based OpenStack
(Ussuri, with Glance as image storage)?

Thanks in advance

Kris
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/I7KSQLVOSV5I6QGBAYC4U7SWQIJ2PPC5/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-05 Thread KK CHN
its mount point? Or any other suggestions or corrections? Because it is a
live host, I can't do trial and error on the service maintainer's RHV host
machines.

Kindly correct me if anything is wrong in my steps. I have to run this
script from my laptop against the RHV host machines without breaking
anything.

Kindly guide me.

Kris

On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:

> On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> >
> > I have asked our VM maintainer to run the  command
> >
> > # virsh -r dumpxml vm-name_blah//as Super user
> >
> > But no output :   No matching domains found that was the TTY  output on
> that rhevm node when I executed the command.
> >
> > Then I tried to execute #  virsh list //  it doesn't list any VMs
> !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> License key or something to list Vms or  to dumpxml   with   virsh ? or its
> CLI commands ?
>
> RHV undefine the vms when they are not running.
>
> > Any way I want to know what I have to ask the   maintainerto provide
> a working a working  CLI   or ? which do the tasks expected to do with
> command line utilities in rhevm.
> >
> If the vm is not running you can get the vm configuration from ovirt
> using the API:
>
> GET /api/vms/{vm-id}
>
> You may need more API calls to get info about the disks, follow the <link>
> in the returned xml.
>
> > I have one more question :Which command can I execute on an rhevm
> node  to manually export ( not through GUI portal) a   VMs to   required
> format  ?
> >
> > For example;   1.  I need to get  one  VM and disks attached to it  as
> raw images.  Is this possible how?
> >
> > and another2. VM and disk attached to it as  Ova or( what other good
> format) which suitable to upload to glance ?
>
> Arik can add more info on exporting.
>
> >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> be the images exported to which path to specify ? to the host node(if the
> host doesn't have space  or NFS mount ? how to specify the target location
> where the VM image get stored in case of NFS mount ( available ?)
>
> You have 2 options:
> - Download the disks using the SDK
> - Export the VM to OVA
>
> When exporting to OVA, you will always get qcow2 images, which you can
> later
> convert to raw using "qemu-img convert"
>
> When downloading the disks, you control the image format, for example
> this will download
> the disk in any format, collapsing all snapshots to the raw format:
>
>  $ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
>
> This requires ovirt.conf file:
>
> $ cat ~/.config/ovirt.conf
> [engine-dev]
> engine_url = https://engine-dev
> username = admin@internal
> password = mypassword
> cafile = /etc/pki/vdsm/certs/cacert.pem
>
> Nir
>
> > Thanks in advance
> >
> >
> > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> >>
> >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >> >
> >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >> >
> >> > Now I am in the process of migrating  those VMs to  my cloud setup
> with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >> >
> >> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >> >
> >> > There are three folders  which contain images for each VM .
> >> > These folders contain the base OS image, and attached LVM disk images
> ( from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >> >
> >> > Is there a way to  get all these images to be exported as  Single
> image file Instead of  multiple image files from Rhevm it self.  Is this
> possible ?
> >> >
> >> > If possible how to combine e all these disk images to a single image
> and that image  can upload to our  cloud  glance storage as a single image ?
> >>
> >> It is not clear what is the vm you are trying to export. If you share
> >> the libvirt xml
> >> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
> >>
> >> RHV supports download of disks to one image per disk, which you can move
> >> to another system.
> >>
> >> We also have export to ova, which creates one tar file with all
> exported disks,
> >> if this helps.
> >>
> >> Nir
> >>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CQ5RAHW3E6F5IL6QYOG7W3P3BI35MJSU/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-04 Thread KK CHN
Thanks to everyone for sharing the valuable information.

1. I am downloading CentOS 8, since the Python oVirt SDK installation notes
say it works on CentOS 8, and I need to set up a VM with this OS and install
the oVirt Python SDK on it. The requirement is that this CentOS 8 VM should
be able to communicate with the RHV-M 4.1 host node where the ovirt shell
("Rhevm Shell [connected]#") is available, right?

2. Pinging the host that has the "Rhevm Shell [connected]#" prompt, and being
able to ssh to it from the CentOS 8 VM where python3 and the oVirt SDK are
installed and where the script (with the ovirt configuration file) will run:
are these two connectivity checks enough for executing the script, or do any
other protocols need to be enabled in the firewall between these two
machines?

3. While googling I saw a post,
https://users.ovirt.narkive.com/CeEW3lcj/ovirt-users-clone-and-export-vm-by-ovirt-shell
with:

action vm myvm export --storage_domain-name myexport

Will this command do the export, and in which format will it export to the
export domain? Is there any option for this command to specify a supported
format for the exported VM image?

Does this need to be executed from the "Rhevm Shell [connected]#" TTY?
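On question 3: as an alternative to the old ovirt-shell, here is a hedged
sketch with the Python SDK (python3-ovirt-engine-sdk4) that looks a VM up
and lists its attached disks. The URL and credentials are placeholders, and
this only reads the configuration, it does not export anything. The SDK
talks to the RHV-M engine's REST API over HTTPS, so the CentOS 8 VM needs
HTTPS (443) access to the engine rather than ssh.

# Hedged sketch with the oVirt Python SDK: fetch a VM and list its disks.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url="https://engine.example/ovirt-engine/api",   # assumption: engine FQDN
    username="admin@internal",
    password="mypassword",
    ca_file="ca.pem",
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search="name=myvm")[0]          # the VM to inspect
vm_service = vms_service.vm_service(vm.id)

for attachment in vm_service.disk_attachments_service().list():
    disk = connection.follow_link(attachment.disk)
    print(vm.name, disk.id, disk.alias, disk.provisioned_size)

connection.close()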



On Wed, Aug 4, 2021 at 1:00 PM Vojtech Juranek  wrote:

> On Wednesday, 4 August 2021 03:54:36 CEST KK CHN wrote:
> > On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:
> > > On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> > > > I have asked our VM maintainer to run the  command
> > > >
> > > > # virsh -r dumpxml vm-name_blah//as Super user
> > > >
> > > > But no output :   No matching domains found that was the TTY  output
> on
> > >
> > > that rhevm node when I executed the command.
> > >
> > > > Then I tried to execute #  virsh list //  it doesn't list any VMs
> > >
> > > !!!   ( How come this ? Does the Rhevm node need to enable any CLI
> with
> > > License key or something to list Vms or  to dumpxml   with   virsh ? or
> > > its
> > > CLI commands ?
> > >
> > > RHV undefine the vms when they are not running.
> > >
> > > > Any way I want to know what I have to ask the   maintainerto
> provide
> > >
> > > a working a working  CLI   or ? which do the tasks expected to do with
> > > command line utilities in rhevm.
> > >
> > > If the vm is not running you can get the vm configuration from ovirt
> > >
> > > using the API:
> > > GET /api/vms/{vm-id}
> > >
> > > You may need more API calls to get info about the disks, follow the
> > > <link>
> > > in the returned xml.
> > >
> > > > I have one more question :Which command can I execute on an rhevm
> > >
> > > node  to manually export ( not through GUI portal) a   VMs to
>  required
> > > format  ?
> > >
> > > > For example;   1.  I need to get  one  VM and disks attached to it
> as
> > >
> > > raw images.  Is this possible how?
> > >
> > > > and another2. VM and disk attached to it as  Ova or( what other
> good
> > >
> > > format) which suitable to upload to glance ?
> > >
> > > Arik can add more info on exporting.
> > >
> > > >   Each VMs are around 200 to 300 GB with disk volumes ( so where
> should
> > >
> > > be the images exported to which path to specify ? to the host node(if
> the
> > > host doesn't have space  or NFS mount ? how to specify the target
> location
> > > where the VM image get stored in case of NFS mount ( available ?)
> > >
> > > You have 2 options:
> > > - Download the disks using the SDK
> > > - Export the VM to OVA
> > >
> > > When exporting to OVA, you will always get qcow2 images, which you can
> > > later
> > > convert to raw using "qemu-img convert"
> > >
> > > When downloading the disks, you control the image format, for example
> > > this will download
> > >
> > > the disk in any format, collapsing all snapshots to the raw format:
> > >  $ python3
> > >
> > > /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> > > -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
> > >
> > > To perform this which modules/packages need to be installed in the
> rhevm
> >
> > host node ?  Does the rhevm hosts come with python3 installed by default
> ?
> > or I need to install  python3 on r

[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
On Wed, Aug 4, 2021 at 1:38 AM Nir Soffer  wrote:

> On Tue, Aug 3, 2021 at 7:29 PM KK CHN  wrote:
> >
> > I have asked our VM maintainer to run the  command
> >
> > # virsh -r dumpxml vm-name_blah//as Super user
> >
> > But no output :   No matching domains found that was the TTY  output on
> that rhevm node when I executed the command.
> >
> > Then I tried to execute #  virsh list //  it doesn't list any VMs
> !!!   ( How come this ? Does the Rhevm node need to enable any CLI  with
> License key or something to list Vms or  to dumpxml   with   virsh ? or its
> CLI commands ?
>
> RHV undefine the vms when they are not running.
>
> > Any way I want to know what I have to ask the   maintainerto provide
> a working a working  CLI   or ? which do the tasks expected to do with
> command line utilities in rhevm.
> >
> If the vm is not running you can get the vm configuration from ovirt
> using the API:
>
> GET /api/vms/{vm-id}
>
> You may need more API calls to get info about the disks, follow the <link>
> in the returned xml.
>
> > I have one more question :Which command can I execute on an rhevm
> node  to manually export ( not through GUI portal) a   VMs to   required
> format  ?
> >
> > For example;   1.  I need to get  one  VM and disks attached to it  as
> raw images.  Is this possible how?
> >
> > and another2. VM and disk attached to it as  Ova or( what other good
> format) which suitable to upload to glance ?
>
> Arik can add more info on exporting.
>
> >   Each VMs are around 200 to 300 GB with disk volumes ( so where should
> be the images exported to which path to specify ? to the host node(if the
> host doesn't have space  or NFS mount ? how to specify the target location
> where the VM image get stored in case of NFS mount ( available ?)
>
> You have 2 options:
> - Download the disks using the SDK
> - Export the VM to OVA
>
> When exporting to OVA, you will always get qcow2 images, which you can
> later
> convert to raw using "qemu-img convert"
>
> When downloading the disks, you control the image format, for example
> this will download
> the disk in any format, collapsing all snapshots to the raw format:
>
>  $ python3
> /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
> -c engine-dev 3649d84b-6f35-4314-900a-5e8024e3905c /var/tmp/disk1.raw
>
To perform this, which modules/packages need to be installed on the RHV
host node? Do the RHV hosts come with python3 installed by default, or do I
need to install python3 on the RHV node? And then use pip3 to install the
SDK that provides download_disk.py -- what is the module name for this SDK,
and are there any dependencies to install first, for example does Java need
to be installed on the RHV node?

One doubt: I came across virt-v2v while googling. Can virt-v2v be used on an
RHV node to export VMs to images, or does virt-v2v only support importing
from other hypervisors into RHV?

This requires ovirt.conf file:   // Does the ovirt.conf file need to be
created, or is it already there on any RHV node?

>
> $ cat ~/.config/ovirt.conf
> [engine-dev]
> engine_url = https://engine-dev
> username = admin@internal
> password = mypassword
> cafile = /etc/pki/vdsm/certs/cacert.pem
>
> Nir
>
> > Thanks in advance
> >
> >
> > On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:
> >>
> >> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >> >
> >> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >> >
> >> > Now I am in the process of migrating  those VMs to  my cloud setup
> with  OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >> >
> >> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >> >
> >> > There are three folders  which contain images for each VM .
> >> > These folders contain the base OS image, and attached LVM disk images
> ( from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >> >
> >> > Is there a way to  get all these images to be exported as  Single
> image file Instead of  multiple image files from Rhevm it self.  Is this
> possible ?
> >> >
> >> > If possible how to combine e all these disk images to a single image
> and that image  can upload to our  cloud  glance storage as a single image ?
> >>
> >> It is not clear what is the vm you are trying to export. If you share
> >> the libvirt xml
> >> of this vm it will be

[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-03 Thread KK CHN
I have asked our VM maintainer to run the  command

# virsh -r dumpxml vm-name_blah//as Super user

But no output :   No matching domains found that was the TTY  output on
that rhevm node when I executed the command.

Then I tried to execute "# virsh list"; it doesn't list any VMs either!
How come? Does the rhevm node need to enable the CLI with a license key or
something before it can list VMs or dumpxml with virsh or its other CLI
commands?

Anyway, I want to know what I have to ask the maintainer to provide: a working
CLI that can do the expected tasks with command line utilities in rhevm.

I have one more question: which command can I execute on an rhevm node to
manually export (not through the GUI portal) a VM to a required format?

For example: 1. I need to get one VM and the disks attached to it as raw
images. Is this possible, and how?

And another: 2. a VM and the disks attached to it as an OVA (or whatever other
format is suitable) for uploading to Glance?


  Each VM is around 200 to 300 GB including its disk volumes, so where should
the images be exported to, and which path should I specify? To the host node
(what if the host doesn't have space), or to an NFS mount? How do I specify
the target location where the VM image gets stored in the case of an NFS
mount, if one is available?

Thanks in advance


On Mon, Aug 2, 2021 at 8:22 PM Nir Soffer  wrote:

> On Mon, Aug 2, 2021 at 12:22 PM  wrote:
> >
> > I have  few VMs in   Redhat Virtualisation environment  RHeV ( using
> Rhevm4.1 ) managed by a third party
> >
> > Now I am in the process of migrating  those VMs to  my cloud setup with
> OpenStack ussuri  version  with KVM hypervisor and Glance storage.
> >
> > The third party is making down each VM and giving the each VM image
> with their attached volume disks along with it.
> >
> > There are three folders  which contain images for each VM .
> > These folders contain the base OS image, and attached LVM disk images (
> from time to time they added hard disks  and used LVM for storing data )
> where data is stored.
> >
> > Is there a way to  get all these images to be exported as  Single image
> file Instead of  multiple image files from Rhevm it self.  Is this possible
> ?
> >
> > If possible how to combine e all these disk images to a single image and
> that image  can upload to our  cloud  glance storage as a single image ?
>
> It is not clear what is the vm you are trying to export. If you share
> the libvirt xml
> of this vm it will be more clear. You can use "sudo virsh -r dumpxml
> vm-name".
>
> RHV supports download of disks to one image per disk, which you can move
> to another system.
>
> We also have export to ova, which creates one tar file with all exported
> disks,
> if this helps.
>
> Nir
>
>
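(Side note: since the OVA produced by the export is, as described above, a
single tar file containing all the exported disks, a quick way to inspect what
an exported OVA contains before converting or uploading it is a few lines of
Python -- the file name here is hypothetical.)

import tarfile

# List the members of an exported OVA (a plain tar archive: an OVF descriptor
# plus one image file per exported disk).
with tarfile.open('myvm.ova') as ova:
    for member in ova.getmembers():
        print(member.name, member.size)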
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/TGBJTVT6EME4TXQ3OHY7L6YXOGZXCRC6/


Re: Python Developer

2021-07-12 Thread RaviKiran Kk
We are looking for django developer
Contact 6309620745

On Fri, Jul 2, 2021, 23:56 Nagaraju Singothu 
wrote:

> Dear Group Members,
>
>  My name is Nagaraju, I have 2+ years of experience as a
> python developer in Ikya Software Solutions Pvt Ltd at Hyderabad. Please
> refer to my resume for your reference and I hope I will get a good response
> from you as soon as possible.
>
> Thanking you,
>
> With regards.
> Nagaraju,
> Mob:7659965869.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Django users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to django-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/django-users/CAMyGuAZ2pHMojwy-kAbmR3dQRDExZcPWV2QvkBoUMGnPNeLUYA%40mail.gmail.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/CAPEfQMT0n2x4sV4Bi1hvBgGqyTi3Qwj3aFpgPmBhnxQa2FqviA%40mail.gmail.com.


Re: counting the same words within a song added by a user using Django

2021-07-12 Thread RaviKiran Kk
We are looking for django developer
Plz contact 6309620745

On Mon, Jul 5, 2021, 17:07 DJANGO DEVELOPER  wrote:

> Hi there.
> I am developing a project based on adding songs to the user's library and
> to the home page.
> other users can also purchase the songs like wise people do shopping on
> eCommerce stores.
> *Problem:(Question)*
> The problem that I want to discuss here is that when a user adds a song
> through django forms, that song is then added to the user's personal
> library.
> now what I want to do is :
>
>
> *When the lyrics of a song are added as a record to the "Song" table, the
> individual words in that song should be added to a 2nd table with their
> frequency of usage within that song (so the words need to be counted and a
> signal needs to be created).Also, when a user adds the song to his/her
> personal library, all of the words from the song and their frequencies
> within that song should be added to another table and associated with that
> user.*
>
> how to count same word within a song?
>
> can anyone help me here?
> your help would be appreciated.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Django users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to django-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/django-users/bc7bc37b-6f26-465c-b330-d275ab86b76an%40googlegroups.com
> 
> .
>
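(A minimal sketch of the word-frequency part of the question quoted above:
Song and SongWord are hypothetical models and the field names are illustrative;
the point is only the counting pattern -- collections.Counter inside a
post_save signal, matching the "a signal needs to be created" wording in the
post.)

from collections import Counter

from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import Song, SongWord  # assumed models: Song.lyrics, SongWord(song, word, count)


@receiver(post_save, sender=Song)
def store_word_frequencies(sender, instance, created, **kwargs):
    # Only count words when the song is first added.
    if not created:
        return
    # Split the lyrics into lowercase words and count how often each occurs.
    frequencies = Counter(instance.lyrics.lower().split())
    SongWord.objects.bulk_create(
        SongWord(song=instance, word=word, count=count)
        for word, count in frequencies.items()
    )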

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/CAPEfQMQcJ0Na%2BxR4AE0QeuJ5RL4VucmaDpJ84_fg9YKUDqqHAw%40mail.gmail.com.


Combine multiple wasm files

2021-06-19 Thread Mehaboob kk
Hello,

Is it possible to combine multiple .wasm files to one single .wasm file?

Scenario:
I want to share a library (SDK) with an end customer who is building the
.wasm/JS application. The customer is concerned that loading multiple wasm
files is not efficient, so we wanted to combine the two wasm files. Although I
can share a static lib generated using Emscripten, that's not preferred
because a .a file is easier to reverse than wasm, right?

Any inputs on this please?

Thank you

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcCQC45LU%3DNMFev5PYjn%2BmzOOgDfhXx0ahDXq8GqKRVjJQ%40mail.gmail.com.


Re: [Kannada STF-32363] Rajashekhara Sashi of Honnali has shared a link to a method for preparing online quizzes. Those interested may view it.

2021-06-12 Thread kotekalallaiah kk
It was very helpful to me and to my school children, thank you sir

On Tue, Jun 8, 2021, 11:53 AM Basavaraja n d 
wrote:

> https://youtu.be/08CSXXiRdaw
>
> --
> ---
> 1. To add teachers to the subject teachers' forum, fill in this form.
> -
> https://docs.google.com/forms/d/e/1FAIpQLSevqRdFngjbDtOF8YxgeXeL8xF62rdXuLpGJIhK6qzMaJ_Dcw/viewform
> 2. See here for some guidelines to keep in mind when sending email.
> -
> http://karnatakaeducation.org.in/KOER/index.php/ವಿಷಯಶಿಕ್ಷಕರವೇದಿಕೆ_ಸದಸ್ಯರ_ಇಮೇಲ್_ಮಾರ್ಗಸೂಚಿ
> 3. If you have any questions about ICT literacy, visit this page
> -
> http://karnatakaeducation.org.in/KOER/en/index.php/Portal:ICT_Literacy
> 4. Are you using public software? To learn about public software
> -
> http://karnatakaeducation.org.in/KOER/en/index.php/Public_Software
> ---
> ---
> You received this message because you are subscribed to the Google Groups
> "KannadaSTF - ಕನ್ನಡ ಭಾಷಾ ಶಿಕ್ಷಕರ ವೇದಿಕೆ" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kannadastf+unsubscr...@googlegroups.com.
> To view this discussion on the web, visit
> https://groups.google.com/d/msgid/kannadastf/CALc7Aqt%2B3efdBrsJ4zp%3Dg4mRx5PWVzFoU%2BpgLZakuJL0reZMbw%40mail.gmail.com
> 
> .
>

-- 
---
1. To add teachers to the subject teachers' forum, fill in this form.
 
-https://docs.google.com/forms/d/e/1FAIpQLSevqRdFngjbDtOF8YxgeXeL8xF62rdXuLpGJIhK6qzMaJ_Dcw/viewform
2. See here for some guidelines to keep in mind when sending email.
-http://karnatakaeducation.org.in/KOER/index.php/ವಿಷಯಶಿಕ್ಷಕರವೇದಿಕೆ_ಸದಸ್ಯರ_ಇಮೇಲ್_ಮಾರ್ಗಸೂಚಿ
3. If you have any questions about ICT literacy, visit this page -
http://karnatakaeducation.org.in/KOER/en/index.php/Portal:ICT_Literacy
4. Are you using public software? To learn about public software
-http://karnatakaeducation.org.in/KOER/en/index.php/Public_Software
---
--- 
You received this message because you are subscribed to the Google Groups 
"KannadaSTF - ಕನ್ನಡ ಭಾಷಾ  ಶಿಕ್ಷಕರ ವೇದಿಕೆ" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to kannadastf+unsubscr...@googlegroups.com.
To view this discussion on the web, visit 
https://groups.google.com/d/msgid/kannadastf/CAMuheRAynvLWHVx-tkRUft7yR624GEY2vDPgU%3DOvxu_QmMOPyQ%40mail.gmail.com.


Re: [OPSEC] Alvaro Retana's No Objection on draft-ietf-opsec-v6-25: (with COMMENT)

2021-05-11 Thread KK Chittimaneni
Hi Alvaro,

Thank you very much for your detailed review.

Together with my co-authors, we have uploaded revision -27, which should
address all your comments.

The diff is at: https://www.ietf.org/rfcdiff?url2=draft-ietf-opsec-v6-27

Regards,
KK

On Mon, Apr 19, 2021 at 8:27 AM Alvaro Retana 
wrote:

> Enno:
>
> Hi!
>
> I looked at -26.
>
> I still find the applicability statement confusing, for the reasons I
> described in 1.a/1.b (below).  There is a contradiction about whether the
> document applies to residential users (as mentioned in §1.1 and §5) or not
> (as mentioned in the Abstract).  Also, why does the "applicability
> statement especially applies to Section 2.3 and Section 2.5.4” *only*?
>
> This is obviously a non-blocking comment, but I believe it is important
> since the applicability statement may influence who reads and follows the
> recommendations.
>
> Thanks!
>
> Alvaro.
>
> On April 10, 2021 at 2:36:26 PM, Enno Rey (e...@ernw.de) wrote:
>
> Hi Alvaro,
>
> thanks for the detailed evaluation and for the valuable feedback.
>
> I went thru your COMMENTS and performed some related adaptions of the
> draft. A new version has been uploaded.
>
> thank you again & have a great weekend
>
> Enno
>
>
>
>
> On Mon, Apr 05, 2021 at 02:07:53PM -0700, Alvaro Retana via Datatracker
> wrote:
> > Alvaro Retana has entered the following ballot position for
> > draft-ietf-opsec-v6-25: No Objection
> >
> > When responding, please keep the subject line intact and reply to all
> > email addresses included in the To and CC lines. (Feel free to cut this
> > introductory paragraph, however.)
> >
> >
> > Please refer to
> https://www.ietf.org/iesg/statement/discuss-criteria.html
> > for more information about IESG DISCUSS and COMMENT positions.
> >
> >
> > The document, along with other ballot positions, can be found here:
> > https://datatracker.ietf.org/doc/draft-ietf-opsec-v6/
> >
> >
> >
> > --
> > COMMENT:
> > --
> >
> >
> > (1) The applicability statement in §1.1 is confusing to me.
> >
> > a. The Abstract says that "this document are not applicable to
> residential
> > user cases", but that seems not to be true because this section says
> that the
> > contents do apply to "some knowledgeable-home-user-managed residential
> > network[s]", and §5 is specific to residential users.
> >
> > b. "This applicability statement especially applies to Section 2.3 and
> Section
> > 2.5.4." Those two sections represent a small part of the document; what
> about
> > the rest? It makes sense to me for the applicability statement to cover
> most
> > of the document.
> >
> > c. "For example, an exception to the generic recommendations of this
> document
> > is when a residential or enterprise network is multi-homed." I'm not
> sure if
> > this sentence is an example of the previous one (above) or if "for
> example" is
> > out of place.
> >
> > (2) §5 mentions "early 2020" -- I assume that the statement is still
> true now.
> >
> > (3) It caught my attention that there's only one Normative Reference
> (besides
> > rfc8200, of course). Why? What is special about the IPFIX registry?
> >
> > It seems that an argument could be made to the fact that to secure
> OSPFv3, for
> > example, an understanding of the protocol is necessary. This argument
> could be
> > extended to other protocols or mechanisms, including IPv6-specific
> technology:
> > ND, the addressing architecture, etc. Consider the classification of the
> > references in light of [1].
> >
> > [1]
> >
> https://www.ietf.org/about/groups/iesg/statements/normative-informative-references/
> >
> >
> >
>
> --
> Enno Rey
>
> Cell: +49 173 6745902
> Twitter: @Enno_Insinuator
>
>
___
OPSEC mailing list
OPSEC@ietf.org
https://www.ietf.org/mailman/listinfo/opsec


Re: [OPSEC] Roman Danyliw's No Objection on draft-ietf-opsec-v6-26: (with COMMENT)

2021-05-11 Thread KK Chittimaneni
Hi Roman,

Thank you very much for your detailed review.

Together with my co-authors, we have uploaded revision -27, which should
address most of your comments except a few listed below with our rationale:

** Section 2.1.5.  Per “However, in scenarios where anonymity is a strong
desire (protecting user privacy is more important than user attribution),
privacy extension addresses should be used.”, it might be worth
acknowledging
that even if these are managed network, the end user and the operators may
be
at odds on what privacy properties are important.

[authors] We didn't change the text here as we felt that this is a given.

** Section 3.1.  This list is helpful.  Is text here and Section 2.5.4
needed?
For example, does one need to say both “discard _packets_ from bogons” (this
section) and “discard _routes_ from bogons” (Section 2.5.4)

[authors] We kept the text in both sections the rationale being that
packets are dropped at the enterprise edge while routes are ignored by
peering routers (not all enterprises have a DFZ routing)

The diff is at: https://www.ietf.org/rfcdiff?url2=draft-ietf-opsec-v6-27

Regards,
KK

On Tue, Apr 20, 2021 at 7:11 PM Roman Danyliw via Datatracker <
nore...@ietf.org> wrote:

> Roman Danyliw has entered the following ballot position for
> draft-ietf-opsec-v6-26: No Objection
>
> When responding, please keep the subject line intact and reply to all
> email addresses included in the To and CC lines. (Feel free to cut this
> introductory paragraph, however.)
>
>
> Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
> for more information about DISCUSS and COMMENT positions.
>
>
> The document, along with other ballot positions, can be found here:
> https://datatracker.ietf.org/doc/draft-ietf-opsec-v6/
>
>
>
> --
> COMMENT:
> --
>
> ** Section 2.1.5.  Per “However, in scenarios where anonymity is a strong
> desire (protecting user privacy is more important than user attribution),
> privacy extension addresses should be used.”, it might be worth
> acknowledging
> that even if these are managed network, the end user and the operators may
> be
> at odds on what privacy properties are important.
>
> ** Section 2.2.1.  Per “A firewall or edge device should be used to
> enforce the
> recommended order and the maximum occurrences of extension headers”, does
> enforcement mean dropping the packets?
>
> ** Section 2.3.2.  Per “Network operators should be aware that RA-Guard and
> SAVI do not work or could even be harmful in specific network
> configurations”,
> please provide a more details, ideally through citation.
>
> ** Section 2.3.2, “Enabling RA-Guard by default in … enterprise campus
> networks
> …”, what’s the key property of “enterprise campus network”.  The
> introduction
> already roughly says this whole document applies to managed networks.
>
> ** Section 2.5.2.  Reading this section, the specific recommended practices
> weren’t clear.
>
> ** Section 2.6.  It wasn’t clear how comprehensive this list of logs was
> intended to be.  A few additional thoughts include: -- DHCPv6 logs --
> firewall
> ACL logs -- authentication server logs -- NEA-like policy enforcement at
> the
> time of connection -- vpn/remote access logs -- passive DNS from port 53
> traffic -- full packet capture -- active network and service scanning/audit
> information
>
> ** Section 2.6.1.2.  The recommended fields in this section are helpful,
> but
> only in the context of the rest of the five-tuple + timing + interface +
> vlan +
> select layer 4 information for each flow.  These open IPFIX information
> elements aren't mentioned.
>
> ** Section 2.6.2.1.  Per “The forensic use case is when the network
> operator
> must locate an IPv6 address that was present in the network at a certain
> time
> or is currently in the network”, isn’t this use case more precisely an
> attempt
> to link an IP address to (a) a specific port in the case of a wired
> network;
> (b) a access point (or base station, etc.) in the case of wireless; or (c)
> an
> external IP address in the case of a VPN connection?
>
> ** Section 2.6.2.1.  Additional techniques/suggestions to consider:
> -- Using the IPAM system noted in Section 2.1 or any other inventory
> system to
> provide hints in the about where an IP address might belong in the
> topology.
>
> -- A reminder that mapping between and IP+port or MAC+IP might not have
> been
> the same one as during the time of the event of interest
>
> -- there is discussion about identifying subscribers for an ISP but not in
> normal enterprise network scenario through SS

Re: [OPSEC] Zaheduzzaman Sarker's No Objection on draft-ietf-opsec-v6-25: (with COMMENT)

2021-05-11 Thread KK Chittimaneni
Hello Zahed,

Thank you very much for your detailed review.

Together with my co-authors, we have uploaded revision -27, which should
address all your comments.

The diff is at: https://www.ietf.org/rfcdiff?url2=draft-ietf-opsec-v6-27

Regards,
KK

On Wed, Apr 7, 2021 at 3:33 AM Zaheduzzaman Sarker via Datatracker <
nore...@ietf.org> wrote:

> Zaheduzzaman Sarker has entered the following ballot position for
> draft-ietf-opsec-v6-25: No Objection
>
> When responding, please keep the subject line intact and reply to all
> email addresses included in the To and CC lines. (Feel free to cut this
> introductory paragraph, however.)
>
>
> Please refer to https://www.ietf.org/iesg/statement/discuss-criteria.html
> for more information about IESG DISCUSS and COMMENT positions.
>
>
> The document, along with other ballot positions, can be found here:
> https://datatracker.ietf.org/doc/draft-ietf-opsec-v6/
>
>
>
> --
> COMMENT:
> --
>
> I found this document very informative and I learned quite a lot by reading
> this document (I must confess I haven't  read the long list of  referenced
> documents :-)). I think the collected recommendations in one place will be
> very
> helpful.
>
> Some comments -
>
>   *  The abstract says - "The recommendations in this document are not
>   applicable to residential user cases". However, later on in section 1.1
> it
>   says - "This covers Service Provider (SP), enterprise networks and some
>   knowledgeable-home-user-managed residential network." Furthermore in
> section
>   5, it recommends configurations for residential users. Maybe I am not
>   getting the distinction among residential user cases, managed residential
>   network and residential users correct, but I think further clarification
>   is
> needed on what is written in the abstract and what is in the rest of the
>   document.
>
>   * I noted that section 2.3.4 refers to 3GPP 4G terminologies while
> describing
>   the case. If this section is not supposed to restricted to certain
>   generations of 3GPP technologies then I would recommend to update the
> section
>   with 5G terminologies as well.
>
>   * In section 2.6 there is an ask for the network operators to log "of all
>   applications using the network (including user space and kernel space)
> when
>   available (for example web servers)". How realistic is this? I hardly
> see the
>   web servers sharing logging files with network operators ( I would be
> happy
>   to be corrected here ). I am also missing the discussion on -- if not
>   available how much this affects the forensic research in the event of
>   security incident and abnormal behavior.
>
>
>
>
___
OPSEC mailing list
OPSEC@ietf.org
https://www.ietf.org/mailman/listinfo/opsec


Re: Using custom libs .a files

2021-04-28 Thread Mehaboob kk
Hi Thomas,

Thanks for your input, I was referring to the Using Libraries section here
https://emscripten.org/docs/compiling/Building-Projects.html#using-libraries


Thanks,
Mehaboob



On Wed, Apr 14, 2021 at 4:14 PM 'Thomas Lively' via emscripten-discuss <
emscripten-discuss@googlegroups.com> wrote:

> Hi Mehaboob,
>
> No, Emscripten cannot compile native objects or archive files to
> WebAssembly. Perhaps what you saw was a reference to archive files
> containing WebAssembly objects produced by Emscripten?
>
> Thomas
>
> On Wed, Apr 14, 2021 at 12:45 Mehaboob kk  wrote:
>
>> Hello All,
>>
>> I see that Emscripton allows to compile .a files to wasm. In order for
>> this to work the .a file need to be originally compiled using gcc? if yes,
>> any specific version to be compatible?
>>
>> Thanks,
>> Mehaboob
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "emscripten-discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to emscripten-discuss+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/emscripten-discuss/ee46348f-db22-406d-b85d-7dae3aa57e8an%40googlegroups.com
>> <https://groups.google.com/d/msgid/emscripten-discuss/ee46348f-db22-406d-b85d-7dae3aa57e8an%40googlegroups.com?utm_medium=email_source=footer>
>> .
>>
> --
> You received this message because you are subscribed to a topic in the
> Google Groups "emscripten-discuss" group.
> To unsubscribe from this topic, visit
> https://groups.google.com/d/topic/emscripten-discuss/5-IfzGR_ioQ/unsubscribe
> .
> To unsubscribe from this group and all its topics, send an email to
> emscripten-discuss+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/emscripten-discuss/CAJZD_EUz2Mkh3S1%2Bh7iGox2Pk3KZWOvMKOrr8a7sNSoZyG7rnw%40mail.gmail.com
> <https://groups.google.com/d/msgid/emscripten-discuss/CAJZD_EUz2Mkh3S1%2Bh7iGox2Pk3KZWOvMKOrr8a7sNSoZyG7rnw%40mail.gmail.com?utm_medium=email_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcAPy%2BRLggy1p4mk7RRBQLh8WjZcUQ927D3XtFwk_JCuVw%40mail.gmail.com.


Re: WASM reverse engineering protection

2021-04-28 Thread Mehaboob kk
Hello,
Appreciate your help. Thanks for your response.

-Mehaboob

On Wed, Apr 14, 2021 at 8:22 PM J Decker  wrote:

> If code can be run, it can be reverse engineered; just a matter of how
> much work one wants to put into it.
> If the code was compressed, and encrypted, then there'd still be a small
> bit code that has the decrypt, which can be extracted to get the rest, etc.
> Remote checking is a better option; requiring a bit of external code that
> encodes the license in a varying way, for submission to the server, which
> might reply with some small code which enables it to actually function.
> Of course - the really diligent would still just capture the reply and
> bundle it all up statically.
>
> It's just a matter of how much time you want to make them spend... but
> much like a lock that can be opened can be picked, code that runs can be
> 'run' virtually, using just pen and paper.
>
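(A hypothetical sketch of the "remote checking" idea described above -- the
client encodes the license in a varying way, the server verifies it and only
then returns the piece that lets the app run. The endpoint, shared secret and
field names are all made up for illustration; none of this is part of
Emscripten or of the original reply.)

import hmac
import hashlib
import secrets

SHARED_SECRET = b"per-customer-secret"   # provisioned out of band (assumption)

def make_challenge() -> str:
    # Server side: a fresh nonce per request, so a captured reply cannot be replayed.
    return secrets.token_hex(16)

def answer_challenge(challenge: str, license_key: str) -> str:
    # Client side: the small external piece that "encodes the license in a varying way".
    msg = (challenge + license_key).encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()

def verify(challenge: str, license_key: str, answer: str) -> bool:
    # Server side: constant-time comparison before releasing the enabling blob.
    return hmac.compare_digest(answer_challenge(challenge, license_key), answer)

# Example round trip
c = make_challenge()
print(verify(c, "CUSTOMER-123", answer_challenge(c, "CUSTOMER-123")))   # True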
> On Wed, Apr 14, 2021 at 12:43 PM Mehaboob kk  wrote:
>
>> Hello All,
>>
>> Is there any technique to protect the WASM code from reverse engineering?
>> I have a licensed software which need to be protected. I am worried that
>> the reverse engineered code can be modified to bypass the license and
>> compile it back again
>>
>> Any inputs please
>>
>> Thanks,
>> Mehaboob
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "emscripten-discuss" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to emscripten-discuss+unsubscr...@googlegroups.com.
>> To view this discussion on the web visit
>> https://groups.google.com/d/msgid/emscripten-discuss/d9c09212-cad8-40e1-ba4c-88979f8437a2n%40googlegroups.com
>> <https://groups.google.com/d/msgid/emscripten-discuss/d9c09212-cad8-40e1-ba4c-88979f8437a2n%40googlegroups.com?utm_medium=email_source=footer>
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "emscripten-discuss" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to emscripten-discuss+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/emscripten-discuss/CAA2GJqWtWBgix7K3VYT7NiUmRW4R%3DMHKsy570GiX_K7uhCbovA%40mail.gmail.com
> <https://groups.google.com/d/msgid/emscripten-discuss/CAA2GJqWtWBgix7K3VYT7NiUmRW4R%3DMHKsy570GiX_K7uhCbovA%40mail.gmail.com?utm_medium=email_source=footer>
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/CAO5iXcB_S6GvA2H-4DLE%3DbWnvJ4PcLXWbD6ArEV-BxS4-zZ2NQ%40mail.gmail.com.


[kdenlive] [Bug 425960] Rendering files with only one frame results in 16 minutes of black screen

2021-04-28 Thread KK
https://bugs.kde.org/show_bug.cgi?id=425960

--- Comment #11 from KK  ---
Can confirm this bug was fixed in the latest Kdenlive release. Thanks!

-- 
You are receiving this mail because:
You are watching all bug changes.

Dynamic Table Options are removed by the optimizer

2021-04-25 Thread macia kk
Hi

  When using a temporal join I set the dynamic options related to how
partitions are read, but in the end they did not take effect; everything runs
with the default parameters. I printed the execution plans: the options are
present in the logical plan, but after optimization they are gone.
  As shown below, I configured loading the latest partition with a reload every
24 hours, but the runtime logs show that all partitions are loaded and a load
happens every hour, which are the defaults, so I suspect the dynamic options
are not taking effect.


== Abstract Syntax Tree ==
+- LogicalSnapshot(period=[$cor0.proctime])
   +- LogicalTableScan(table=[[ds, my_db, store_da_table,
source: [HiveTableSource(store_id, store_name, merchant_id, tag_id,
brand_id, tob_user_id, is_use_wallet, is_use_merchant_app, longitude,
latitude, state, city, district, address, postal_code, register_phone,
email, email_source, register_time, logo, banner, partner_type,
commission_rate, tax_rate, service_fee, min_spend, delivery_distance,
preparation_time, contact_phone, store_status, closed_start_time,
closed_end_time, effective_closed_end_time, auto_confirmed,
auto_confirmed_enabled, create_time, update_time, rating_total,
rating_score, opening_status, surcharge_intervals, service_charge_fee_rate,
driver_modify_order_enabled, delivery_distance_mode, business_info_added,
mtime, dt, grass_region) TablePath: my_db.store_da_table, PartitionPruned:
false, PartitionNums: null], dynamic options:
{streaming-source.enable=true, streaming-source.monitor-interval=24 h,
streaming-source.partition.include=latest}]])

== Optimized Logical Plan ==
Calc(select=[_UTF-16LE'v4' AS version, _UTF-16LE'ID' AS country, city, id,
event_time, operation, platform, payment_method, gmv, 0.0:DECIMAL(2, 1) AS
gmv_usd], where=[NOT(LIKE(UPPER(store_name), _UTF-16LE'%[TEST]%'))])
+- LookupJoin(table=[ds.my_db.store_da_table],
joinType=[LeftOuterJoin], async=[false], lookup=[store_id=store_id],
select=[city, id, event_time, operation, platform, payment_method, gmv,
store_id, store_id, store_name])
   +- Union(all=[true], union=[city, id, event_time, operation, platform,
payment_method, gmv, store_id])
  :- Calc(select=[delivery_city AS city, id, /(CAST(create_time), 1000)
AS event_time, CASE(OR(=(order_status, 440), =(order_status, 800)),
_UTF-16LE'NET':VARCHAR(5) CHARACTER SET "UTF-16LE",
_UTF-16LE'GROSS':VARCHAR(5) CHARACTER SET "UTF-16LE") AS operation,
_UTF-16LE'' AS platform, payment_method, /(CAST(total_amount), 10)
AS gmv, CAST(store_id) AS store_id])
  :  +- DataStreamScan(table=[[ds, keystats,
main_db__transaction_tab]], fields=[id, delivery_city, store_id,
create_time, payment_time, order_status, payment_method, total_amount,
proctime], reuse_id=[1])
  +- Calc(select=[delivery_city AS city, id, /(CAST(payment_time),
1000) AS event_time, _UTF-16LE'NET':VARCHAR(5) CHARACTER SET "UTF-16LE" AS
operation, _UTF-16LE'AIRPAY' AS platform, payment_method,
/(CAST(total_amount), 10) AS gmv, CAST(store_id) AS store_id],
where=[OR(=(order_status, 440), =(order_status, 800))])
 +- Reused(reference_id=[1])


Using custom libs .a files

2021-04-14 Thread Mehaboob kk
Hello All,

I see that Emscripten allows compiling .a files to wasm. For this to work,
does the .a file need to be originally compiled using gcc? If yes, is any
specific version required for compatibility?

Thanks,
Mehaboob

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/ee46348f-db22-406d-b85d-7dae3aa57e8an%40googlegroups.com.


WASM reverse engineering protection

2021-04-14 Thread Mehaboob kk
Hello All,

Is there any technique to protect the WASM code from reverse engineering? I
have licensed software which needs to be protected. I am worried that the
reverse-engineered code could be modified to bypass the license and compiled
back again.

Any inputs please

Thanks,
Mehaboob

-- 
You received this message because you are subscribed to the Google Groups 
"emscripten-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to emscripten-discuss+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/emscripten-discuss/d9c09212-cad8-40e1-ba4c-88979f8437a2n%40googlegroups.com.


Unsubscribe

2021-03-31 Thread kk wi
Unsubscribe


Flink Temporal Join Two union Hive Table Error

2021-03-15 Thread macia kk
Hi, could someone please help take a look at this problem:


Create the view promotionTable:

SELECT *, 'CN' as country, id as pid
FROM promotion_cn_rule_tab
UNION
SELECT *, 'JP' as country, id as pid
FROM promotion_jp_rule_tab


Flink SQL Query:

SELECT t1.country, t1.promotionId, t1.orderId,
CASE WHEN t2.pid IS NULL THEN 'Rebate'
ELSE 'Rebate'
END AS rebate
FROM eventTable AS t1
LEFT JOIN promotionTable
/*+ OPTIONS('streaming-source.enable' = 'false',
'streaming-source.partition.include' = 'all',
'lookup.join.cache.ttl' = '5 m') */
FOR SYSTEM_TIME AS OF t1.procTime AS t2
ON t1.promotionId = t2.pid
AND t1.country = t2.country


If I remove the union of the Hive tables and keep only one country's Hive table, it runs successfully, but if I union the two tables I get the error:

Caused by: org.apache.flink.table.api.ValidationException: Currently the
join key in Temporal Table Join can not be empty


Re: Reply: How to use a session window in pyflink to aggregate data with the same pv

2021-03-08 Thread kk
I tested the sliding window before and it can be used. It just cannot be used with a session window; group windowed tables do not support it.



--
Sent from: http://apache-flink.147419.n8.nabble.com/


How to use a session window in pyflink to aggregate data with the same pv

2021-03-08 Thread kk
hi, all:
Consecutive operations by one account within a period of time count as one pv;
once the gap exceeds a threshold a new pv is recorded. The system needs to
consume streaming logs and use them to compute various real-time metrics. But
when we use a session window we cannot use a UDAF (user-defined aggregate
function) to aggregate and count the logs belonging to the same pv.
Hoping someone knowledgeable can offer some advice. Thanks!!!

session_window = Session.with_gap("60.second").on("pv_time").alias("w")
t_env.from_path('source') \
.window(session_window) \
.group_by("w,pv_id") \
.select("pv_id,get_act(act)").insert_into("sink")


 



--
Sent from: http://apache-flink.147419.n8.nabble.com/


Re: Flink SQL temporal table join with Hive error

2021-02-10 Thread macia kk
Hi, Leonard

 Our business is becoming more and more complex, so joining against Hive
dimension tables is now very common for us. There are three kinds of dimension
tables:

 1. The dimension table has no partitions and no primary key.

  In this case `'streaming-source.partition.include' = 'latest'`; since there
are no partitions, `latest` should load all of the data.

 2. The dimension table has partitions, each partition only contains that
day's data, and there is no primary key.

  In this case, because the join needs all of the data, we still have to set
'streaming-source.partition.include' = 'all', but because there is no primary
key it cannot run.

 3. The dimension table has partitions, each partition contains the full data
set, and there is no primary key.

  In this case we can set 'streaming-source.partition.include' = 'latest';
this is the example on the official website and it tested fine.


My problem is the second case: the Hive dimension table is not maintained by
me and is used by many people, so I cannot modify it to add a primary key, and
the join cannot be done.




--- BTW ---

Also, I see the documentation currently does not support event time joins; the
exchange-rate example on the official website joins on processing time, but
when backfilling yesterday's data this is actually a problem.

I see FLIP-132
<https://cwiki.apache.org/confluence/display/FLINK/FLIP-132+Temporal+Table+DDL+and+Temporal+Table+Join>
mentions Event Time semantics. Will this be supported in the future?



Leonard Xu wrote on Wed, Feb 10, 2021 at 11:36 AM:

> Hi,  macia
>
> > On Feb 9, 2021, at 10:40, macia kk wrote:
> >
> > SELECT *FROM
> >(
> >SELECT  tt.*
> >FROM
> >input_tabe_01 tt
> >FULL OUTER JOIN input_tabe_02 mt
> >ON (mt.transaction_sn = tt.reference_id)
> >and tt.create_time >= mt.create_time + INTERVAL '5' MINUTES
> >and tt.create_time <= mt.create_time - INTERVAL '5' MINUTES
> >WHERE COALESCE(tt.create_time, mt.create_time) is not NULL
> >) lt
> >LEFT JOIN exchange_rate ex
> >/*+
> OPTIONS('streaming-source.enable'='true','streaming-source.partition.include'
> > = 'all') */
> >FOR SYSTEM_TIME AS OF lt.event_time ex ON DATE_FORMAT
> > (lt.event_time, '-MM-dd') = cast(ex.date_id as String)
>
>
> I could not reproduce the exception you mentioned locally; could you paste
> the exception stack directly?
>
> Also, I see you wrote lt.event_time. Is this SQL meant to do a temporal join
> against a versioned table? Currently Hive does not support declaring a
> versioned table; only the latest partition or the whole Hive table can be
> used as the dimension table. You can refer to [1] for the options of the two
> kinds of dimension tables.
>
> Best regards,
> Leonard
> [1]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/connectors/hive/hive_read_write.html#temporal-join-the-latest-table
>
>
>
>


Re: Flink SQL temporal table join with Hive error

2021-02-08 Thread macia kk
SELECT *FROM
(
SELECT  tt.*
FROM
input_tabe_01 tt
FULL OUTER JOIN input_tabe_02 mt
ON (mt.transaction_sn = tt.reference_id)
and tt.create_time >= mt.create_time + INTERVAL '5' MINUTES
and tt.create_time <= mt.create_time - INTERVAL '5' MINUTES
WHERE COALESCE(tt.create_time, mt.create_time) is not NULL
) lt
LEFT JOIN exchange_rate ex
/*+ 
OPTIONS('streaming-source.enable'='true','streaming-source.partition.include'
= 'all') */
FOR SYSTEM_TIME AS OF lt.event_time ex ON DATE_FORMAT
(lt.event_time, '-MM-dd') = cast(ex.date_id as String)


Rui Li wrote on Tue, Feb 9, 2021 at 10:20 AM:

> Hi,
>
> So how is the join statement written?
>
> On Mon, Feb 8, 2021 at 2:45 PM macia kk  wrote:
>
> > The image is exactly that error.
> >
> > The table creation statement is below; it is a shared table, and I do not have permission to change it.
> >
> > CREATE EXTERNAL TABLE `exchange_rate`(`grass_region` string COMMENT
> > 'country', `currency` string COMMENT 'currency', `exchange_rate`
> > decimal(25,10) COMMENT 'exchange rate')
> > PARTITIONED BY (`grass_date` date COMMENT 'partition key, -MM-dd')
> > ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io
> > .parquet.serde.ParquetHiveSerDe'
> > WITH SERDEPROPERTIES (
> >   'serialization.format' = '1'
> > )
> >
> >
> > Rui Li wrote on Mon, Feb 8, 2021 at 2:17 PM:
> >
> > > Hi, the image did not come through; could you paste the Hive table DDL and the join statement?
> > >
> > > On Mon, Feb 8, 2021 at 10:33 AM macia kk  wrote:
> > >
> > > > Currently the join key in Temporal Table Join can not be empty.
> > > >
> > > > My Hive table join DDL does not set "is not null", but all the values are present, and this error is still reported.
> > > >
> > > > [image: image.png]
> > > >
> > >
> > >
> > > --
> > > Best regards!
> > > Rui Li
> > >
> >
>
>
> --
> Best regards!
> Rui Li
>


Re: Flink SQL temporal table join with Hive error

2021-02-07 Thread macia kk
The image is exactly that error.

The table creation statement is below; it is a shared table, and I do not have
permission to change it.

CREATE EXTERNAL TABLE `exchange_rate`(`grass_region` string COMMENT
'country', `currency` string COMMENT 'currency', `exchange_rate`
decimal(25,10) COMMENT 'exchange rate')
PARTITIONED BY (`grass_date` date COMMENT 'partition key, -MM-dd')
ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = '1'
)


Rui Li wrote on Mon, Feb 8, 2021 at 2:17 PM:

> Hi, the image did not come through; could you paste the Hive table DDL and the join statement?
>
> On Mon, Feb 8, 2021 at 10:33 AM macia kk  wrote:
>
> > Currently the join key in Temporal Table Join can not be empty.
> >
> > My Hive table join DDL does not set "is not null", but all the values are present, and this error is still reported.
> >
> > [image: image.png]
> >
>
>
> --
> Best regards!
> Rui Li
>


Flink SQL temporal table join with Hive error

2021-02-07 Thread macia kk
Currently the join key in Temporal Table Join can not be empty.

My Hive table join DDL does not set "is not null", but all the values are present, and this error is still reported.

[image: image.png]


Flink SQL Hive: the result of using partition.include is inconsistent with the documentation

2021-02-04 Thread macia kk
Flink 1.12.1

streaming-source.partition.include: Option to set the partitions to read. The
supported options are `all` and `latest`; `all` means read all partitions,
`latest` means read the latest partition in the order of
'streaming-source.partition.order'. The `latest` option only works when the
streaming hive source table is used as a temporal table. By default the option
is `all`.


The error:
Flink SQL> SELECT * FROM exrate_table /*+
OPTIONS('streaming-source.enable'='true','streaming-source.partition.include'
= 'latest') */; [ERROR] Could not execute SQL statement. Reason:
java.lang.IllegalArgumentException: The only supported
'streaming-source.partition.include' is 'all' in hive table scan, but is
'latest'


How to set a TTL when Flink reads a Hive table

2021-01-27 Thread macia kk
The documentation sets lookup.join.cache.ttl at CREATE TABLE time.

But I now want to join streaming Kafka data against a Hive table that already
exists; how do I set the TTL?

CREATE TABLE dimension_table (
  product_id STRING,
  product_name STRING,
  unit_price DECIMAL(10, 4),
  pv_count BIGINT,
  like_count BIGINT,
  comment_count BIGINT,
  update_time TIMESTAMP(3),
  update_user STRING,
  ...) TBLPROPERTIES (
  'streaming-source.enable' = 'false',   -- option with
default value, can be ignored.
  'streaming-source.partition.include' = 'all',  -- option with
default value, can be ignored.
  'lookup.join.cache.ttl' = '12 h');
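(One pattern that appears elsewhere in this archive is passing
'lookup.join.cache.ttl' as a per-query dynamic option inside an
/*+ OPTIONS(...) */ hint instead of in TBLPROPERTIES, which avoids touching the
existing Hive table. The sketch below only restates that hint syntax in
pyflink; the table names, columns and the 5-minute TTL are assumptions, and
whether the hint is honored depends on the Flink version.)

from pyflink.table import EnvironmentSettings, TableEnvironment

# Sketch only: kafka_orders / my_hive_db.dimension_table and their columns are
# assumed names, and '5 m' is just an example TTL value.
t_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

t_env.execute_sql("""
    SELECT o.order_id, o.amount, d.product_name
    FROM kafka_orders AS o
    JOIN my_hive_db.dimension_table
      /*+ OPTIONS('streaming-source.enable' = 'false',
                  'streaming-source.partition.include' = 'all',
                  'lookup.join.cache.ttl' = '5 m') */
      FOR SYSTEM_TIME AS OF o.proc_time AS d
    ON o.product_id = d.product_id
""")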


Scala REPL in YARN mode reports NoSuchMethodError setPrintSpaceAfterFullCompletion

2021-01-26 Thread macia kk
 bin/start-scala-shell.sh  yarn


scala> Exception in thread "main" java.lang.NoSuchMethodError:
jline.console.completer.CandidateListCompletionHandler.setPrintSpaceAfterFullCompletion(Z)V
at
scala.tools.nsc.interpreter.jline.JLineConsoleReader.initCompletion(JLineReader.scala:139)
at
scala.tools.nsc.interpreter.jline.InteractiveReader.postInit(JLineReader.scala:54)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$25.apply(ILoop.scala:899)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$25.apply(ILoop.scala:897)
at
scala.tools.nsc.interpreter.SplashReader.postInit(InteractiveReader.scala:130)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply$mcV$sp(ILoop.scala:926)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply(ILoop.scala:908)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$scala$tools$nsc$interpreter$ILoop$$anonfun$$loopPostInit$1$1.apply(ILoop.scala:908)
at scala.tools.nsc.interpreter.ILoop$$anonfun$mumly$1.apply(ILoop.scala:189)
at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:221)
at scala.tools.nsc.interpreter.ILoop.mumly(ILoop.scala:186)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$startup$1$1.apply(ILoop.scala:979)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:990)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:891)
at
scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:891)
at
scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:891)
at org.apache.flink.api.scala.FlinkShell$.startShell(FlinkShell.scala:184)
at org.apache.flink.api.scala.FlinkShell$.main(FlinkShell.scala:131)
at org.apache.flink.api.scala.FlinkShell.main(FlinkShell.scala)
Exception in thread "Thread-2" java.lang.InterruptedException
at java.util.concurrent.SynchronousQueue.put(SynchronousQueue.java:879)
at scala.tools.nsc.interpreter.SplashLoop.run(InteractiveReader.scala:77)
at java.lang.Thread.run(Thread.java:748)


Re: Ongoing project

2020-12-17 Thread RaviKiran Kk
I am interested

On Thu, Dec 17, 2020, 17:13 Peter Kirieny  wrote:

> Hello team
> I have a project in django
> (developing an ecommerce website with some innovations)
>
> Using pycharm and python, am looking for a partner here
>
> Am a Kenyan, in Nairobi
>
> If interested please inbox for more information
>
> --
> You received this message because you are subscribed to the Google Groups
> "Django users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to django-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/django-users/CAL8t8eovVqpPJGfTAE9Q_%3DuPdazu3xxF-79CQxmcf7MNNAL6YA%40mail.gmail.com
> 
> .
>

-- 
You received this message because you are subscribed to the Google Groups 
"Django users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to django-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/django-users/CAPEfQMQMa-CbstmH431zfGykCoH92C2_CoADnHteDPj%3DMtf6JA%40mail.gmail.com.


Re: Question about stream-stream Interval Join

2020-12-10 Thread macia kk
Which version of Flink are you using?
-
1.11.2

If your watermark is consistently 8 hours fast, it looks like a time zone
problem rather than a data problem.
So how is your binlog read in? A custom format?
-
ts is just the epoch timestamp

bsTableEnv.executeSql("""
  CREATE TABLE input_database (
`table` STRING,
`database` STRING,
`data` ROW(
  reference_id STRING,
  transaction_sn STRING,
  transaction_type BIGINT,
  merchant_id BIGINT,
  transaction_id BIGINT,
  status BIGINT
 ),
ts BIGINT,
event_time AS TO_TIMESTAMP(FROM_UNIXTIME(ts)),
WATERMARK FOR event_time AS event_time - INTERVAL '10' HOUR
 ) WITH (
   'connector.type' = 'kafka',
   'connector.version' = '0.11',
   'connector.topic' = 'mytopic',
   'connector.properties.bootstrap.servers' = '',
   'format.type' = 'json'
 )
)



```



Benchao Li  于2020年12月10日周四 下午6:14写道:

> 你用的是哪个版本的Flink呢?
>
> 看起来你的watermark固定快8个小时的话,应该是时区问题,而不是数据问题。
> 所以你的binlog是怎么读进来的呢?自定义的format?
>
> macia kk  于2020年12月10日周四 上午1:06写道:
>
> > 我刚才跑了很多任务,设置不同的 maxOutOfOrderness WATERMARK FOR event_time AS event_time
> -
> > INTERVAL 'x' HOUR
> >
> >  发现一个很奇怪的问题 ,按理说 watermark = currentMaxTimestamp - maxOutOfOrderness
> >
> > 但是 我通过 页面上的 watermark 时间,和我设置 maxOutOfOrderness x,
> > 能够反推出来数据的 currentMaxTimestamp
> >
> > currentMaxTimestamp = watermark + maxOutOfOrderness
> >
> > 但是我无论设置多少的 maxOutOfOrderness, 反推出来的 currentMaxTimestamp 比现在此时此刻的时间快
> > 8个小时,也就是说 currentMaxTimestamp 在未来后的 8个小时,这个数字一直是固定的8。
> >
> >
> > 但是,我进行 Join, 直接输出任意一张表,得到的 evet time 都是对的,比如现在 00:55
> >
> >
> {"table":"transaction_tab_0122","database":"main_db","transaction_type":1,"transaction_id":11,"reference_id":"11","transaction_sn":"1","merchant_id":1,"status":1,"event_time":"
> > *2020-12-10T01:02:24Z*"}
> >
> > UI 上显示的 watermark 是 1607555031000(Your time zone: 2020年12月10日星期四早上7点02分
> > GMT+08:00)
> >
> > 这个 watermark 是未来的时间 
> >
> >
> >
> >
> >
> > macia kk  于2020年12月9日周三 下午11:36写道:
> >
> > > 感谢 一旦 和 Benchao
> > >
> > >   1. 如果是我的 watermark 设置过长,导致无法输出的话,是有点疑问的,因为我可以确定的是,一定会有 Join 上的数据,但是我
> > Job
> > > 跑了几天也没有一条输出。我还试了如下的SQL,自己 Join 自己,所以理论肯定是直接输出的,实际也是一条也没有。
> > >
> > > val result = bsTableEnv.sqlQuery("""
> > >SELECT *
> > >FROM (
> > >   SELECT t1.`table`, t1.`database`, t1.transaction_type,
> > t1.transaction_id,
> > > t1.reference_id, t1.transaction_sn, t1.merchant_id,
> > t1.status, t1.event_time
> > >   FROM main_db as t1
> > >   LEFT JOIN main_db as t2
> > >   ON t1.reference_id = t2.reference_id
> > >   WHERE t1.event_time >= t2.event_time + INTERVAL '5' MINUTES
> > >AND t1.event_time <= t2.event_time - INTERVAL '5' MINUTES
> > >)
> > >   """.stripMargin)
> > >
> > > 2. 一旦提到的 watermark 传递的问题,我可以确认的是,会传递下去,这可以在 UI 上看到
> > >
> > > 3. 这个底层的watermark只会取当前source subtask见到的最大的watermark 作为这个source
> > > subtask的watermark。
> > > ---
> > > 这里应该是使用 source subtask 最小的 watermark 传递过去,因为我可以看到的是,我的 watermark
> > > 永远和现在相差8个小时,所以怀疑是有一张表,总是会迟8个小时才会有 BinLog.
> > >
> > > 4. Flink SQL 有没有方法在定义 schema 的时候,如果一个字段不存在,就是 null,我现在想换另外一个时间字段作为
> event
> > > time,但是有的表又没有这个字段,会导致解析的时候直接报错.
> > >
> > > 5. 我能不能不在 input_table 上注册 water mark,在 filter 出两张表后,再把 watermark
> > > 加载两张表上,这样可以避免因为别的表,导致 watermark 停止不前,混乱的行为.
> > >
> > >
> > > Thanks and best regards
> > >
> > >
> > > Benchao Li  于2020年12月9日周三 上午10:24写道:
> > >
> > >> Hi macia,
> > >>
> > >> 一旦回答的基本比较完整了。
> > >> watermark影响的主要是left join没有join到的情况下,+(left, null)这样的数据输出的时机。
> > >> 如果是两侧都有数据,watermark不前进,也都可以正常输出。
> > >>
> > >> 关于watermark,如果你的事件时间忽高忽低,这个底层的watermark只会取当前source
> > subtask见到的最大的watermark
> > >> 作为这个source subtask的watermark。但是你的watermark计算逻辑本身就是事件时间delay
> > 10个小时,这个已经会导致
> > >> 你的没有join到的数据下发会延迟很多了。
> > >>
> > >> 你也可以尝试下用处理时间来做一下interval join,看看能不能达到预期。
> > >>
> > >> 赵一旦  于2020年12月9日周三 上午10:15写道:
> > >>
> > >> > 重点是watermark是否推进了,如果不推进,left join也无法知道什么时候右边就没数据了,可以仅输出左边数据。
> > >> >
> > >> >
> 

Re: Question about stream-stream Interval Join

2020-12-09 Thread macia kk
I just ran a number of jobs with different maxOutOfOrderness settings:
WATERMARK FOR event_time AS event_time - INTERVAL 'x' HOUR

I found a very strange problem. In principle, watermark = currentMaxTimestamp
- maxOutOfOrderness.

From the watermark time shown on the web UI and the maxOutOfOrderness x that I
set, I can work backwards to the data's currentMaxTimestamp:

currentMaxTimestamp = watermark + maxOutOfOrderness

But no matter what maxOutOfOrderness I set, the derived currentMaxTimestamp is
8 hours ahead of the current wall-clock time; in other words, currentMaxTimestamp
is 8 hours in the future, and that number is always a fixed 8.


However, when I do the join and simply output either table directly, the event
time I get is correct. For example, right now it is 00:55:
{"table":"transaction_tab_0122","database":"main_db","transaction_type":1,"transaction_id":11,"reference_id":"11","transaction_sn":"1","merchant_id":1,"status":1,"event_time":"
*2020-12-10T01:02:24Z*"}

The watermark shown on the UI is 1607555031000 (Your time zone: Thursday,
December 10, 2020, 7:02 AM GMT+08:00)

This watermark is a time in the future
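(A small Python sketch of the arithmetic above -- the epoch value is just an
example, and the +08:00 zone is an assumption based on the GMT+08:00 shown on
the UI. It only illustrates how rendering a UTC epoch in local time and then
treating that string as UTC produces a constant 8-hours-in-the-future offset,
plus the watermark relation used to work backwards to currentMaxTimestamp.)

from datetime import datetime, timedelta, timezone

ts = 1607555031                                   # example epoch, in seconds
utc_instant = datetime.fromtimestamp(ts, tz=timezone.utc)

# FROM_UNIXTIME renders the epoch in the session/local time zone; if that local
# string is then interpreted as if it were UTC, the event time looks 8h ahead.
local_tz = timezone(timedelta(hours=8))
local_string = datetime.fromtimestamp(ts, tz=local_tz).strftime('%Y-%m-%d %H:%M:%S')
misread_as_utc = datetime.strptime(local_string, '%Y-%m-%d %H:%M:%S').replace(tzinfo=timezone.utc)

print(misread_as_utc - utc_instant)               # 8:00:00 -> the fixed offset

# Relation used above to reverse-engineer currentMaxTimestamp from the UI value:
max_out_of_orderness = timedelta(hours=10)
watermark = misread_as_utc - max_out_of_orderness
assert misread_as_utc == watermark + max_out_of_orderness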





macia kk  于2020年12月9日周三 下午11:36写道:

> 感谢 一旦 和 Benchao
>
>   1. 如果是我的 watermark 设置过长,导致无法输出的话,是有点疑问的,因为我可以确定的是,一定会有 Join 上的数据,但是我 Job
> 跑了几天也没有一条输出。我还试了如下的SQL,自己 Join 自己,所以理论肯定是直接输出的,实际也是一条也没有。
>
> val result = bsTableEnv.sqlQuery("""
>SELECT *
>FROM (
>   SELECT t1.`table`, t1.`database`, t1.transaction_type, 
> t1.transaction_id,
> t1.reference_id, t1.transaction_sn, t1.merchant_id, t1.status, 
> t1.event_time
>   FROM main_db as t1
>   LEFT JOIN main_db as t2
>   ON t1.reference_id = t2.reference_id
>   WHERE t1.event_time >= t2.event_time + INTERVAL '5' MINUTES
>AND t1.event_time <= t2.event_time - INTERVAL '5' MINUTES
>)
>   """.stripMargin)
>
> 2. 一旦提到的 watermark 传递的问题,我可以确认的是,会传递下去,这可以在 UI 上看到
>
> 3. 这个底层的watermark只会取当前source subtask见到的最大的watermark 作为这个source
> subtask的watermark。
> ---
> 这里应该是使用 source subtask 最小的 watermark 传递过去,因为我可以看到的是,我的 watermark
> 永远和现在相差8个小时,所以怀疑是有一张表,总是会迟8个小时才会有 BinLog.
>
> 4. Flink SQL 有没有方法在定义 schema 的时候,如果一个字段不存在,就是 null,我现在想换另外一个时间字段作为 event
> time,但是有的表又没有这个字段,会导致解析的时候直接报错.
>
> 5. 我能不能不在 input_table 上注册 water mark,在 filter 出两张表后,再把 watermark
> 加载两张表上,这样可以避免因为别的表,导致 watermark 停止不前,混乱的行为.
>
>
> Thanks and best regards
>
>
> Benchao Li  于2020年12月9日周三 上午10:24写道:
>
>> Hi macia,
>>
>> 一旦回答的基本比较完整了。
>> watermark影响的主要是left join没有join到的情况下,+(left, null)这样的数据输出的时机。
>> 如果是两侧都有数据,watermark不前进,也都可以正常输出。
>>
>> 关于watermark,如果你的事件时间忽高忽低,这个底层的watermark只会取当前source subtask见到的最大的watermark
>> 作为这个source subtask的watermark。但是你的watermark计算逻辑本身就是事件时间delay 10个小时,这个已经会导致
>> 你的没有join到的数据下发会延迟很多了。
>>
>> 你也可以尝试下用处理时间来做一下interval join,看看能不能达到预期。
>>
>> 赵一旦  于2020年12月9日周三 上午10:15写道:
>>
>> > 重点是watermark是否推进了,如果不推进,left join也无法知道什么时候右边就没数据了,可以仅输出左边数据。
>> >
>> >
>> >
>> (1)你这个的话我看到一个问题,就是watermark你定义10小时的maxOutOfOrderness,确定这么长嘛要,这么大的maxOutOfOrderness,会导致join到的则会及时输出,join不到的需要等10小时才能输出“仅左边”数据,即left
>> > join。
>> >
>> > (2)此外,还有一个点,这个我也不确认。如果是datastream
>> > api,watermark是可以正常传播的,不清楚flinkSQL情况是否能这么传播。
>> >
>> >
>> input_database中定义了watermark,从input_database到2个filter后的表不清楚是否还存在watermark(我感觉是存在的),只要存在那就没问题,唯一需要注意的是第1点。
>> >
>> > macia kk  于2020年12月9日周三 上午1:17写道:
>> >
>> > > @Benchao Li   感谢回复,这个问题困扰我半年了,导致我一直不能迁移到
>> > > FLink,可能我的Case 太特殊了.
>> > >
>> > > 我 input topic 和 schema 如果下,但是要注意的是,这个 topic 里包含了两个 MySQL DB 的
>> Binlog,我需要
>> > > filter 出来 main_db__tansaction_tab, merchant_db__transaction_tab, 两个 DB
>> > > 中的两个表。所以这里的字段我定义的是 两张表的字段的并集.
>> > >
>> > > 还要注意的是 even time 是 create_time, 这里问题非常大:
>> > >  1. 很多表都有 create time,所以会导致很多不用的表也能解析出来 watermark, 导致混乱
>> > >  2. Binlog 是 change log, 所以历史数据会不断更新,会导致有很多旧的 create time进来,可能会影响
>> > watermark
>> > > forward on.
>> > >
>> > > bsTableEnv.executeSql("""
>> > >   CREATE TABLE input_database (
>> > > `table` STRING,
>> > > `database` STRING,
>> > > `data` ROW(
>> > >   reference_id STRING,
>> > >   transaction_sn STRING,
>> > >   transaction_type BIGINT,
>> > >   merchant_id BIGINT,
>> > >   transaction_id BIGINT,
>> > >   status BIGINT
>> > >  ),
>> > > ts BIGINT,
>> > > event_time AS TO_TIMESTAMP(FROM_UNIXTIME(create_time)),
>> > > WATERMARK FOR event_time

Re: Question about stream-stream Interval Join

2020-12-09 Thread macia kk
Thanks Yidan and Benchao

  1. If my watermark being set too long is what prevents output, I have some
doubts, because I am certain there is data that will join, yet my job has run
for several days without a single output record. I also tried the SQL below,
joining the table with itself, so in theory it should definitely output
directly, but in practice there was not a single record either.

val result = bsTableEnv.sqlQuery("""
   SELECT *
   FROM (
  SELECT t1.`table`, t1.`database`, t1.transaction_type,
t1.transaction_id,
t1.reference_id, t1.transaction_sn, t1.merchant_id,
t1.status, t1.event_time
  FROM main_db as t1
  LEFT JOIN main_db as t2
  ON t1.reference_id = t2.reference_id
  WHERE t1.event_time >= t2.event_time + INTERVAL '5' MINUTES
   AND t1.event_time <= t2.event_time - INTERVAL '5' MINUTES
   )
  """.stripMargin)

2. Regarding the watermark propagation issue Yidan raised, I can confirm that
it is propagated; this can be seen on the UI.

3. "The underlying watermark only takes the largest watermark the current
source subtask has seen as that source subtask's watermark."
---
It should be the smallest watermark across the source subtasks that is
propagated, because what I can see is that my watermark always differs from
the current time by 8 hours, so I suspect there is one table whose binlog
always arrives 8 hours late.

4. Does Flink SQL have a way, when defining the schema, to treat a field as
null if it does not exist? I want to switch to another time field as the event
time, but some tables do not have that field, which makes parsing fail outright.

5. Can I avoid registering the watermark on input_table and instead, after
filtering out the two tables, attach the watermark to those two tables? That
would avoid other tables causing the watermark to stall or behave erratically.


Thanks and best regards


Benchao Li wrote on Wed, Dec 9, 2020 at 10:24 AM:

> Hi macia,
>
> Yidan's answer is already fairly complete.
> The watermark mainly affects when +(left, null) records are emitted for left
> join rows that found no match.
> If both sides have data, output works fine even if the watermark does not
> advance.
>
> Regarding the watermark: if your event times jump up and down, the underlying
> watermark only takes the largest watermark the current source subtask has
> seen as that subtask's watermark. But your watermark logic itself already
> delays event time by 10 hours, which alone will make the emission of your
> non-joined data very late.
>
> You could also try the interval join with processing time and see whether it
> meets your expectation.
>
> Zhao Yidan wrote on Wed, Dec 9, 2020 at 10:15 AM:
>
> > The key point is whether the watermark advances. If it does not, the left
> > join has no way to know when the right side has no more data so that it
> > can emit the left-side-only rows.
> >
> > (1) One problem I see here: you define a maxOutOfOrderness of 10 hours for
> > the watermark. Are you sure it needs to be that long? With such a large
> > maxOutOfOrderness, rows that do join are output promptly, but rows that do
> > not join have to wait 10 hours before the "left side only" data, i.e. the
> > left join output, can be emitted.
> >
> > (2) Besides that, there is another point I am not sure about. With the
> > DataStream API the watermark propagates normally; I am not sure whether it
> > propagates the same way in Flink SQL. A watermark is defined on
> > input_database; whether the two filtered tables derived from input_database
> > still carry the watermark I am not sure (I feel they do). As long as it
> > does, there is no problem; the only thing to watch out for is point (1).
> >
> > macia kk  于2020年12月9日周三 上午1:17写道:
> >
> > > @Benchao Li   感谢回复,这个问题困扰我半年了,导致我一直不能迁移到
> > > FLink,可能我的Case 太特殊了.
> > >
> > > 我 input topic 和 schema 如果下,但是要注意的是,这个 topic 里包含了两个 MySQL DB 的
> Binlog,我需要
> > > filter 出来 main_db__tansaction_tab, merchant_db__transaction_tab, 两个 DB
> > > 中的两个表。所以这里的字段我定义的是 两张表的字段的并集.
> > >
> > > 还要注意的是 even time 是 create_time, 这里问题非常大:
> > >  1. 很多表都有 create time,所以会导致很多不用的表也能解析出来 watermark, 导致混乱
> > >  2. Binlog 是 change log, 所以历史数据会不断更新,会导致有很多旧的 create time进来,可能会影响
> > watermark
> > > forward on.
> > >
> > > bsTableEnv.executeSql("""
> > >   CREATE TABLE input_database (
> > > `table` STRING,
> > > `database` STRING,
> > > `data` ROW(
> > >   reference_id STRING,
> > >   transaction_sn STRING,
> > >   transaction_type BIGINT,
> > >   merchant_id BIGINT,
> > >   transaction_id BIGINT,
> > >   status BIGINT
> > >  ),
> > > ts BIGINT,
> > > event_time AS TO_TIMESTAMP(FROM_UNIXTIME(create_time)),
> > > WATERMARK FOR event_time AS event_time - INTERVAL '10' HOUR
> > >  ) WITH (
> > >'connector.type' = 'kafka',
> > >'connector.version' = '0.11',
> > >'connector.topic' = 'mytopic',
> > >'connector.properties.bootstrap.servers' = '',
> > >'format.type' = 'json'
> > >  )
> > > """)
> > >
> > >
> > > 分别 filter 出来 两张表,进行 interval Join,这个是一直没有输出的,我两张表输出试过,没有任何问题。
> > >
> > > val main_db = bsTableEnv.sqlQuery("""
> > >   | SELECT *
> > >   | FROM input_database
> > >   | WHERE `database` = 'main_db'
> > >   |  AND `table` LIKE 'transaction_tab%'
> > >   | """.stripMargin)
> > >
> > > val merchant_db = bsTableEnv.sqlQuery("""
> > >   | SELECT *
> > >   | FROM input_database
> > >   | WHERE `database` = 'merchant_db'
> > >   |   AND `table` LIKE 'transaction_tab%'
> > >   | """.stripMargin)
> > >
> > > bsTableEnv.createTemporaryView("main_db", main_db)
> > > bsTableEnv.createTemporaryView("merchant_db", merchant_db)
> > >
> > > val result = bsT

Re: Question about stream-stream Interval Join

2020-12-08 Thread macia kk
@Benchao Li   Thanks for the reply. This problem has troubled me for half a
year and has kept me from migrating to Flink; maybe my case is just too special.

My input topic and schema are as follows. Note that this topic contains the
binlog of two MySQL DBs, and I need to filter out main_db__tansaction_tab and
merchant_db__transaction_tab, two tables from the two DBs. So the fields I
define here are the union of the fields of the two tables.

Also note that the event time is create_time, which is a very big problem here:
 1. Many tables have a create time, so many unrelated tables can also produce
a watermark, causing confusion.
 2. The binlog is a change log, so historical rows keep being updated, which
brings in many old create_time values and may affect the watermark moving
forward.

bsTableEnv.executeSql("""
  CREATE TABLE input_database (
`table` STRING,
`database` STRING,
`data` ROW(
  reference_id STRING,
  transaction_sn STRING,
  transaction_type BIGINT,
  merchant_id BIGINT,
  transaction_id BIGINT,
  status BIGINT
 ),
ts BIGINT,
event_time AS TO_TIMESTAMP(FROM_UNIXTIME(create_time)),
WATERMARK FOR event_time AS event_time - INTERVAL '10' HOUR
 ) WITH (
   'connector.type' = 'kafka',
   'connector.version' = '0.11',
   'connector.topic' = 'mytopic',
   'connector.properties.bootstrap.servers' = '',
   'format.type' = 'json'
 )
""")


I filter out the two tables separately and do the interval join. This never
produces output; I have tried outputting each of the two tables on its own and
there is no problem at all.

val main_db = bsTableEnv.sqlQuery("""
  | SELECT *
  | FROM input_database
  | WHERE `database` = 'main_db'
  |  AND `table` LIKE 'transaction_tab%'
  | """.stripMargin)

val merchant_db = bsTableEnv.sqlQuery("""
  | SELECT *
  | FROM input_database
  | WHERE `database` = 'merchant_db'
  |   AND `table` LIKE 'transaction_tab%'
  | """.stripMargin)

bsTableEnv.createTemporaryView("main_db", main_db)
bsTableEnv.createTemporaryView("merchant_db", merchant_db)

val result = bsTableEnv.sqlQuery("""
   SELECT *
   FROM (
  SELECT t1.`table`, t1.`database`, t1.transaction_type,
t1.transaction_id,
t1.reference_id, t1.transaction_sn, t1.merchant_id,
t1.status, t1.event_time
  FROM main_db as t1
  LEFT JOIN merchant_db as t2
  ON t1.reference_id = t2.reference_id
  WHERE t1.event_time >= t2.event_time + INTERVAL '1' HOUR
   AND t1.event_time <= t2.event_time - INTERVAL '1' HOUR
   )
  """.stripMargin)



An event-time interval join needs watermarks to drive it. Can you confirm that
your watermark is advancing normally?
-
Regarding the problem you mention, I suspect my watermark is definitely not
advancing normally. But I cannot understand why an interval join needs a
watermark to drive it.
My understanding is that it keeps the data of both sides in state. Since this
is a left join, when a record arrives on the left it looks up the right side's
state; if it can join, it outputs the joined result, and if it cannot, it
should still output the left-side record normally. That is what a left join is
supposed to do, isn't it?






Benchao Li wrote on Tue, Dec 8, 2020 at 3:23 PM:

> hi macia,
>
> An event-time interval join needs watermarks to drive it. Can you confirm
> that your watermark is advancing normally?
>
> macia kk  于2020年12月8日周二 上午1:15写道:
>
> > 抱歉,是 >-30 and <+30
> >
> > 贴的只是demo,我的疑问是,既然是 Left Join,所以无所有没有Jion上右边,左边肯定会输出的,不至于一天条没有
> >
> > On Mon, 7 Dec 2020 at 23:28, 赵一旦  wrote:
> >
> > > To be precise: there is no AND between the two conditions? And both of them are >?
> > >
> > > On Mon, 7 Dec 2020 at 22:30, macia kk  wrote:
> > >
> > > > Sorry, I pasted it wrong above.
> > > >
> > > > SELECT *
> > > >  FROM A
> > > >  LEFT OUT JOIN B
> > > >  ON order_id
> > > >  Where A.event_time > B.event_time -  30 s
> > > >  A.event_time > B.event_time + 30 s
> > > >
> > > > event_time is the event_time configured as a time attribute.
> > > >
> > > > This produces no output.
> > > >
> > > >
> > > >
> > > > For how long are the left and right tables of an interval join cached in state?
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > > On Mon, 7 Dec 2020 at 20:05, hailongwang <18868816...@163.com> wrote:
> > > >
> > > > > Hi,
> > > > > The condition should be
> > > > > `Where A.event_time < B.event_time + 30 s and A.event_time > B.event_time - 30 s`, right?
> > > > > You can refer to the following example [1] and check whether anything is written incorrectly.
> > > > > [1]
> > > > > [1]
> > > > >
> > > >
> > >
> >
> https://github.com/apache/flink/blob/59ae84069313ede60cf7ad3a9d2fe1bc07c4e460/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/IntervalJoinITCase.scala#L183
> > > > >
> > > > >
> > > > > Best,
> > > > > Hailong
> > > > > At 2020-12-07 13:10:02, "macia kk"  wrote:
> > > > > >Hi everyone,
> > > > > >
> > > > > >  My upstream is a Kafka topic into which the entire binlog of a MySQL DB
> > > > > >is written. When my Flink job processes it, it consumes the topic once and
> > > > > >then filters out table A and table B, where A holds order events and B
> > > > > >holds order item information, so I use:
> > > > > >
> > > > > > SELECT *
> > > > > > FROM A
> > > > > > LEFT OUT JOIN B
> > > > > > ON order_id
> > > > > > Where A.event_time > B.event_time + 30 s
> > > > > > A.event_time > B.event_time - 30 s
> > > > > >
> > > > > >I tested it: A and B can each be consumed and output on their own, but once
> > > > > >the left join is added there is no output at all. I can confirm that the same
> > > > > >logic implemented with Spark Structured Streaming does produce output. Since
> > > > > >this is a left join, my understanding is that the left side should always be
> > > > > >emitted no matter what; I do not know what the actual implementation logic of
> > > > > >Flink's interval join is, or where I am going wrong in my processing.
> > > > >
> > > >
> > >
> >
>
>
> --
>
> Best,
> Benchao Li
>


Re: Question about stream-stream Interval Join

2020-12-07 Thread macia kk
Sorry, it should be >-30 and <+30

What I pasted is just a demo. My question is: since this is a LEFT JOIN, whether or not the right side matches, the left side should always be emitted; it should not end up producing not even a single row.

On Mon, 7 Dec 2020 at 23:28, 赵一旦  wrote:

> To be precise: there is no AND between the two conditions? And both of them are >?
>
> On Mon, 7 Dec 2020 at 22:30, macia kk  wrote:
>
> > Sorry, I pasted it wrong above.
> >
> > SELECT *
> >  FROM A
> >  LEFT OUT JOIN B
> >  ON order_id
> >  Where A.event_time > B.event_time -  30 s
> >  A.event_time > B.event_time + 30 s
> >
> > event_time is the event_time configured as a time attribute.
> >
> > This produces no output.
> >
> >
> >
> > For how long are the left and right tables of an interval join cached in state?
> >
> >
> >
> >
> >
> >
> > On Mon, 7 Dec 2020 at 20:05, hailongwang <18868816...@163.com> wrote:
> >
> > > Hi,
> > > The condition should be
> > > `Where A.event_time < B.event_time + 30 s and A.event_time > B.event_time - 30 s`, right?
> > > You can refer to the following example [1] and check whether anything is written incorrectly.
> > > [1]
> > > [1]
> > >
> >
> https://github.com/apache/flink/blob/59ae84069313ede60cf7ad3a9d2fe1bc07c4e460/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/IntervalJoinITCase.scala#L183
> > >
> > >
> > > Best,
> > > Hailong
> > > At 2020-12-07 13:10:02, "macia kk"  wrote:
> > > >Hi everyone,
> > > >
> > > >  My upstream is a Kafka topic into which the entire binlog of a MySQL DB is
> > > >written. When my Flink job processes it, it consumes the topic once and then
> > > >filters out table A and table B, where A holds order events and B holds order
> > > >item information, so I use:
> > > >
> > > > SELECT *
> > > > FROM A
> > > > LEFT OUT JOIN B
> > > > ON order_id
> > > > Where A.event_time > B.event_time + 30 s
> > > > A.event_time > B.event_time - 30 s
> > > >
> > > >I tested it: A and B can each be consumed and output on their own, but once
> > > >the left join is added there is no output at all. I can confirm that the same
> > > >logic implemented with Spark Structured Streaming does produce output. Since
> > > >this is a left join, my understanding is that the left side should always be
> > > >emitted no matter what; I do not know what the actual implementation logic of
> > > >Flink's interval join is, or where I am going wrong in my processing.
> > >
> >
>


Re: Question about stream-stream Interval Join

2020-12-07 Thread macia kk
Sorry, I pasted it wrong above.

SELECT *
 FROM A
 LEFT OUT JOIN B
 ON order_id
 Where A.event_time > B.event_time -  30 s
 A.event_time > B.event_time + 30 s

event_time is the event_time configured as a time attribute.

This produces no output.



For how long are the left and right tables of an interval join cached in state?






On Mon, 7 Dec 2020 at 20:05, hailongwang <18868816...@163.com> wrote:

> Hi,
> The condition should be
> `Where A.event_time < B.event_time + 30 s and A.event_time > B.event_time - 30 s`, right?
> You can refer to the following example [1] and check whether anything is written incorrectly.
> [1]
> [1]
> https://github.com/apache/flink/blob/59ae84069313ede60cf7ad3a9d2fe1bc07c4e460/flink-table/flink-table-planner-blink/src/test/scala/org/apache/flink/table/planner/runtime/stream/sql/IntervalJoinITCase.scala#L183
>
>
> Best,
> Hailong
> At 2020-12-07 13:10:02, "macia kk"  wrote:
> >Hi everyone,
> >
> >  My upstream is a Kafka topic into which the entire binlog of a MySQL DB is
> >written. When my Flink job processes it, it consumes the topic once and then
> >filters out table A and table B, where A holds order events and B holds order
> >item information, so I use:
> >
> > SELECT *
> > FROM A
> > LEFT OUT JOIN B
> > ON order_id
> > Where A.event_time > B.event_time + 30 s
> > A.event_time > B.event_time - 30 s
> >
> >I tested it: A and B can each be consumed and output on their own, but once the
> >left join is added there is no output at all. I can confirm that the same logic
> >implemented with Spark Structured Streaming does produce output. Since this is a
> >left join, my understanding is that the left side should always be emitted no
> >matter what; I do not know what the actual implementation logic of Flink's
> >interval join is, or where I am going wrong in my processing.
>


Question about stream-stream Interval Join

2020-12-06 Thread macia kk
Hi everyone,

  My upstream is a Kafka topic into which the entire binlog of a MySQL DB is
written. When my Flink job processes it, it consumes the topic once and then
filters out table A and table B, where A holds order events and B holds order
item information, so I use:

 SELECT *
 FROM A
 LEFT OUT JOIN B
 ON order_id
 Where A.event_time > B.event_time + 30 s
 A.event_time > B.event_time - 30 s

I tested it: A and B can each be consumed and output on their own, but once the
left join is added there is no output at all. I can confirm that the same logic
implemented with Spark Structured Streaming does produce output. Since this is a
left join, my understanding is that the left side should always be emitted no
matter what; I do not know what the actual implementation logic of Flink's
interval join is, or where I am going wrong in my processing.


Re: About deduplication (Deduplication)

2020-11-15 Thread macia kk
OK, understood. Thanks.

On Mon, 16 Nov 2020 at 10:27, Jark Wu  wrote:

> Regarding 2, you have it the other way around. In terms of performance, deduplicate
> with first row is better than first_value, because deduplicate with first row only
> stores the key in state; the value is represented by a single empty byte.
>
> On Sun, 15 Nov 2020 at 21:47, macia kk  wrote:
>
> > Thanks for the reply, Jark. I have been reading your blog and have learned a lot from it.
> >
> > Regarding 2, is first_value better in terms of performance, because it only keeps the key and the corresponding value, while the window function keeps the whole row?
> >
> >
> >
> > On Sun, 15 Nov 2020 at 20:45, Jark Wu  wrote:
> >
> > > There are two main differences:
> > > 1. Semantically, deduplicate deduplicates whole rows, while first_value and
> > > last_value deduplicate individual columns. For example, deduplicate with last
> > > row keeps the last row, including any null values it contains, whereas
> > > last_value keeps the last non-null value of that column.
> > > 2. In terms of performance, deduplicate is better; with first row, for example,
> > > only the key's state information is stored.
> > >
> > > Best,
> > > Jark
> > >
> > > On Sun, 15 Nov 2020 at 19:23, macia kk  wrote:
> > >
> > > > Hi everyone:
> > > >
> > > >   I see that the deduplication approach recommended in the documentation is a window function
> > > > <
> > > >
> > >
> >
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
> > > > >
> > > >
> > > > SELECT [column_list] FROM (
> > > >    SELECT [column_list],
> > > >      ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
> > > >        ORDER BY col1 [asc|desc][, col2 [asc|desc]...]) AS rownum
> > > >    FROM table_name) WHERE rownum <= N [AND conditions]
> > > >
> > > >
> > > > But I see that Flink SQL also has first_value and last_value, which can achieve the same goal.
> > > > What are the differences between the two, especially with respect to watermarks and state management?
> > > >
> > >
> >
>


Re: About deduplication (Deduplication)

2020-11-15 Thread macia kk
Thanks for the reply, Jark. I have been reading your blog and have learned a lot from it.

Regarding 2, is first_value better in terms of performance, because it only keeps the key and the corresponding value, while the window function keeps the whole row?



On Sun, 15 Nov 2020 at 20:45, Jark Wu  wrote:

> There are two main differences:
> 1. Semantically, deduplicate deduplicates whole rows, while first_value and last_value
> deduplicate individual columns. For example, deduplicate with last row keeps the last
> row, including any null values it contains, whereas last_value keeps the last non-null
> value of that column.
> 2. In terms of performance, deduplicate is better; with first row, for example, only
> the key's state information is stored.
>
> Best,
> Jark
>
> On Sun, 15 Nov 2020 at 19:23, macia kk  wrote:
>
> > Hi everyone:
> >
> >   I see that the deduplication approach recommended in the documentation is a window function
> > <
> >
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
> > >
> >
> > SELECT [column_list] FROM (
> >    SELECT [column_list],
> >      ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
> >        ORDER BY col1 [asc|desc][, col2 [asc|desc]...]) AS rownum
> >    FROM table_name) WHERE rownum <= N [AND conditions]
> >
> >
> > But I see that Flink SQL also has first_value and last_value, which can achieve the same goal.
> > What are the differences between the two, especially with respect to watermarks and state management?
> >
>


About deduplication (Deduplication)

2020-11-15 Thread macia kk
Hi everyone:

  I see that the deduplication approach recommended in the documentation is a window function


SELECT [column_list] FROM (
   SELECT [column_list],
     ROW_NUMBER() OVER ([PARTITION BY col1[, col2...]]
       ORDER BY col1 [asc|desc][, col2 [asc|desc]...]) AS rownum
   FROM table_name) WHERE rownum <= N [AND conditions]


But I see that Flink SQL also has first_value and last_value, which can achieve the same goal.
What are the differences between the two, especially with respect to watermarks and state management?
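
A minimal side-by-side sketch of the two approaches discussed in this thread, using an
illustrative orders table (the table and column names are assumptions, not something
taken from the documentation):

-- Whole-row deduplication: keep the first row per order_id.
-- Assuming event_time is the table's time attribute, the planner treats this as the
-- deduplication pattern rather than a general Top-N query.
SELECT order_id, status, event_time
FROM (
   SELECT *,
          ROW_NUMBER() OVER (PARTITION BY order_id ORDER BY event_time ASC) AS rownum
   FROM orders
) WHERE rownum = 1

-- Column-level alternative: FIRST_VALUE keeps the first non-null value of the chosen
-- column per key, not the whole row.
SELECT order_id, FIRST_VALUE(status) AS first_status
FROM orders
GROUP BY order_id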


[kdenlive] [Bug 425960] Rendering files with only one frame results in 16 minutes of black screen

2020-10-04 Thread KK
https://bugs.kde.org/show_bug.cgi?id=425960

--- Comment #5 from KK  ---
(In reply to emohr from comment #4)
> Please upload here your project file so I can have a look.


There is no need to send a project file since I can reproduce the bug with a
fresh download of the portable version.
Here:
https://youtu.be/7D8YuDrqAJ0

-- 
You are receiving this mail because:
You are watching all bug changes.

[kdenlive] [Bug 425960] Rendering files with only one frame results in 16 minutes of black screen

2020-09-28 Thread KK
https://bugs.kde.org/show_bug.cgi?id=425960

--- Comment #3 from KK  ---
(In reply to emohr from comment #1)
> Please try with version 20.08.1 where some rendering issues are fixed.

Still the same issue on the latest version; I tried it a few days ago.

-- 
You are receiving this mail because:
You are watching all bug changes.

[issue41868] SMTPLIB integrate or provide option to use "logging"

2020-09-26 Thread KK Hiraskar


New submission from KK Hiraskar :

Currently "smtplib" is directly printing data to stdout/stderr, and not getting 
any good way to get this data in to the logs written by "logging"

please provide an option to achieve this.
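
A workaround that can be used in the meantime is to subclass SMTP and redirect its
debug output; this is only a sketch, and it relies on the private _print_debug hook
of Python 3's smtplib, so it is not a stable API:

import logging
import smtplib

logger = logging.getLogger("smtplib")

class LoggingSMTP(smtplib.SMTP):
    # Route smtplib's debug messages to the logging module instead of stderr.
    # _print_debug is a private method, so this may change between Python versions.
    def _print_debug(self, *args):
        logger.debug(" ".join(str(a) for a in args))

# Usage sketch:
# smtp = LoggingSMTP("smtp.example.com")
# smtp.set_debuglevel(1)   # debug output now goes to the "smtplib" logger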

--
components: email
messages: 377537
nosy: barry, hpkkumar007, r.david.murray
priority: normal
severity: normal
status: open
title: SMTPLIB integrate or provide option to use "logging"
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue41868>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[Bug 1676328] Re: sssd_be is leaking memory

2020-09-22 Thread KK
Can this git release https://github.com/SSSD/sssd/releases/tag/sssd-1_13_91
be upgraded from `1.13.4-1ubuntu1.15` on Ubuntu Xenial 16.04 LTS?
I am facing memory leaks with this version on multiple servers. Thanks

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1676328

Title:
  sssd_be is leaking memory

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/sssd/+bug/1676328/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs

'Gabin Kim' license statement

2020-09-13 Thread KK
All of my past & future contributions to LibreOffice may be licensed under the
MPLv2/LGPLv3+ dual license.
___
LibreOffice mailing list
LibreOffice@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/libreoffice


[kdenlive] [Bug 425960] New: Rendering files with only one frame results in 16 minutes of black screen

2020-08-29 Thread KK
https://bugs.kde.org/show_bug.cgi?id=425960

            Bug ID: 425960
           Summary: Rendering files with only one frame results in 16
                    minutes of black screen
           Product: kdenlive
           Version: 20.08.0
          Platform: Microsoft Windows
                OS: Microsoft Windows
            Status: REPORTED
          Severity: major
          Priority: NOR
         Component: Video Display & Export
          Assignee: j...@kdenlive.org
          Reporter: krakas...@outlook.es
  Target Milestone: ---

SUMMARY
Rendering files with only one frame results in 16 minutes of black screen

STEPS TO REPRODUCE
1. Drag a single image/video to the timeline.
2. Use the Razor tool to cut the video in one frame.
3. Render to File

OBSERVED RESULT
The video has 16 minutes of black screen

EXPECTED RESULT
A video with the duration of one frame.

SOFTWARE/OS VERSIONS
Windows: 10

ADDITIONAL INFORMATION
This only happens if the timeline has only one or two frames.

-- 
You are receiving this mail because:
You are watching all bug changes.
