RE:

2019-09-24 Thread Venkat Subbiah
Hello 


http://omniummjc.com/freeze.php?dpeh=knot10401
Venkat


hello

2019-05-28 Thread Venkat Subbiah

Good afternoon linux



http://alexandrastanciu.com/chapter.php?sykegp=VLZ19401



Bests
Venkat


re

2019-05-24 Thread Venkat Subbiah


Hi 


http://www.rstechnology.club/lay.php?nnyar=ZJA8001





Venkat


[no subject]

2018-07-07 Thread Venkat Subbiah
  hi Linux   https://goo.gl/MD9TK5  Venkat Subbiah



[no subject]

2017-08-28 Thread Venkat Subbiah
Sup Linux


http://www.imr-asso.org/wp-content/uploads/innovation.php?corn=pks2ea81htmcx01ew



Venkat

[no subject]

2017-02-08 Thread Venkat Subbiah
hi Linux



http://CosmicHarvestFarm.com/environment.php?test=h24hz07rxu1f




Venkat


from: Venkat Subbiah

2015-07-31 Thread Venkat Subbiah
Salutations linux

http://curaparaherpes.net/breakfast.php?test=ps8958nqdndw


linux


Sent from my iPhone
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/



i2c block read on an SMBus

2007-12-20 Thread Venkat Subbiah
I am trying to do an i2c block read using a call like

rc = i2c_smbus_xfer(g_i2c_adp, buf[0], 0x0,
                    I2C_SMBUS_READ, 0x0,
                    I2C_SMBUS_I2C_BLOCK_DATA, &data);

and the logs show me that this hits the else part of this if condition in 
i801_block_transaction function in file  i2c-i801.c. (of kernel version 
2.6.23.11)

if (command == I2C_SMBUS_I2C_BLOCK_DATA) {
        if (read_write == I2C_SMBUS_WRITE) {
                /* set I2C_EN bit in configuration register */
                pci_read_config_byte(I801_dev, SMBHSTCFG, &hostc);
                pci_write_config_byte(I801_dev, SMBHSTCFG,
                                      hostc | SMBHSTCFG_I2C_EN);
        } else {
                dev_err(&I801_dev->dev,
                        "I2C_SMBUS_I2C_BLOCK_READ not DB!\n");
                return -1;
        }
}

Some time ago, when I was doing a web search, I seem to have run into a patch
which allows doing an i2c block read on SMBus. Is there a patch for this?
(Output from my lspci: 00:1f.3 SMBus: Intel Corporation 6300ESB SMBus
Controller (rev 02))


Looking at the documentation for the 6300ESB SMBus Controller, it seems that
the only I2C read transaction supported is a block read. All the other
read transactions are SMBus type.

Why is the i2c block read not supported in the driver?

Thanks in advance for all the input.  Please CC me on the replies as I am not
subscribed to the list.


Thx,
Venkat




RE: irq load balancing

2007-09-13 Thread Venkat Subbiah
> Since most network devices have a single status register for both
> receiver and transmit (and errors and the like), which needs a lock to
> protect access, you will likely end up with serious thrashing of moving
> the lock between cpus.
Any ways to measure the thrashing of locks?

> Since most network devices have a single status register for both
> receiver and transmit (and errors and the like)
These register accesses will be mostly within the irq handler, which I
plan on keeping on the same processor. The network driver is actually
tg3. Will look closely into the driver.

Thx,
Venkat


-Original Message-
From: Lennart Sorensen [mailto:[EMAIL PROTECTED] 
Sent: Thursday, September 13, 2007 1:45 PM
To: Venkat Subbiah
Cc: Chris Snook; linux-kernel@vger.kernel.org
Subject: Re: irq load balancing

On Thu, Sep 13, 2007 at 01:31:39PM -0700, Venkat Subbiah wrote:
> Doing it in a round-robin fashion will be disastrous for performance.
> Your cache miss rate will go through the roof and you'll hit the slow
> paths in the network stack most of the time.
> > Most of the work in my system is spent in enrypt/decrypting traffic.
> Right now all this is done in a tasklet within the softirqd and hence
> all landing up on the same CPU.
> On the receive side it'a packet handler that handles the traffic. On
the
> tx side it's done within the transmit path of the packet. So would
> re-architecting this to move the rx packet handler to a different
kernel
> thread(with smp affinity to one CPU) and tx to a different kernel
> thread(with SMP affinity to a different CPU) be advisable. 
> What's the impact on cache miss and slowpath/fastpath in network
stack.

Since most network devices have a single status register for both
receiver and transmit (and errors and the like), which needs a lock to
protect access, you will likely end up with serious thrashing of moving
the lock between cpus.

--
Len Sorensen


RE: irq load balancing

2007-09-13 Thread Venkat Subbiah
> Doing it in a round-robin fashion will be disastrous for performance.
> Your cache miss rate will go through the roof and you'll hit the slow
> paths in the network stack most of the time.
Most of the work in my system is spent in encrypting/decrypting traffic.
Right now all this is done in a tasklet within the softirqd and hence
all landing up on the same CPU.
On the receive side it's a packet handler that handles the traffic. On the
tx side it's done within the transmit path of the packet. So would
re-architecting this to move the rx packet handler to a different kernel
thread (with SMP affinity to one CPU) and tx to a different kernel
thread (with SMP affinity to a different CPU) be advisable?
What's the impact on cache miss and slowpath/fastpath in the network stack?

Thx,
-Venkat

-Original Message-
From: Chris Snook [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, September 12, 2007 2:45 PM
To: Venkat Subbiah
Cc: linux-kernel@vger.kernel.org
Subject: Re: irq load balancing

Venkat Subbiah wrote:
> Most of the load in my system is triggered by a single ethernet IRQ.
> Essentially the IRQ schedules a tasklet and most of the work is done in the
> tasklet which is scheduled in the IRQ. From what I read it looks like the
> tasklet would be executed on the same CPU on which it was scheduled. So this
> means even in an SMP system it will be one processor which is overloaded.
>
> So will using the user space IRQ loadbalancer really help?

A little bit.  It'll keep other IRQs on different CPUs, which will prevent
other interrupts from causing cache and TLB evictions that could slow down
the interrupt handler for the NIC.

> What I am doubtful
> about is that the user space load balance comes along and changes the
> affinity once in a while. But really what I need is every interrupt to go to
> a different CPU in a round robin fashion.

Doing it in a round-robin fashion will be disastrous for performance.  Your
cache miss rate will go through the roof and you'll hit the slow paths in the
network stack most of the time.

> Looks like the APIC can distribute IRQ's dynamically? Is this supported in
> the kernel and any config or proc interface to turn this on/off.

/proc/irq/$FOO/smp_affinity is a bitmask.  You can mask an irq to multiple
processors.  Of course, this will absolutely kill your performance.  That's
why irqbalance never does this.

-- Chris


irq load balancing

2007-09-11 Thread Venkat Subbiah
Most of the load in my system is triggered by a single ethernet IRQ.
Essentially the IRQ schedules a tasklet and most of the work is done in the
tasklet which is scheduled in the IRQ. From what I read it looks like the
tasklet would be executed on the same CPU on which it was scheduled. So this
means even in an SMP system it will be one processor which is overloaded.

So will using the user space IRQ loadbalancer really help? What I am doubtful
about is that the user space load balancer comes along and changes the affinity
once in a while. But really what I need is every interrupt to go to a different
CPU in a round robin fashion.

Looks like the APIC can distribute IRQs dynamically? Is this supported in the
kernel, and is there any config or proc interface to turn this on/off?


Thx,
Venkat


