Hello Greg,

Thanks for your answer.
I tried what you advised, but the results seem to be the same. I have put
this task on hold for now, but if anyone has more ideas about why this can
happen, please write and I'll check. Btw, I also looked at
/sys/kernel/debug/vgt/irqinfo and everything there seems fine:

> --------------------------
>
> Interrupt control status:
> vGT: VLVDEISR is 10, VLVDEIIR is 0, VLVDEIMR is fffdff7f, VLVDEIER is 200f0
> vGT: DEISR is 0, DEIIR is 0, DEIMR is 0, DEIER is 80000000
> vGT: SDEISR is 0, SDEIIR is 0, SDEIMR is 0, SDEIER is 0
> vGT: GTISR is 0, GTIIR is 0, GTIMR is 400001, GTIER is 401001
> vGT: PMISR is 0, PMIIR is 0, PMIMR is 0, PMIER is 70
> vGT: RCS_IMR is ffffffff, VCS_IMR is ffe00fff, BCS_IMR is ffffffff
> Total 207574 interrupts logged:
>       # WARNING: precisely this is the number of vGT
>       # physical interrupt handler be called,
>       # each calling several events can be
>       # been handled, so usually this number
>       # is less than the total events number.
>       4042: Render Command Streamer MI USER INTERRUPT
>          1: Render MMIO sync flush status
>          1: Video MMIO sync flush status
>        426: Blitter Command Streamer MI USER INTERRUPT
>          1: Billter MMIO sync flush status
>     202447: Pipe A vblank
>      38975: Render geyserville UP evaluation interval interrupt
>       1987: RP UP threshold interrupt
>         21: Render Frequency Downward Timeout During RC6 interrupt
> 11740980876912: Last pirq
> 11740981018263: Last virq
>      78066: Average pirq cycles
>      15262: Average virq cycles
>     228105: Average delay between pirq/virq handling
>
> -->vgt-0:
> ....vreg (gtlc_mir: 80000000, vlvier: 200f0, vlviir: 0, vlvimr: fffdff2f, vlvis)
> ....vreg (gtier: 401001, gtiir: 0, gtimr: 400001, gtisr: 0)
> ....vreg (sdeier: 0, sdeiir: 0, sdeimr: 0, sdeisr: 0)
> ....vreg (pmier: 70, pmiir: 0, pmimr: 0, pmisr: 0)
> ....vreg (rcs_imr: ffffffff, vcs_imr: 0, bcs_imr: ffffffff
> 11740981028847: Last injection
> Total 208373 virtual irq injection:
>       3399: Render Command Streamer MI USER INTERRUPT
>          1: Render MMIO sync flush status
>          1: Video MMIO sync flush status
>        405: Blitter Command Streamer MI USER INTERRUPT
>          1: Billter MMIO sync flush status
>     202205: Pipe A vblank
>       2304: Primary Plane A flip done
>      38642: Render geyserville UP evaluation interval interrupt
>       1737: RP UP threshold interrupt
>         21: Render Frequency Downward Timeout During RC6 interrupt
>
> -->vgt-1:
> ....vreg (gtlc_mir: 80000000, vlvier: 0, vlviir: 0, vlvimr: ffffffff, vlvisr: 0)
> ....vreg (gtier: 401001, gtiir: 0, gtimr: 400001, gtisr: 0)
> ....vreg (sdeier: 0, sdeiir: 0, sdeimr: 0, sdeisr: 0)
> ....vreg (pmier: 0, pmiir: 0, pmimr: 0, pmisr: 0)
> ....vreg (rcs_imr: ffffffff, vcs_imr: 0, bcs_imr: ffffffff
> 9054347637006: Last injection
> Total 259251 virtual irq injection:
>       3359: Render Command Streamer MI USER INTERRUPT
>          1: Render MMIO sync flush status
>          1: Video MMIO sync flush status
>        174: Blitter Command Streamer MI USER INTERRUPT
>          1: Billter MMIO sync flush status
>     391176: Pipe A vblank
>       3604: Primary Plane A flip done
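
For reference, the dom0 check I added at the top of i915_gem_vgtbuffer_ioctl
(mentioned in the quoted mail below) is just the two-line guard; everything
else in this snippet (the prototype and the placeholder body) is only my
sketch of the usual DRM ioctl handler shape, not the real driver code:

    /*
     * Sketch only: the xen_initial_domain() check is the added code;
     * the signature and surrounding body are assumed context
     * (standard DRM ioctl handler), abbreviated for this mail.
     */
    #include <xen/xen.h>     /* xen_initial_domain() */
    #include <drm/drmP.h>    /* struct drm_device, struct drm_file */

    int i915_gem_vgtbuffer_ioctl(struct drm_device *dev, void *data,
                                 struct drm_file *file)
    {
            /* Reject the vgtbuffer ioctl unless we are running in dom0. */
            if (!xen_initial_domain())
                    return -EPERM;

            /* ... original vgtbuffer handling continues here ... */
            return 0;
    }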

With best regards,
Oleksii

On Thu, Jan 21, 2016 at 8:56 PM, Dr. Greg Wettstein <g...@wind.enjellic.com> wrote:
> On Jan 5, 10:04am, Oleksii Kurochko wrote:
> } Subject: Re: [Xen-devel] [XenGT][IGVT-g] DomU pgt_device structure initial
>
> > Hey.
>
> Hi Oleksii, I hope this note finds your day going well.
>
> > Strange for me was that I got vmid=0 and gen_type=0, so I decided go
> > to i915_gem_vgtbuffer_ioctl and write next at the start: if
> > (!xen_initial_domain()) { return -EPERM; } Also same code is in 3.17
> > kernel from XenGT-kernel repo.
> >
> > it seems that there is no more this error now( from vgt_fb_decoder ), BUT
> > there is often freeze or very laggy UI in guest.
> >
> > What it can be?
>
> It wasn't in this e-mail but I went through the console logs which
> were in one of your postings on the IGVT-g list. I believe you have
> your hypervisor configured for synchronous serial console output
> (sync_console command-line parameter) and you are directing your dom0
> kernel console logging through the Xen provided serial interface.
>
> Setting this option is documented to cause significant latencies. In
> fact there is a warning about this in the Xen console logs when the
> hypervisor boots. Here is the code snippet from
> xen/drivers/char/console.c:console_endboot() which produces the
> message:
>
>     if ( opt_sync_console )
>     {
>         printk("**********************************************\n");
>         printk("******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS\n");
>         printk("******* This option is intended to aid debugging "
>                "of Xen by ensuring\n");
>         printk("******* that all output is synchronously delivered "
>                "on the serial line.\n");
>         printk("******* However it can introduce SIGNIFICANT latencies "
>                "and affect\n");
>         printk("******* timekeeping. It is NOT recommended for "
>                "production use!\n");
>         printk("**********************************************\n");
>
> It appears as if you were generating significant amounts of kernel log
> output which may be at the root of the unacceptable latencies. I
> would start by turning off that option and see if your guest
> performance improves.
>
> > With best regards,
> > Oleksii
>
> Good luck with your work.
>
> Greg
>
> }-- End of excerpt from Oleksii Kurochko
>
> As always,
> Dr. G.W. Wettstein, Ph.D.       Enjellic Systems Development, LLC.
> 4206 N. 19th Ave.               Specializing in information infra-structure
> Fargo, ND  58102                development.
> PH: 701-281-1686
> FAX: 701-281-3949               EMAIL: g...@enjellic.com
> ------------------------------------------------------------------------------
> "Sweeny's Law: The length of a progress report is inversely proportional
>  to the amount of progress."

--
Oleksii Kurochko | Embedded Dev
GlobalLogic
www.globallogic.com