[PATCH 2/2] drm: add an fb creation ioctl that takes a pixel format

2011-11-10 Thread InKi Dae
2011/11/9 Rob Clark :
> On Wed, Nov 9, 2011 at 7:25 AM, InKi Dae  wrote:
>> Hello, all.
>>
>> I am trying to implement multi-planar support using your plane patch, and
>> I think it's good, but I am still worried that the drm_mode_fb_cmd2
>> structure has only one handle. I know this handle is sent to the
>> framebuffer module to create a new framebuffer, and the framebuffer
>> covers an entire image. As you know, the image could consist of more
>> than one plane, so I think drm_mode_fb_cmd2 doesn't support
>> multi-planar formats, because it has only one handle. With the
>> update_plane callback, a buffer of the framebuffer is set to a hardware
>> overlay. How could we set two or three planes to the hardware
>> overlay? I might be missing something, though, so please give me any
>> comments. In addition, have you looked into the gem flink and open
>> functions for memory sharing between processes? A gem object basically
>> has one buffer, and we can't change that for compatibility reasons, so I
>> think it's right that a gem object manages only one buffer. For that
>> reason, maybe the drm_mode_fb_cmd2 structure should include multiple
>> handles and a plane count: each handle references a gem object for one
>> plane, and the plane count says how many planes are requested. When the
>> update_plane callback is called by setplane(), we could then set the
>> planes of the specific framebuffer to a hardware overlay.
>
> The current plan is to add a 3rd ioctl for adding a multi-planar fb..
> I guess it is a good thing that I'm not the only one who wants this
> :-)
>
>> Another thing: I have also tried to implement sharing memory
>> between v4l2-based drivers and drm-based drivers through an
>> application, and this works fine. This feature was introduced by
>> the v4l2 framework as userptr, and my approach is similar. The
>> difference is that the application can get a new gem handle from the
>> kernel-side gem framework if it requests a userptr import with the
>> user-space address (mmapped memory). The new gem handle refers to a
>> gem object for the memory mapped at that user-space address.
>> This makes it possible for different applications to share
>> memory between a v4l2-based driver and a drm-based driver. This
>> feature is also designed with IOMMU in mind, so it supports
>> non-contiguous memory as well. I will introduce this feature soon.
>
> btw, there was an RFC a little while back for "dmabuf" buffer sharing
> mechanism.. the idea would be to export a (for example) GEM buffer to
> a dmabuf handle which could be passed in to other devices, including
> for example v4l2 (although without necessarily requiring a userspace
> mapping)..
>
> http://www.spinics.net/lists/dri-devel/msg15077.html
>
> It sounds like you are looking for a similar thing..
>

Hi, Rob.

The GEM framework already supports a memory sharing scheme: an object name
created by gem flink is sent to another process, and that process then
opens the object name, at which point the kernel-side gem framework
creates a new gem object. And I know that dmabuf is similar to the ION
allocator introduced by Rebecca (an engineer at Google), at least as a
buffer sharing scheme. But is it possible to share a memory region known
only by a user virtual address mmapped in another process? For instance,
as you know, a v4l2-based driver has the request-buffers feature: the
kernel-side driver allocates as many memory regions as the user asks for,
and the user gets a user virtual address with mmap after a query-buffers
request. We need to share the memory mmapped here as well. For this,
v4l2-based drivers have the userptr feature: the user application sets a
user virtual address in the userptr structure, the address is then
translated to a bus address (a physical address without an IOMMU, or a
device address with one), and that is programmed into the hardware. I
think we don't need dmabuf if we would use it only for sharing a gem
buffer with another process, because the GEM framework can already do
that. I will try to find a way to make this feature common to the
generic gem framework; it has already been implemented in our
device-specific gem framework and tested.
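
A minimal sketch of the flink/open flow I am referring to (illustrative
userspace code using the stock GEM ioctls and libdrm's drmIoctl();
error handling omitted):

#include <xf86drm.h>   /* drmIoctl() */
#include <drm/drm.h>   /* struct drm_gem_flink, struct drm_gem_open */

/* process A: export a GEM handle to a global name */
struct drm_gem_flink flink = { .handle = bo_handle };
drmIoctl(fd_a, DRM_IOCTL_GEM_FLINK, &flink);
/* pass flink.name to process B over any IPC channel */

/* process B: open the global name to get its own local handle */
struct drm_gem_open op = { .name = flink.name };
drmIoctl(fd_b, DRM_IOCTL_GEM_OPEN, &op);
/* op.handle now refers to the same underlying buffer */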

thank you,
Inki dae.


> BR,
> -R
>
>> Thank you,
>> Inki Dae.
>>
>> 2011/11/9 Jesse Barnes :
>>> To properly support the various plane formats supported by different
>>> hardware, the kernel must know the pixel format of a framebuffer object.
>>> So add a new ioctl taking a format argument corresponding to a fourcc
>>> name from videodev2.h. Implement the fb creation hooks in terms of the
>>> new mode_fb_cmd2 using helpers where the old bpp/depth values are
>>> needed.
>>>
>>> Acked-by: Alan Cox 
>>> Reviewed-by: Rob Clark 
>>> Signed-off-by: Jesse Barnes 
>>> ---
>>>  drivers/gpu/drm/drm_crtc.c                |  108 +++-
>>>  drivers/gpu/drm/drm_crtc_helper.c         |   50 -
>>>  drivers/gpu/drm/drm_drv.c                 |    1 +
>>>  drivers/gpu/drm/i915/intel_display.c      |   36

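For reference, the multi-planar direction discussed in this thread
eventually converged on an fb description carrying a fourcc pixel format
plus per-plane handles, pitches and offsets - roughly as follows (a sketch
modeled on the drm_mode_fb_cmd2 that was later merged, not the exact patch
quoted above):

struct drm_mode_fb_cmd2 {
	__u32 fb_id;
	__u32 width;
	__u32 height;
	__u32 pixel_format;	/* fourcc code, e.g. NV12 */
	__u32 flags;
	__u32 handles[4];	/* one GEM handle per plane */
	__u32 pitches[4];	/* bytes per row, per plane */
	__u32 offsets[4];	/* start of each plane in its buffer */
};
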
[PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread Dave Airlie
On Thu, Nov 10, 2011 at 9:15 PM, Mandeep Singh Baines  
wrote:
> David Rientjes (rientjes at google.com) wrote:
>> On Mon, 17 Oct 2011, David Rientjes wrote:
>>
>> > On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
>> >
>> > > From: Hugh Dickins 
>> > >
>> > > Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
>> > > we're going to reboot immediately, the user will not be able to see the
>> > > messages anyway, and messing with the video mode may display artifacts,
>> > > and certainly get into several layers of complexity (including
>> > > mutexes and memory allocations) which we shall be much safer to
>> > > avoid.
>> > >
>> > > Signed-off-by: Hugh Dickins 
>> > > [ Edited commit message and modified to short-circuit panic_timeout < 0
>> > >   instead of testing panic_timeout >= 0.  -Mandeep ]
>> > > Signed-off-by: Mandeep Singh Baines 
>> > > Cc: Dave Airlie 
>> > > Cc: Andrew Morton 
>> > > Cc: dri-devel at lists.freedesktop.org
>> >
>> > Acked-by: David Rientjes 
>> >
>>
>> Dave, where do we stand on this? I haven't seen it hit Linus' tree and I
>> don't see it in git://people.freedesktop.org/~airlied/linux.

I've just pulled it into my local drm-next, thanks for reminding me.

Dave.
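
For reference, the change amounts to a short-circuit at the top of the
fb-helper panic notifier; roughly (a sketch reconstructed from the commit
message above, not the verbatim hunk):

static int drm_fb_helper_panic(struct notifier_block *n, unsigned long unused,
			       void *panic_str)
{
	/*
	 * If we're rebooting immediately, nobody will see the text
	 * console anyway; skip the mode switch and its mutexes and
	 * memory allocations.
	 */
	if (panic_timeout < 0)
		return 0;

	printk(KERN_ERR "panic occurred, switching back to text console\n");
	return drm_fb_helper_force_kernel_mode();
}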


[PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Thomas Hellstrom
On 11/10/2011 07:05 PM, Jerome Glisse wrote:
> On Thu, Nov 10, 2011 at 11:27:33AM +0100, Thomas Hellstrom wrote:
>
>> On 11/09/2011 09:22 PM, j.glisse at gmail.com wrote:
>>  
>>> From: Jerome Glisse
>>>
>>> This is an overhaul of the ttm memory accounting. It tries to keep
>>> the same global behavior while removing the whole zone concept. It
>>> keeps a distinction for dma32 so that we make sure ttm doesn't
>>> starve the dma32 zone.
>>>
>>> There are four thresholds for memory allocation:
>>> - max_mem is the maximum memory the whole ttm infrastructure is
>>>   going to allow allocations for (with an exception for system
>>>   processes, see below)
>>> - emer_mem is the maximum memory allowed for system processes; this
>>>   limit is greater than max_mem
>>> - swap_limit is the threshold at which ttm will start to try to
>>>   swap objects out, because ttm is getting close to the max_mem
>>>   limit
>>> - swap_dma32_limit is the threshold at which ttm will start to swap
>>>   objects out to try to reduce the pressure on the dma32 zone. Note
>>>   that we don't specifically target objects to swap, so it might very
>>>   well free more memory from highmem than from dma32
>>>
>>> Accounting is done through used_mem & used_dma32_mem, whose sum gives
>>> the total amount of memory actually accounted for by ttm.
>>>
>>> The idea is that an allocation will fail if used_mem + used_dma32_mem
>>> exceeds max_mem and swapping fails to make enough room.
>>>
>>> The used_dma32_mem can be updated at a later stage, allowing us to
>>> perform the accounting test before allocating a whole batch of pages.
>>>
>>>
>> Jerome, you're removing a fair amount of functionality here, without
>> justifying why it could be removed.
>>  
> All this code was overkill.
>

[1] I don't agree, and since it's well tested, thought through and
working, I see no obvious reason to alter it within the context of this
patch series, unless it's absolutely required for the functionality.

>
>
>> Consider a low-end system with 1G of kernel memory and 10G of
>> highmem. How do we avoid putting stress on the kernel memory? I also
>> wouldn't be too surprised if DMA32 zones appear in HIGHMEM systems
>> in the future making the current zone concept good to keep.
>>  
> Right now kernel memory is accounted against all zones, so it decreases
> not only the kernel zone but also the dma32 & highmem zones if present.
>

Do you mean that the code is incorrect? In that case, did you consider
the fact that zones may overlap? (Although I admit the name "highmem"
might be misleading. It should be "total".)

> Note also that kernel zone in current code == dma32 zone.
>

Last time I looked, the highmem split was typically at slightly less
than 1GB, depending on the hardware and desired setup. I admit that was
some time ago, but has that really changed? On all archs?
Furthermore, on !HIGHMEM systems, all pages are in the kernel zone.

> When it comes to the future it looks a lot simpler: it seems everyone
> is moving toward more capable and more advanced iommus that can remove
> all the restrictions on memory from the device's point of view. I mean,
> even arm is getting more and more advanced iommus. I don't see any
> architecture worth supporting not going down that road.
>

While the proposed change is probably possible, with different low <-> 
high splits depending on whether HIGHMEM is defined or not, I think [1] 
is a good reason for not pushing it through.

>
>> Also, in effect you move the DOS from *all* zones into the DMA32
>> zone and create a race in that multiple simultaneous allocators can
>> first pre-allocate out of the global zone, and then update the DMA32
>> zone without synchronization. In this way you might theoretically
>> end up with more DMA32 pages allocated than present in the zone.
>>  
> Ok, a respin is attached with a simple change: things will be
> accounted against the dma32 zone, and only when we get the pages will
> we decrease the dma32 zone usage; that way there is no DOS on dma32.
>
> It also deals with the case where there is still a lot of highmem
> but no more dma32.
>

So why not just do a ttm_mem_global_alloc() for the pages you want to
allocate, and add a proper adjustment function if memory turns out to be
either HIGHMEM or !DMA32?
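
Sketched out, that suggestion would look something like this
(ttm_mem_global_alloc() is the existing accounting entry point;
ttm_mem_global_adjust() is a hypothetical helper named here for
illustration only):

/* account up front, conservatively assuming every page is dma32 */
r = ttm_mem_global_alloc(mem_glob, num_pages << PAGE_SHIFT,
			 false, false);
if (unlikely(r != 0))
	return r;

/* ... allocate the actual pages ... */

/* hypothetical: re-attribute pages that turned out to be HIGHMEM or
 * !DMA32, so the dma32 budget is not overcharged */
ttm_mem_global_adjust(mem_glob, pages, num_pages);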

>
>> With the proposed code there's also a theoretical problem in that a
>> potentially huge number of pages are unaccounted before they are
>> actually freed.
>>  
> What do you mean by unaccounted? The way it works is:
> r = global_memory_alloc(size)
> if (r) fail
> alloc pages
> update memory accounting according to what pages were allocated
>
> So memory is always accounted before even being allocated (the
> exceptions are the kernel objects for vmwgfx & ttm_bo, but we can move
> accounting there too if you want; those are small allocations and I
> didn't think it was worth changing them).
>
No, I mean the sequence

unaccount_page_array()
--- race: another allocator can see the freed-up accounting here ---
free_page_array()

/Thomas

> Cheers,
> Jerome
>



[PATCH 13/13] drm/ttm: isolate dma data from ttm_tt V2

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

Move the dma data to a ttm_dma_tt superset structure which inherits
from ttm_tt. This allows drivers that don't use the dma functionality
to not waste memory for it.
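
The resulting layout is roughly the following (a sketch; the embedded ttm
must be the first member so a struct ttm_tt pointer can be cast back, as
the nouveau hunks below rely on):

struct ttm_dma_tt {
	struct ttm_tt ttm;		/* must be first member */
	dma_addr_t *dma_address;	/* one bus address per page */
};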

V2 Rebase on top of no memory accounting changes (where/when is my
   delorean when i need it ?)

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/nouveau/nouveau_bo.c |   18 +++--
 drivers/gpu/drm/nouveau/nouveau_sgdma.c  |   22 --
 drivers/gpu/drm/radeon/radeon_ttm.c  |   43 ++--
 drivers/gpu/drm/ttm/ttm_page_alloc.c |  114 +++---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c |   35 +
 drivers/gpu/drm/ttm/ttm_tt.c |   58 +---
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c   |2 +
 include/drm/ttm/ttm_bo_driver.h  |   31 -
 include/drm/ttm/ttm_page_alloc.h |   33 +
 9 files changed, 202 insertions(+), 154 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 2dc0d83..d6326af 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1052,6 +1052,7 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct 
nouveau_fence *fence)
 static int
 nouveau_ttm_tt_populate(struct ttm_tt *ttm)
 {
+   struct ttm_dma_tt *ttm_dma = (void *)ttm;
struct drm_nouveau_private *dev_priv;
struct drm_device *dev;
unsigned i;
@@ -1065,7 +1066,7 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)

 #ifdef CONFIG_SWIOTLB
if (swiotlb_nr_tbl()) {
-   return ttm_dma_populate(ttm, dev->dev);
+   return ttm_dma_populate((void *)ttm, dev->dev);
}
 #endif

@@ -1075,14 +1076,14 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)
}

for (i = 0; i < ttm->num_pages; i++) {
-   ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i],
+   ttm_dma->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i],
   0, PAGE_SIZE,
   PCI_DMA_BIDIRECTIONAL);
-   if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) {
+   if (pci_dma_mapping_error(dev->pdev, ttm_dma->dma_address[i])) {
while (--i) {
-   pci_unmap_page(dev->pdev, ttm->dma_address[i],
+   pci_unmap_page(dev->pdev, 
ttm_dma->dma_address[i],
   PAGE_SIZE, 
PCI_DMA_BIDIRECTIONAL);
-   ttm->dma_address[i] = 0;
+   ttm_dma->dma_address[i] = 0;
}
ttm_pool_unpopulate(ttm);
return -EFAULT;
@@ -1094,6 +1095,7 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)
 static void
 nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm)
 {
+   struct ttm_dma_tt *ttm_dma = (void *)ttm;
struct drm_nouveau_private *dev_priv;
struct drm_device *dev;
unsigned i;
@@ -1103,14 +1105,14 @@ nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm)

 #ifdef CONFIG_SWIOTLB
if (swiotlb_nr_tbl()) {
-   ttm_dma_unpopulate(ttm, dev->dev);
+   ttm_dma_unpopulate((void *)ttm, dev->dev);
return;
}
 #endif

for (i = 0; i < ttm->num_pages; i++) {
-   if (ttm->dma_address[i]) {
-   pci_unmap_page(dev->pdev, ttm->dma_address[i],
+   if (ttm_dma->dma_address[i]) {
+   pci_unmap_page(dev->pdev, ttm_dma->dma_address[i],
   PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
}
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c 
b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index ee1eb7c..47f245e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -8,7 +8,10 @@
 #define NV_CTXDMA_PAGE_MASK  (NV_CTXDMA_PAGE_SIZE - 1)

 struct nouveau_sgdma_be {
-   struct ttm_tt ttm;
+   /* this has to be the first field so populate/unpopulate in
+    * nouveau_bo.c work properly, otherwise we have to move them here
+    */
+   struct ttm_dma_tt ttm;
struct drm_device *dev;
u64 offset;
 };
@@ -20,6 +23,7 @@ nouveau_sgdma_destroy(struct ttm_tt *ttm)

if (ttm) {
NV_DEBUG(nvbe->dev, "\n");
+   ttm_dma_tt_fini(&nvbe->ttm);
kfree(nvbe);
}
 }
@@ -38,7 +42,7 @@ nv04_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem)
nvbe->offset = mem->start << PAGE_SHIFT;
pte = (nvbe->offset >> NV_CTXDMA_PAGE_SHIFT) + 2;
for (i = 0; i < ttm->num_pages; i++) {
-   dma_addr_t dma_offset = ttm->dma_address[i];
+   dma_addr_t dma_offset = nvbe->ttm.dma_address[i];
uint32_t offset_l = lower_32_bits(dma_offset);

for (j = 0; j < 

[PATCH 12/13] drm/nouveau: enable the ttm dma pool when swiotlb is active V3

2011-11-10 Thread j.gli...@gmail.com
From: Konrad Rzeszutek Wilk 

If the card is capable of more than 32-bit addressing, then use the default
TTM page pool code, which allocates from anywhere in memory.

Note: If the 'ttm.no_dma' parameter is set, the override is ignored
and the default TTM pool is used.

V2 use pci_set_consistent_dma_mask
V3 Rebase on top of no memory accounting changes (where/when is my
   delorean when i need it ?)

CC: Ben Skeggs 
CC: Francisco Jerez 
CC: Dave Airlie 
Signed-off-by: Konrad Rzeszutek Wilk 
Reviewed-by: Jerome Glisse 
---
 drivers/gpu/drm/nouveau/nouveau_bo.c  |   73 -
 drivers/gpu/drm/nouveau/nouveau_debugfs.c |1 +
 drivers/gpu/drm/nouveau/nouveau_mem.c |6 ++
 drivers/gpu/drm/nouveau/nouveau_sgdma.c   |   60 +---
 4 files changed, 79 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index f19ac42..2dc0d83 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1049,10 +1049,79 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct 
nouveau_fence *fence)
nouveau_fence_unref(&old_fence);
 }

+static int
+nouveau_ttm_tt_populate(struct ttm_tt *ttm)
+{
+   struct drm_nouveau_private *dev_priv;
+   struct drm_device *dev;
+   unsigned i;
+   int r;
+
+   if (ttm->state != tt_unpopulated)
+   return 0;
+
+   dev_priv = nouveau_bdev(ttm->bdev);
+   dev = dev_priv->dev;
+
+#ifdef CONFIG_SWIOTLB
+   if (swiotlb_nr_tbl()) {
+   return ttm_dma_populate(ttm, dev->dev);
+   }
+#endif
+
+   r = ttm_pool_populate(ttm);
+   if (r) {
+   return r;
+   }
+
+   for (i = 0; i < ttm->num_pages; i++) {
+   ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i],
+  0, PAGE_SIZE,
+  PCI_DMA_BIDIRECTIONAL);
+   if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) {
+   while (--i) {
+   pci_unmap_page(dev->pdev, ttm->dma_address[i],
+  PAGE_SIZE, 
PCI_DMA_BIDIRECTIONAL);
+   ttm->dma_address[i] = 0;
+   }
+   ttm_pool_unpopulate(ttm);
+   return -EFAULT;
+   }
+   }
+   return 0;
+}
+
+static void
+nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm)
+{
+   struct drm_nouveau_private *dev_priv;
+   struct drm_device *dev;
+   unsigned i;
+
+   dev_priv = nouveau_bdev(ttm->bdev);
+   dev = dev_priv->dev;
+
+#ifdef CONFIG_SWIOTLB
+   if (swiotlb_nr_tbl()) {
+   ttm_dma_unpopulate(ttm, dev->dev);
+   return;
+   }
+#endif
+
+   for (i = 0; i < ttm->num_pages; i++) {
+   if (ttm->dma_address[i]) {
+   pci_unmap_page(dev->pdev, ttm->dma_address[i],
+  PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+   }
+   }
+
+   ttm_pool_unpopulate(ttm);
+}
+
 struct ttm_bo_driver nouveau_bo_driver = {
.ttm_tt_create = &nouveau_ttm_tt_create,
-   .ttm_tt_populate = &ttm_pool_populate,
-   .ttm_tt_unpopulate = &ttm_pool_unpopulate,
+   .ttm_tt_populate = &nouveau_ttm_tt_populate,
+   .ttm_tt_unpopulate = &nouveau_ttm_tt_unpopulate,
.invalidate_caches = nouveau_bo_invalidate_caches,
.init_mem_type = nouveau_bo_init_mem_type,
.evict_flags = nouveau_bo_evict_flags,
diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c 
b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
index 8e15923..f52c2db 100644
--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
@@ -178,6 +178,7 @@ static struct drm_info_list nouveau_debugfs_list[] = {
{ "memory", nouveau_debugfs_memory_info, 0, NULL },
{ "vbios.rom", nouveau_debugfs_vbios_image, 0, NULL },
{ "ttm_page_pool", ttm_page_alloc_debugfs, 0, NULL },
+   { "ttm_dma_page_pool", ttm_dma_page_alloc_debugfs, 0, NULL },
 };
 #define NOUVEAU_DEBUGFS_ENTRIES ARRAY_SIZE(nouveau_debugfs_list)

diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c 
b/drivers/gpu/drm/nouveau/nouveau_mem.c
index 36bec48..37fcaa2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -407,6 +407,12 @@ nouveau_mem_vram_init(struct drm_device *dev)
ret = pci_set_dma_mask(dev->pdev, DMA_BIT_MASK(dma_bits));
if (ret)
return ret;
+   ret = pci_set_consistent_dma_mask(dev->pdev, DMA_BIT_MASK(dma_bits));
+   if (ret) {
+   /* Reset to default value. */
+   pci_set_consistent_dma_mask(dev->pdev, DMA_BIT_MASK(32));
+   }
+

ret = nouveau_ttm_global_init(dev_priv);
if (ret)
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c 

[PATCH 11/13] drm/radeon/kms: enable the ttm dma pool if swiotlb is on V3

2011-11-10 Thread j.gli...@gmail.com
From: Konrad Rzeszutek Wilk 

With the exception that we do not handle the AGP case. We only
deal with PCIe cards such as ATI ES1000 or HD3200 that have been
detected to only do DMA up to 32-bits.

V2 force dma32 if we fail to set bigger dma mask
V3 Rebase on top of no memory accounting changes (where/when is my
   delorean when i need it ?)

CC: Dave Airlie 
CC: Alex Deucher 
Signed-off-by: Konrad Rzeszutek Wilk 
Reviewed-by: Jerome Glisse 
---
 drivers/gpu/drm/radeon/radeon.h|1 -
 drivers/gpu/drm/radeon/radeon_device.c |6 ++
 drivers/gpu/drm/radeon/radeon_gart.c   |   29 +---
 drivers/gpu/drm/radeon/radeon_ttm.c|   83 +--
 4 files changed, 84 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index e3170c7..63257ba 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -332,7 +332,6 @@ struct radeon_gart {
union radeon_gart_table table;
struct page **pages;
dma_addr_t  *pages_addr;
-   bool *ttm_alloced;
bool ready;
 };

diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index c33bc91..7c31321 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -765,8 +765,14 @@ int radeon_device_init(struct radeon_device *rdev,
r = pci_set_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits));
if (r) {
rdev->need_dma32 = true;
+   dma_bits = 32;
printk(KERN_WARNING "radeon: No suitable DMA available.\n");
}
+   r = pci_set_consistent_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits));
+   if (r) {
+   pci_set_consistent_dma_mask(rdev->pdev, DMA_BIT_MASK(32));
+   printk(KERN_WARNING "radeon: No coherent DMA available.\n");
+   }

/* Registers mapping */
/* TODO: block userspace mapping of io register */
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index fdc3a9a..18f496c 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -149,9 +149,6 @@ void radeon_gart_unbind(struct radeon_device *rdev, 
unsigned offset,
p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
for (i = 0; i < pages; i++, p++) {
if (rdev->gart.pages[p]) {
-   if (!rdev->gart.ttm_alloced[p])
-   pci_unmap_page(rdev->pdev, 
rdev->gart.pages_addr[p],
-   PAGE_SIZE, 
PCI_DMA_BIDIRECTIONAL);
rdev->gart.pages[p] = NULL;
rdev->gart.pages_addr[p] = rdev->dummy_page.addr;
page_base = rdev->gart.pages_addr[p];
@@ -181,23 +178,7 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned 
offset,
p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);

for (i = 0; i < pages; i++, p++) {
-   /* we reverted the patch using dma_addr in TTM for now but this
-* code stops building on alpha so just comment it out for now 
*/
-   if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */
-   rdev->gart.ttm_alloced[p] = true;
-   rdev->gart.pages_addr[p] = dma_addr[i];
-   } else {
-   /* we need to support large memory configurations */
-   /* assume that unbind have already been call on the 
range */
-   rdev->gart.pages_addr[p] = pci_map_page(rdev->pdev, 
pagelist[i],
-   0, PAGE_SIZE,
-   PCI_DMA_BIDIRECTIONAL);
-   if (pci_dma_mapping_error(rdev->pdev, 
rdev->gart.pages_addr[p])) {
-   /* FIXME: failed to map page (return -ENOMEM?) 
*/
-   radeon_gart_unbind(rdev, offset, pages);
-   return -ENOMEM;
-   }
-   }
+   rdev->gart.pages_addr[p] = dma_addr[i];
rdev->gart.pages[p] = pagelist[i];
page_base = rdev->gart.pages_addr[p];
for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
@@ -259,12 +240,6 @@ int radeon_gart_init(struct radeon_device *rdev)
radeon_gart_fini(rdev);
return -ENOMEM;
}
-   rdev->gart.ttm_alloced = kzalloc(sizeof(bool) *
-rdev->gart.num_cpu_pages, GFP_KERNEL);
-   if (rdev->gart.ttm_alloced == NULL) {
-   radeon_gart_fini(rdev);
-   return -ENOMEM;
-   }
/* set GART entry to point to the dummy page by default */

[PATCH 10/13] drm/ttm: provide dma aware ttm page pool code V7

2011-11-10 Thread j.gli...@gmail.com
From: Konrad Rzeszutek Wilk 

In the TTM world the pages for the graphics drivers are kept in three
different pools: write combined, uncached, and cached (write-back). When the
pages are used by the graphics driver, the graphics adapter programs them in
via its built-in MMU (or AGP). The programming requires the virtual address
(from the graphics adapter's perspective) and the physical address (either
system RAM or the memory on the card), which is obtained using the pci_map_*
calls (which do the virtual-to-physical - or bus address - translation).
During the graphics application's "life" those pages can be shuffled around,
swapped out to disk, moved from VRAM to system RAM or vice versa. This all
works with the existing TTM pool code - except when we want to use the
software IOTLB (SWIOTLB) code to "map" the physical addresses to the graphics
adapter MMU. We end up programming the bounce buffer's physical address
instead of the TTM pool memory's, and get a non-working driver.
There are two solutions:
1) using the DMA API to allocate pages that are screened by the DMA API, or
2) using the pci_sync_* calls to copy the pages from the bounce-buffer and back.

This patch fixes the issue by allocating pages using the DMA API. The second
option is viable - but it has performance drawbacks and potential correctness
issues - think of a write-combined page being bounced (SWIOTLB->TTM): WC is
set on the TTM page, and the copy from the SWIOTLB does not make it to the
TTM page until the page has been recycled in the pool (and used by another
application).

The bounce buffer does not get activated often - only in cases where we have
a 32-bit capable card and we want to use a page that is allocated above the
4GB limit. The bounce buffer offers the solution of copying the contents
of that page to a location below 4GB and then back when the operation has
been completed (or vice versa). This is done using the 'pci_sync_*' calls.
Note: If you look carefully enough in the existing TTM page pool code you
will notice the GFP_DMA32 flag is used - which should guarantee that the
provided page is under 4GB. That is certainly the case, except it gets
ignored in two situations:
 - If the user specifies 'swiotlb=force', which bounces _every_ page.
 - If the user is running a Xen PV Linux guest (which uses the SWIOTLB, and
   the underlying PFNs aren't necessarily under 4GB).

To avoid this extra copying, the other option is to allocate the pages using
the DMA API, so that there is no need to map the page and perform the
expensive 'pci_sync_*' calls.

For this, the DMA-API-capable TTM pool requires the 'struct device' to
properly call the DMA API. It also has to track the virtual and bus address
of each page being handed out in case it ends up being swapped out or
de-allocated - to make sure it is de-allocated using the proper 'struct
device'.

Implementation-wise, the code keeps two lists: one that is attached to the
'struct device' (via the dev->dma_pools list) and a global one to be used
when the 'struct device' is unavailable (think shrinker code). The global
list can iterate over all of the 'struct device's and their associated
dma_pools. The list in dev->dma_pools can only iterate that device's
dma_pools.
[ASCII diagram elided: two pools associated with the device (WC and UC),
reachable via dev->dma_pools, and the parallel global list containing the
'struct dev' and 'struct dma_pool' entries]
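
In code, the bookkeeping described above looks roughly like this (a
sketch; field names are illustrative, inferred from the description
rather than copied from the patch):

struct dma_pool {
	struct list_head pools;		/* linked into dev->dma_pools */
	enum pool_type	 type;		/* WC/UC/cached (+ dma32 variants) */
	struct device	*dev;		/* device whose DMA API backs the pages */
	struct list_head free_list;	/* pages waiting for reuse */
};

struct device_pools {
	struct list_head pools;		/* global list, walked by the shrinker */
	struct device	*dev;
	struct dma_pool	*pool;
};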

The maximum amount of dma pools a device can have is six: write-combined,
uncached, and cached; then there are the DMA32 variants which are:
write-combined dma32, uncached dma32, and cached dma32.

Currently this code only gets activated when any variant of the SWIOTLB IOMMU
code is running (Intel without VT-d, AMD without GART, IBM Calgary and Xen PV
with PCI devices).

Tested-by: Michel Dänzer 
[v1: Using swiotlb_nr_tbl instead of swiotlb_enabled]
[v2: Major overhaul - added 'inuse_list' to separate used from in-use pages
and reordered the lists to get better performance.]
[v3: Added comments/and some logic based on review, Added Jerome tag]
[v4: rebase on top of ttm_tt & ttm_backend merge]
[v5: rebase on top of ttm memory accounting overhaul]
[v6: New rebase on top of more memory accounting changes]
[v7: well rebase on top of no memory accounting changes]
Reviewed-by: Jerome Glisse 
Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/gpu/drm/ttm/Makefile |4 +
 

[PATCH 09/13] drm/ttm: introduce callback for ttm_tt populate & unpopulate V4

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

Move the page allocation and freeing to driver callbacks and
provide ttm code helper functions for those.

The most intrusive change is the fact that we now only fully
populate an object; this simplifies some of the code designed around
the page fault design.
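
Concretely, the new driver hooks look like this (signatures as used by
the hunks below):

struct ttm_bo_driver {
	/* ... */
	/* allocate and map all pages backing @ttm in one go */
	int (*ttm_tt_populate)(struct ttm_tt *ttm);
	/* free all pages backing @ttm */
	void (*ttm_tt_unpopulate)(struct ttm_tt *ttm);
};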

V2 Rebase on top of memory accounting overhaul
V3 New rebase on top of more memory accounting changes
V4 Rebase on top of no memory accounting changes (where/when is my
   delorean when i need it ?)

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/nouveau/nouveau_bo.c   |3 +
 drivers/gpu/drm/radeon/radeon_ttm.c|2 +
 drivers/gpu/drm/ttm/ttm_bo_util.c  |   31 ++-
 drivers/gpu/drm/ttm/ttm_bo_vm.c|9 +++-
 drivers/gpu/drm/ttm/ttm_page_alloc.c   |   57 
 drivers/gpu/drm/ttm/ttm_tt.c   |   91 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c |3 +
 include/drm/ttm/ttm_bo_driver.h|   41 --
 include/drm/ttm/ttm_page_alloc.h   |   18 ++
 9 files changed, 135 insertions(+), 120 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index b060fa4..f19ac42 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -28,6 +28,7 @@
  */

 #include "drmP.h"
+#include "ttm/ttm_page_alloc.h"

 #include "nouveau_drm.h"
 #include "nouveau_drv.h"
@@ -1050,6 +1051,8 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct 
nouveau_fence *fence)

 struct ttm_bo_driver nouveau_bo_driver = {
.ttm_tt_create = &nouveau_ttm_tt_create,
+   .ttm_tt_populate = &ttm_pool_populate,
+   .ttm_tt_unpopulate = &ttm_pool_unpopulate,
.invalidate_caches = nouveau_bo_invalidate_caches,
.init_mem_type = nouveau_bo_init_mem_type,
.evict_flags = nouveau_bo_evict_flags,
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index 53ff62b..13d5996 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -584,6 +584,8 @@ struct ttm_tt *radeon_ttm_tt_create(struct ttm_bo_device 
*bdev,

 static struct ttm_bo_driver radeon_bo_driver = {
.ttm_tt_create = &radeon_ttm_tt_create,
+   .ttm_tt_populate = &ttm_pool_populate,
+   .ttm_tt_unpopulate = &ttm_pool_unpopulate,
.invalidate_caches = &radeon_invalidate_caches,
.init_mem_type = &radeon_init_mem_type,
.evict_flags = &radeon_evict_flags,
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c 
b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 082fcae..60f204d 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -244,7 +244,7 @@ static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void 
*src,
unsigned long page,
pgprot_t prot)
 {
-   struct page *d = ttm_tt_get_page(ttm, page);
+   struct page *d = ttm->pages[page];
void *dst;

if (!d)
@@ -281,7 +281,7 @@ static int ttm_copy_ttm_io_page(struct ttm_tt *ttm, void 
*dst,
unsigned long page,
pgprot_t prot)
 {
-   struct page *s = ttm_tt_get_page(ttm, page);
+   struct page *s = ttm->pages[page];
void *src;

if (!s)
@@ -342,6 +342,12 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
if (old_iomap == NULL && ttm == NULL)
goto out2;

+   if (ttm->state == tt_unpopulated) {
+   ret = ttm->bdev->driver->ttm_tt_populate(ttm);
+   if (ret)
+   goto out1;
+   }
+
add = 0;
dir = 1;

@@ -502,10 +508,16 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
 {
struct ttm_mem_reg *mem = &bo->mem;
pgprot_t prot;
struct ttm_tt *ttm = bo->ttm;
-   struct page *d;
-   int i;
+   int ret;

BUG_ON(!ttm);
+
+   if (ttm->state == tt_unpopulated) {
+   ret = ttm->bdev->driver->ttm_tt_populate(ttm);
+   if (ret)
+   return ret;
+   }
+
if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED)) {
/*
 * We're mapping a single page, and the desired
@@ -513,18 +525,9 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
 */

map->bo_kmap_type = ttm_bo_map_kmap;
-   map->page = ttm_tt_get_page(ttm, start_page);
+   map->page = ttm->pages[start_page];
map->virtual = kmap(map->page);
} else {
-   /*
-* Populate the part we're mapping;
-*/
-   for (i = start_page; i < start_page + num_pages; ++i) {
-   d = ttm_tt_get_page(ttm, i);
-   if (!d)
-   return -ENOMEM;
-   }
-
/*
 * We need to use vmap to get the desired page protection
 * or to make 

[PATCH 08/13] drm/ttm: merge ttm_backend and ttm_tt V4

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

ttm_backend will only and always exist together with a ttm_tt, and a
ttm_tt is only of interest when bound to a backend. Thus, to
avoid code & data duplication between the two, merge them.
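
The merge relies on the embed-and-cast idiom visible in the hunks below:
a driver backend embeds struct ttm_tt as its first member, so the core
hands out a struct ttm_tt * and the driver casts it back to its own type
(sketch):

struct nouveau_sgdma_be {
	struct ttm_tt ttm;	/* must be first: the core only sees &nvbe->ttm */
	struct drm_device *dev;
	u64 offset;
};

/* in a driver callback: recover the backend from the embedded ttm_tt */
struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm;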

V2 Rebase on top of memory accounting overhaul
V3 Rebase on top of more memory accounting changes
V4 Rebase on top of no memory accounting changes (where/when is my
   delorean when i need it ?)

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/nouveau/nouveau_bo.c|   14 ++-
 drivers/gpu/drm/nouveau/nouveau_drv.h   |5 +-
 drivers/gpu/drm/nouveau/nouveau_sgdma.c |  188 --
 drivers/gpu/drm/radeon/radeon_ttm.c |  222 ---
 drivers/gpu/drm/ttm/ttm_agp_backend.c   |   88 +
 drivers/gpu/drm/ttm/ttm_bo.c|9 +-
 drivers/gpu/drm/ttm/ttm_tt.c|   59 ++---
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c  |   66 +++--
 include/drm/ttm/ttm_bo_driver.h |  104 ++-
 9 files changed, 295 insertions(+), 460 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 7226f41..b060fa4 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -343,8 +343,10 @@ nouveau_bo_wr32(struct nouveau_bo *nvbo, unsigned index, 
u32 val)
*mem = val;
 }

-static struct ttm_backend *
-nouveau_bo_create_ttm_backend_entry(struct ttm_bo_device *bdev)
+static struct ttm_tt *
+nouveau_ttm_tt_create(struct ttm_bo_device *bdev,
+ unsigned long size, uint32_t page_flags,
+ struct page *dummy_read_page)
 {
struct drm_nouveau_private *dev_priv = nouveau_bdev(bdev);
struct drm_device *dev = dev_priv->dev;
@@ -352,11 +354,13 @@ nouveau_bo_create_ttm_backend_entry(struct ttm_bo_device 
*bdev)
switch (dev_priv->gart_info.type) {
 #if __OS_HAS_AGP
case NOUVEAU_GART_AGP:
-   return ttm_agp_backend_init(bdev, dev->agp->bridge);
+   return ttm_agp_tt_create(bdev, dev->agp->bridge,
+size, page_flags, dummy_read_page);
 #endif
case NOUVEAU_GART_PDMA:
case NOUVEAU_GART_HW:
-   return nouveau_sgdma_init_ttm(dev);
+   return nouveau_sgdma_create_ttm(bdev, size, page_flags,
+   dummy_read_page);
default:
NV_ERROR(dev, "Unknown GART type %d\n",
 dev_priv->gart_info.type);
@@ -1045,7 +1049,7 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct 
nouveau_fence *fence)
 }

 struct ttm_bo_driver nouveau_bo_driver = {
-   .create_ttm_backend_entry = nouveau_bo_create_ttm_backend_entry,
+   .ttm_tt_create = &nouveau_ttm_tt_create,
.invalidate_caches = nouveau_bo_invalidate_caches,
.init_mem_type = nouveau_bo_init_mem_type,
.evict_flags = nouveau_bo_evict_flags,
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h 
b/drivers/gpu/drm/nouveau/nouveau_drv.h
index 29837da..0c53e39 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -1000,7 +1000,10 @@ extern int nouveau_sgdma_init(struct drm_device *);
 extern void nouveau_sgdma_takedown(struct drm_device *);
 extern uint32_t nouveau_sgdma_get_physical(struct drm_device *,
   uint32_t offset);
-extern struct ttm_backend *nouveau_sgdma_init_ttm(struct drm_device *);
+extern struct ttm_tt *nouveau_sgdma_create_ttm(struct ttm_bo_device *bdev,
+  unsigned long size,
+  uint32_t page_flags,
+  struct page *dummy_read_page);

 /* nouveau_debugfs.c */
 #if defined(CONFIG_DRM_NOUVEAU_DEBUG)
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c 
b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index b75258a..bc2ab90 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -8,44 +8,23 @@
 #define NV_CTXDMA_PAGE_MASK  (NV_CTXDMA_PAGE_SIZE - 1)

 struct nouveau_sgdma_be {
-   struct ttm_backend backend;
+   struct ttm_tt ttm;
struct drm_device *dev;
-
-   dma_addr_t *pages;
-   unsigned nr_pages;
-   bool unmap_pages;
-
u64 offset;
-   bool bound;
 };

 static int
-nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages,
-  struct page **pages, struct page *dummy_read_page,
-  dma_addr_t *dma_addrs)
+nouveau_sgdma_dma_map(struct ttm_tt *ttm)
 {
-   struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be;
+   struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm;
struct drm_device *dev = nvbe->dev;
int i;

-   NV_DEBUG(nvbe->dev, "num_pages = %ld\n", num_pages);
-
-   nvbe->pages = dma_addrs;
-   nvbe->nr_pages = 

[PATCH 07/13] drm/ttm: page allocation use page array instead of list

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

Use the ttm_tt pages array for page allocations, and move the list
unwinding into the page allocation functions.
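
After this patch the allocator entry points take the caller's page array
directly (signatures as they appear in the hunks below):

int ttm_get_pages(struct page **pages, int flags,
		  enum ttm_caching_state cstate, unsigned npages,
		  dma_addr_t *dma_address);
void ttm_put_pages(struct page **pages, unsigned npages, int flags,
		   enum ttm_caching_state cstate, dma_addr_t *dma_address);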

Signed-off-by: Jerome Glisse 
---
 drivers/gpu/drm/ttm/ttm_page_alloc.c |   85 +-
 drivers/gpu/drm/ttm/ttm_tt.c |   36 +++
 include/drm/ttm/ttm_page_alloc.h |8 ++--
 3 files changed, 63 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c 
b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 727e93d..0f3e6d2 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -619,8 +619,10 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool 
*pool,
  * @return count of pages still required to fulfill the request.
  */
 static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool,
-   struct list_head *pages, int ttm_flags,
-   enum ttm_caching_state cstate, unsigned count)
+   struct list_head *pages,
+   int ttm_flags,
+   enum ttm_caching_state cstate,
+   unsigned count)
 {
unsigned long irq_flags;
struct list_head *p;
@@ -664,13 +666,15 @@ out:
  * On success pages list will hold count number of correctly
  * cached pages.
  */
-int ttm_get_pages(struct list_head *pages, int flags,
- enum ttm_caching_state cstate, unsigned count,
+int ttm_get_pages(struct page **pages, int flags,
+ enum ttm_caching_state cstate, unsigned npages,
  dma_addr_t *dma_address)
 {
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
+   struct list_head plist;
struct page *p = NULL;
gfp_t gfp_flags = GFP_USER;
+   unsigned count;
int r;

/* set zero flag for page allocation if required */
@@ -684,7 +688,7 @@ int ttm_get_pages(struct list_head *pages, int flags,
else
gfp_flags |= GFP_HIGHUSER;

-   for (r = 0; r < count; ++r) {
+   for (r = 0; r < npages; ++r) {
p = alloc_page(gfp_flags);
if (!p) {

@@ -693,85 +697,100 @@ int ttm_get_pages(struct list_head *pages, int flags,
return -ENOMEM;
}

-   list_add(&p->lru, pages);
+   pages[r] = p;
}
return 0;
}

-
/* combine zero flag to pool flags */
gfp_flags |= pool->gfp_flags;

/* First we take pages from the pool */
-   count = ttm_page_pool_get_pages(pool, pages, flags, cstate, count);
+   INIT_LIST_HEAD(&plist);
+   npages = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);
+   count = 0;
+   list_for_each_entry(p, &plist, lru) {
+   pages[count++] = p;
+   }

/* clear the pages coming from the pool if requested */
if (flags & TTM_PAGE_FLAG_ZERO_ALLOC) {
-   list_for_each_entry(p, pages, lru) {
+   list_for_each_entry(p, &plist, lru) {
clear_page(page_address(p));
}
}

/* If pool didn't have enough pages allocate new one. */
-   if (count > 0) {
+   if (npages > 0) {
/* ttm_alloc_new_pages doesn't reference pool so we can run
 * multiple requests in parallel.
 **/
-   r = ttm_alloc_new_pages(pages, gfp_flags, flags, cstate, count);
+   INIT_LIST_HEAD(&plist);
+   r = ttm_alloc_new_pages(&plist, gfp_flags, flags, cstate, npages);
+   list_for_each_entry(p, &plist, lru) {
+   pages[count++] = p;
+   }
if (r) {
/* If there is any pages in the list put them back to
 * the pool. */
printk(KERN_ERR TTM_PFX
   "Failed to allocate extra pages "
   "for large request.");
-   ttm_put_pages(pages, 0, flags, cstate, NULL);
+   ttm_put_pages(pages, count, flags, cstate, NULL);
return r;
}
}

-
return 0;
 }

 /* Put all pages in pages list to correct pool to wait for reuse */
-void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
+void ttm_put_pages(struct page **pages, unsigned npages, int flags,
   enum ttm_caching_state cstate, dma_addr_t *dma_address)
 {
unsigned long irq_flags;
struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
-   struct page *p, *tmp;
+   unsigned i;

if (pool == NULL) {
/* No pool for this memory type so free the pages */
-
-   list_for_each_entry_safe(p, tmp, pages, lru) {
- 

[PATCH 06/13] drm/ttm: test for dma_address array allocation failure

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

Signed-off-by: Jerome Glisse 
Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Thomas Hellstrom 
---
 drivers/gpu/drm/ttm/ttm_tt.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 3fb4c6d..aceecb5 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -319,7 +319,7 @@ struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, 
unsigned long size,
ttm->dummy_read_page = dummy_read_page;

ttm_tt_alloc_page_directory(ttm);
-   if (!ttm->pages) {
+   if (!ttm->pages || !ttm->dma_address) {
ttm_tt_destroy(ttm);
printk(KERN_ERR TTM_PFX "Failed allocating page table\n");
return NULL;
-- 
1.7.7.1



[PATCH 05/13] drm/ttm: use ttm put pages function to properly restore cache attribute

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

On failure we need to make sure the page we free has the wb cache
attribute. Do this by calling the proper ttm page helper function.

Signed-off-by: Jerome Glisse 
Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Thomas Hellstrom 
---
 drivers/gpu/drm/ttm/ttm_tt.c |5 -
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 8b7a6d0..3fb4c6d 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -89,7 +89,10 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, 
int index)
}
return p;
 out_err:
-   put_page(p);
+   INIT_LIST_HEAD(&h);
+   list_add(&p->lru, &h);
+   ttm_put_pages(&h, 1, ttm->page_flags,
+ ttm->caching_state, &ttm->dma_address[index]);
return NULL;
 }

-- 
1.7.7.1



[PATCH 04/13] drm/ttm: remove unused backend flags field

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

This field is not used by any of the drivers, so just drop it.

Signed-off-by: Jerome Glisse 
Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Thomas Hellstrom 
---
 drivers/gpu/drm/radeon/radeon_ttm.c |1 -
 include/drm/ttm/ttm_bo_driver.h |2 --
 2 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index 0b5468b..97c76ae 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -787,7 +787,6 @@ struct ttm_backend *radeon_ttm_backend_create(struct 
radeon_device *rdev)
return NULL;
}
gtt->backend.bdev = &rdev->mman.bdev;
-   gtt->backend.flags = 0;
gtt->backend.func = &radeon_backend_func;
gtt->rdev = rdev;
gtt->pages = NULL;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 9da182b..6d17140 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -106,7 +106,6 @@ struct ttm_backend_func {
  * struct ttm_backend
  *
  * @bdev: Pointer to a struct ttm_bo_device.
- * @flags: For driver use.
  * @func: Pointer to a struct ttm_backend_func that describes
  * the backend methods.
  *
@@ -114,7 +113,6 @@ struct ttm_backend_func {

 struct ttm_backend {
struct ttm_bo_device *bdev;
-   uint32_t flags;
struct ttm_backend_func *func;
 };

-- 
1.7.7.1



[PATCH 03/13] drm/ttm: remove split btw highmen and lowmem page

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

The split between highmem and lowmem pages was rendered useless by the
pool code. Remove it. Note: further cleanup would change the
ttm page allocation helpers to actually take an array instead
of relying on a list; this could drastically reduce the number of
function calls in the common case of allocating a whole buffer.

Signed-off-by: Jerome Glisse 
Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Thomas Hellstrom 
---
 drivers/gpu/drm/ttm/ttm_tt.c|   11 ++-
 include/drm/ttm/ttm_bo_driver.h |7 ---
 2 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 82a1161..8b7a6d0 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -69,7 +69,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int 
index)
struct ttm_mem_global *mem_glob = ttm->glob->mem_glob;
int ret;

-   while (NULL == (p = ttm->pages[index])) {
+   if (NULL == (p = ttm->pages[index])) {

INIT_LIST_HEAD(&h);

@@ -85,10 +85,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, 
int index)
if (unlikely(ret != 0))
goto out_err;

-   if (PageHighMem(p))
-   ttm->pages[--ttm->first_himem_page] = p;
-   else
-   ttm->pages[++ttm->last_lomem_page] = p;
+   ttm->pages[index] = p;
}
return p;
 out_err:
@@ -270,8 +267,6 @@ static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm)
ttm_put_pages(&h, count, ttm->page_flags, ttm->caching_state,
  ttm->dma_address);
ttm->state = tt_unpopulated;
-   ttm->first_himem_page = ttm->num_pages;
-   ttm->last_lomem_page = -1;
 }

 void ttm_tt_destroy(struct ttm_tt *ttm)
@@ -315,8 +310,6 @@ struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, 
unsigned long size,

ttm->glob = bdev->glob;
ttm->num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
-   ttm->first_himem_page = ttm->num_pages;
-   ttm->last_lomem_page = -1;
ttm->caching_state = tt_cached;
ttm->page_flags = page_flags;

diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 37527d6..9da182b 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -136,11 +136,6 @@ enum ttm_caching_state {
  * @dummy_read_page: Page to map where the ttm_tt page array contains a NULL
  * pointer.
  * @pages: Array of pages backing the data.
- * @first_himem_page: Himem pages are put last in the page array, which
- * enables us to run caching attribute changes on only the first part
- * of the page array containing lomem pages. This is the index of the
- * first himem page.
- * @last_lomem_page: Index of the last lomem page in the page array.
  * @num_pages: Number of pages in the page array.
  * @bdev: Pointer to the current struct ttm_bo_device.
  * @be: Pointer to the ttm backend.
@@ -157,8 +152,6 @@ enum ttm_caching_state {
 struct ttm_tt {
struct page *dummy_read_page;
struct page **pages;
-   long first_himem_page;
-   long last_lomem_page;
uint32_t page_flags;
unsigned long num_pages;
struct ttm_bo_global *glob;
-- 
1.7.7.1



[PATCH 02/13] drm/ttm: remove userspace backed ttm object support

2011-11-10 Thread j.gli...@gmail.com
From: Jerome Glisse 

This was never used by any of the drivers; properly using userspace
pages for a bo would need more code (vma interaction, mostly). Remove
this dead code in preparation of the ttm_tt & backend merge.

Signed-off-by: Jerome Glisse 
Reviewed-by: Konrad Rzeszutek Wilk 
Reviewed-by: Thomas Hellstrom 
---
 drivers/gpu/drm/ttm/ttm_bo.c|   22 
 drivers/gpu/drm/ttm/ttm_tt.c|  105 +--
 include/drm/ttm/ttm_bo_api.h|5 --
 include/drm/ttm/ttm_bo_driver.h |   24 -
 4 files changed, 1 insertions(+), 155 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 617b646..4bde335 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -342,22 +342,6 @@ static int ttm_bo_add_ttm(struct ttm_buffer_object *bo, 
bool zero_alloc)
if (unlikely(bo->ttm == NULL))
ret = -ENOMEM;
break;
-   case ttm_bo_type_user:
-   bo->ttm = ttm_tt_create(bdev, bo->num_pages << PAGE_SHIFT,
-   page_flags | TTM_PAGE_FLAG_USER,
-   glob->dummy_read_page);
-   if (unlikely(bo->ttm == NULL)) {
-   ret = -ENOMEM;
-   break;
-   }
-
-   ret = ttm_tt_set_user(bo->ttm, current,
- bo->buffer_start, bo->num_pages);
-   if (unlikely(ret != 0)) {
-   ttm_tt_destroy(bo->ttm);
-   bo->ttm = NULL;
-   }
-   break;
default:
printk(KERN_ERR TTM_PFX "Illegal buffer object type\n");
ret = -EINVAL;
@@ -907,16 +891,12 @@ static uint32_t ttm_bo_select_caching(struct 
ttm_mem_type_manager *man,
 }

 static bool ttm_bo_mt_compatible(struct ttm_mem_type_manager *man,
-bool disallow_fixed,
 uint32_t mem_type,
 uint32_t proposed_placement,
 uint32_t *masked_placement)
 {
uint32_t cur_flags = ttm_bo_type_flags(mem_type);

-   if ((man->flags & TTM_MEMTYPE_FLAG_FIXED) && disallow_fixed)
-   return false;
-
if ((cur_flags & proposed_placement & TTM_PL_MASK_MEM) == 0)
return false;

@@ -961,7 +941,6 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo,
man = &bdev->man[mem_type];

type_ok = ttm_bo_mt_compatible(man,
-   bo->type == ttm_bo_type_user,
mem_type,
placement->placement[i],
&cur_flags);
@@ -1009,7 +988,6 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo,
if (!man->has_type)
continue;
if (!ttm_bo_mt_compatible(man,
-   bo->type == ttm_bo_type_user,
mem_type,
placement->busy_placement[i],
&cur_flags))
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 58c271e..82a1161 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -62,43 +62,6 @@ static void ttm_tt_free_page_directory(struct ttm_tt *ttm)
ttm->dma_address = NULL;
 }

-static void ttm_tt_free_user_pages(struct ttm_tt *ttm)
-{
-   int write;
-   int dirty;
-   struct page *page;
-   int i;
-   struct ttm_backend *be = ttm->be;
-
-   BUG_ON(!(ttm->page_flags & TTM_PAGE_FLAG_USER));
-   write = ((ttm->page_flags & TTM_PAGE_FLAG_WRITE) != 0);
-   dirty = ((ttm->page_flags & TTM_PAGE_FLAG_USER_DIRTY) != 0);
-
-   if (be)
-   be->func->clear(be);
-
-   for (i = 0; i < ttm->num_pages; ++i) {
-   page = ttm->pages[i];
-   if (page == NULL)
-   continue;
-
-   if (page == ttm->dummy_read_page) {
-   BUG_ON(write);
-   continue;
-   }
-
-   if (write && dirty && !PageReserved(page))
-   set_page_dirty_lock(page);
-
-   ttm->pages[i] = NULL;
-   ttm_mem_global_free(ttm->glob->mem_glob, PAGE_SIZE);
-   put_page(page);
-   }
-   ttm->state = tt_unpopulated;
-   ttm->first_himem_page = ttm->num_pages;
-   ttm->last_lomem_page = -1;
-}
-
 static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index)
 {
struct page *p;
@@ -325,10 +288,7 @@ void ttm_tt_destroy(struct ttm_tt *ttm)
}

if (likely(ttm->pages != NULL)) {
-   if (ttm->page_flags & 

[PATCH 01/13] swiotlb: Expose swiotlb_nr_tlb function to modules

2011-11-10 Thread j.gli...@gmail.com
From: Konrad Rzeszutek Wilk 

As a mechanism to detect whether SWIOTLB is enabled or not.
We also fix the spelling - it was swioltb instead of
swiotlb.

CC: FUJITA Tomonori 
[v1: Ripped out swiotlb_enabled]
Signed-off-by: Konrad Rzeszutek Wilk 
---
 drivers/xen/swiotlb-xen.c |2 +-
 include/linux/swiotlb.h   |2 +-
 lib/swiotlb.c |5 +++--
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index c984768..c50fb0b 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -152,7 +152,7 @@ void __init xen_swiotlb_init(int verbose)
char *m = NULL;
unsigned int repeat = 3;

-   nr_tbl = swioltb_nr_tbl();
+   nr_tbl = swiotlb_nr_tbl();
if (nr_tbl)
xen_io_tlb_nslabs = nr_tbl;
else {
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 445702c..e872526 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -24,7 +24,7 @@ extern int swiotlb_force;

 extern void swiotlb_init(int verbose);
 extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int 
verbose);
-extern unsigned long swioltb_nr_tbl(void);
+extern unsigned long swiotlb_nr_tbl(void);

 /*
  * Enumeration for sync targets
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 99093b3..058935e 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -110,11 +110,11 @@ setup_io_tlb_npages(char *str)
 __setup("swiotlb=", setup_io_tlb_npages);
 /* make io_tlb_overflow tunable too? */

-unsigned long swioltb_nr_tbl(void)
+unsigned long swiotlb_nr_tbl(void)
 {
return io_tlb_nslabs;
 }
-
+EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
  volatile void *address)
@@ -321,6 +321,7 @@ void __init swiotlb_free(void)
free_bootmem_late(__pa(io_tlb_start),
  PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
}
+   io_tlb_nslabs = 0;
 }

 static int is_swiotlb_buffer(phys_addr_t paddr)
-- 
1.7.7.1



ttm: merge ttm_backend & ttm_tt, introduce ttm dma allocator V4

2011-11-10 Thread j.gli...@gmail.com
So I squeezed everything together to avoid any memory accounting mess;
it seems to work OK so far.

Cheers,
Jerome



Linux 3.2-rc1

2011-11-10 Thread Wu Fengguang
Hi Nick,

On Wed, Nov 09, 2011 at 03:40:19PM +0800, Takashi Iwai wrote:
> At Tue, 8 Nov 2011 12:23:30 -0800,
> Linus Torvalds wrote:
> > 
> > Hmm, I don't know what caused this to trigger, but I'm adding both the
> > i915 people and the HDA people to the cc, and they can fight to the
> > death about this in the HDMI Thunderdome.
> 
> It must be the new addition of ELD-passing code.
> 
> Fengguang, can the drm or i915 driver check whether the ELD has changed
> or not?  Writing the ELD each time, even when unchanged, confuses the
> audio side, as if the monitor were hotplugged.

The attached patch tests OK and prevents the extra hotplug events.

However it has one side effect: when the HDMI monitor is hot-removed,
the ELD remains valid. I need to find a way to test for the
presence of the monitor and handle that case as well. When all that is
done, I'll submit the patches together for review.

Thanks,
Fengguang
-- next part --
Subject: drm/i915: don't trigger hotplug events on unchanged ELD
Date: Thu Nov 10 17:48:49 CST 2011

The ELD may or may not change when switching the video mode.
If unchanged, don't trigger hotplug events to the HDMI audio driver.

This avoids disturbing the user with repeated printks.

Signed-off-by: Wu Fengguang 
---
 drivers/gpu/drm/i915/intel_display.c |   51 ++---
 1 file changed, 46 insertions(+), 5 deletions(-)

--- linux.orig/drivers/gpu/drm/i915/intel_display.c 2011-11-10 
17:23:04.0 +0800
+++ linux/drivers/gpu/drm/i915/intel_display.c  2011-11-10 17:59:25.0 
+0800
@@ -5811,6 +5811,35 @@ static int intel_crtc_mode_set(struct dr
return ret;
 }

+static bool intel_eld_uptodate(struct drm_connector *connector,
+  int reg_eldv, uint32_t bits_eldv,
+  int reg_elda, uint32_t bits_elda,
+  int reg_edid)
+{
+   struct drm_i915_private *dev_priv = connector->dev->dev_private;
+   uint8_t *eld = connector->eld;
+   uint32_t i;
+
+   i = I915_READ(reg_eldv);
+   i &= bits_eldv;
+
+   if (!eld[0])
+   return !i;
+
+   if (!i)
+   return false;
+
+   i = I915_READ(reg_elda);
+   i &= ~bits_elda;
+   I915_WRITE(reg_elda, i);
+
+   for (i = 0; i < eld[2]; i++)
+   if (I915_READ(reg_edid) != *((uint32_t *)eld + i))
+   return false;
+
+   return true;
+}
+
 static void g4x_write_eld(struct drm_connector *connector,
  struct drm_crtc *crtc)
 {
@@ -5827,6 +5856,12 @@ static void g4x_write_eld(struct drm_con
else
eldv = G4X_ELDV_DEVCTG;

+   if (intel_eld_uptodate(connector,
+  G4X_AUD_CNTL_ST, eldv,
+  G4X_AUD_CNTL_ST, G4X_ELD_ADDR,
+  G4X_HDMIW_HDMIEDID))
+   return;
+
i = I915_READ(G4X_AUD_CNTL_ST);
i &= ~(eldv | G4X_ELD_ADDR);
len = (i >> 9) & 0x1f;  /* ELD buffer size */
@@ -5886,6 +5921,17 @@ static void ironlake_write_eld(struct dr
eldv = GEN5_ELD_VALIDB << ((i - 1) * 4);
}

+   if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
+   DRM_DEBUG_DRIVER("ELD: DisplayPort detected\n");
+   eld[5] |= (1 << 2); /* Conn_Type, 0x1 = DisplayPort */
+   }
+
+   if (intel_eld_uptodate(connector,
+  aud_cntrl_st2, eldv,
+  aud_cntl_st, GEN5_ELD_ADDRESS,
+  hdmiw_hdmiedid))
+   return;
+
i = I915_READ(aud_cntrl_st2);
i &= ~eldv;
I915_WRITE(aud_cntrl_st2, i);
@@ -5893,11 +5939,6 @@ static void ironlake_write_eld(struct dr
if (!eld[0])
return;

-   if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
-   DRM_DEBUG_DRIVER("ELD: DisplayPort detected\n");
-   eld[5] |= (1 << 2); /* Conn_Type, 0x1 = DisplayPort */
-   }
-
i = I915_READ(aud_cntl_st);
i &= ~GEN5_ELD_ADDRESS;
I915_WRITE(aud_cntl_st, i);


[PATCH v3] drm/radeon: Make sure CS mutex is held across GPU reset.

2011-11-10 Thread Michel Dänzer
From: Michel Dänzer 

This was only the case if the GPU reset was triggered from the CS ioctl,
otherwise other processes could happily enter the CS ioctl and wreak havoc
during the GPU reset.

This is a little complicated because the GPU reset can be triggered from the
CS ioctl, in which case we're already holding the mutex, or from other call
paths, in which case we need to lock the mutex. AFAICT the mutex API doesn't
allow recursive locking or finding out the mutex owner, so we need to handle
this with helper functions which allow recursive locking from the same
process.

Signed-off-by: Michel Dänzer 
Reviewed-by: Jerome Glisse 
---

v3: Drop spurious whitespace-only hunk, thanks Jerome for catching that.

 drivers/gpu/drm/radeon/radeon.h|   44 +++-
 drivers/gpu/drm/radeon/radeon_cs.c |   14 +-
 drivers/gpu/drm/radeon/radeon_device.c |   16 ---
 3 files changed, 62 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index c1e056b..fa2ef96 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1151,6 +1151,48 @@ struct r700_vram_scratch {
volatile uint32_t   *ptr;
 };

+
+/*
+ * Mutex which allows recursive locking from the same process.
+ */
+struct radeon_mutex {
+   struct mutexmutex;
+   struct task_struct  *owner;
+   int level;
+};
+
+static inline void radeon_mutex_init(struct radeon_mutex *mutex)
+{
+   mutex_init(&mutex->mutex);
+   mutex->owner = NULL;
+   mutex->level = 0;
+}
+
+static inline void radeon_mutex_lock(struct radeon_mutex *mutex)
+{
+   if (mutex_trylock(&mutex->mutex)) {
+   /* The mutex was unlocked before, so it's ours now */
+   mutex->owner = current;
+   } else if (mutex->owner != current) {
+   /* Another process locked the mutex, take it */
+   mutex_lock(&mutex->mutex);
+   mutex->owner = current;
+   }
+   /* Otherwise the mutex was already locked by this process */
+
+   mutex->level++;
+}
+
+static inline void radeon_mutex_unlock(struct radeon_mutex *mutex)
+{
+   if (--mutex->level > 0)
+   return;
+
+   mutex->owner = NULL;
+   mutex_unlock(&mutex->mutex);
+}
+
+
 /*
  * Core structure, functions and helpers.
  */
@@ -1206,7 +1248,7 @@ struct radeon_device {
struct radeon_gem   gem;
struct radeon_pmpm;
uint32_tbios_scratch[RADEON_BIOS_NUM_SCRATCH];
-   struct mutexcs_mutex;
+   struct radeon_mutex cs_mutex;
struct radeon_wbwb;
struct radeon_dummy_pagedummy_page;
boolgpu_lockup;
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c
index fae00c0..ccaa243 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -222,7 +222,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
struct radeon_cs_chunk *ib_chunk;
int r;

-   mutex_lock(&rdev->cs_mutex);
+   radeon_mutex_lock(&rdev->cs_mutex);
/* initialize parser */
memset(&parser, 0, sizeof(struct radeon_cs_parser));
parser.filp = filp;
@@ -233,14 +233,14 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
if (r) {
DRM_ERROR("Failed to initialize parser !\n");
radeon_cs_parser_fini(&parser, r);
-   mutex_unlock(&rdev->cs_mutex);
+   radeon_mutex_unlock(&rdev->cs_mutex);
return r;
}
r = radeon_ib_get(rdev, &parser.ib);
if (r) {
DRM_ERROR("Failed to get ib !\n");
radeon_cs_parser_fini(&parser, r);
-   mutex_unlock(&rdev->cs_mutex);
+   radeon_mutex_unlock(&rdev->cs_mutex);
return r;
}
r = radeon_cs_parser_relocs(&parser);
@@ -248,7 +248,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
if (r != -ERESTARTSYS)
DRM_ERROR("Failed to parse relocation %d!\n", r);
radeon_cs_parser_fini(&parser, r);
-   mutex_unlock(&rdev->cs_mutex);
+   radeon_mutex_unlock(&rdev->cs_mutex);
return r;
}
/* Copy the packet into the IB, the parser will read from the
@@ -260,14 +260,14 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
if (r || parser.parser_error) {
DRM_ERROR("Invalid command stream !\n");
radeon_cs_parser_fini(&parser, r);
-   mutex_unlock(&rdev->cs_mutex);
+   radeon_mutex_unlock(&rdev->cs_mutex);
return r;
}
r = radeon_cs_finish_pages(&parser);
if (r) {
DRM_ERROR("Invalid command stream !\n");

[PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Jerome Glisse
On Thu, Nov 10, 2011 at 09:05:22PM +0100, Thomas Hellstrom wrote:
> On 11/10/2011 07:05 PM, Jerome Glisse wrote:
> >On Thu, Nov 10, 2011 at 11:27:33AM +0100, Thomas Hellstrom wrote:
> >>On 11/09/2011 09:22 PM, j.glisse at gmail.com wrote:
> >>>From: Jerome Glisse
> >>>
> >>>This is an overhaul of the ttm memory accounting. This tries to keep
> >>>the same global behavior while removing the whole zone concept. It
> >>>keeps a distinction for dma32 so that we make sure that ttm doesn't
> >>>starve the dma32 zone.
> >>>
> >>>There are 3 thresholds for memory allocation:
> >>>- max_mem is the maximum memory the whole ttm infrastructure is
> >>>   going to allow allocation for (exception of system processes, see
> >>>   below)
> >>>- emer_mem is the maximum memory allowed for system processes; this
> >>>   limit is > max_mem
> >>>- swap_limit is the threshold at which point ttm will start to
> >>>   try to swap objects because ttm is getting close to the max_mem
> >>>   limit
> >>>- swap_dma32_limit is the threshold at which point ttm will start
> >>>   swapping objects to try to reduce the pressure on the dma32 zone. Note
> >>>   that we don't specifically target objects to swap, so it might very
> >>>   well free more memory from highmem rather than from dma32
> >>>
> >>>Accounting is done through used_mem & used_dma32_mem, whose sum gives
> >>>the total amount of memory actually accounted by ttm.
> >>>
> >>>Idea is that allocation will fail if (used_mem + used_dma32_mem) >
> >>>max_mem and if swapping fails to make enough room.
> >>>
> >>>The used_dma32_mem can be updated at a later stage, allowing to
> >>>perform the accounting test before allocating a whole batch of pages.
> >>>
> >>Jerome, you're removing a fair amount of functionality here,
> >>without justifying why it could be removed.
> >All this code was overkill.
> 
> [1] I don't agree, and since it's well tested, thought through and
> working, I see no obvious reason to alter it,
> within the context of this patch series unless it's absolutely
> required for the functionality.

Well, one thing I can tell is that it doesn't work on radeon: I pushed
a test to libdrm and here it's the OOM killer that starts doing its
beating. Anyway, I won't alter it. I was just trying to make it work,
i.e. be useful while also being simpler.

> >>Consider a low-end system with 1G of kernel memory and 10G of
> >>highmem. How do we avoid putting stress on the kernel memory? I also
> >>wouldn't be too surprised if DMA32 zones appear in HIGHMEM systems
> >>in the future making the current zone concept good to keep.
> >Right now kernel memory is accounted against all zones so it decreases
> >not only the kernel zone but also the dma32 & highmem zones if present.
> 
> Do you mean that the code is incorrect? In that case, did you
> consider the fact
> that zones may overlap? (Although I admit the name "highmem" might
> be misleading. Should be "total").

Yeah, I am well aware that zones overlap :)

> >Note also that kernel zone in current code == dma32 zone.
> 
> Last time I looked, the highmem split was typically at slightly less
> than 1GB, depending on the hardware and desired setup. I admit that
> was some time ago, but has that really changed? On all archs?
> Furthermore, on !Highmem systems, All pages are in the kernel zone.

I was a bit too focused on my system, where 1G of RAM is wonderland
and 512M is the average. Thanks to AMD I got a system with 8G; I
should use it more.

Cheers,
Jerome


[Bug 28426] hardware cursor corruption with radeon+kms

2011-11-10 Thread bugzilla-dae...@freedesktop.org
https://bugs.freedesktop.org/show_bug.cgi?id=28426

Michel Dänzer  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution||FIXED

--- Comment #13 from Michel Dänzer  2011-11-10 09:46:42 PST ---
(In reply to comment #12)
> Like Jürg Billeter, I haven't seen the problem since switching to Linux 3.1.

So far, so good. Thanks for the updates, guys.

> I don't currently use the affected system as much as I usually do, so it may 
> be
> too early to celebrate, but I guess the bug could be closed as resolved and
> reopened later if need be.

Sounds like a plan. :)



[Bug 42373] Radeon HD 6450 (NI CAICOS) screen corruption on boot

2011-11-10 Thread bugzilla-dae...@freedesktop.org
https://bugs.freedesktop.org/show_bug.cgi?id=42373

--- Comment #13 from Kunal  2011-11-10 09:38:31 PST ---
Created attachment 53375
  --> https://bugs.freedesktop.org/attachment.cgi?id=53375
dmesg log with "amd_iommu=off iommu=off" options added to cmdline

(In reply to comment #12)
> Does booting with following kernel options help
> amd_iommu=off iommu=off

No, it doesn't help in any way.
Attaching dmesg log.



Strange effect with i915 backlight controller

2011-11-10 Thread Takashi Iwai
At Thu, 10 Nov 2011 16:11:29 +0100,
Daniel Mack wrote:
> 
> On 11/08/2011 01:57 AM, Daniel Mack wrote:
> > Didn't get any response yet, hence copying LKML for a broader audience.
> 
> Nobody, really?
> 
> This is a rather annoying regression, as touching the brightness keys
> apparently switches off the whole machine. I'm sure this is trivial to
> fix, I just don't have the insight into this driver and the chipset.

I vaguely remember that bit 0 is invalid on some old chips.
Maybe 915GM is one of them, as it's gen3?  If so, a patch like the
one below may work.


Takashi

---
diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
index 499d4c0..be952d1 100644
--- a/drivers/gpu/drm/i915/intel_panel.c
+++ b/drivers/gpu/drm/i915/intel_panel.c
@@ -249,8 +249,11 @@ static void intel_panel_actually_set_backlight(struct drm_device *dev, u32 level
if (IS_PINEVIEW(dev)) {
tmp &= ~(BACKLIGHT_DUTY_CYCLE_MASK - 1);
level <<= 1;
-   } else
+   } else {
tmp &= ~BACKLIGHT_DUTY_CYCLE_MASK;
+   if (INTEL_INFO(dev)->gen < 4)
+   tmp &= ~1;
+   }
I915_WRITE(BLC_PWM_CTL, tmp | level);
 }
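If that guess is right, the numbers Daniel reported line up with bit 0
exactly. A small standalone check (assuming only that bit 0 of the
duty-cycle value acts as a flag on gen < 4 hardware):

/* Standalone check: 29750 and 29748 (bit 0 clear) work, while 29749
 * (bit 0 set) kills the backlight, matching the bit-0-as-flag theory. */
#include <stdio.h>

int main(void)
{
	unsigned int values[] = { 29750, 29749, 29748 };
	unsigned int i;

	for (i = 0; i < 3; i++)
		printf("%5u = %#06x, bit0 = %u\n",
		       values[i], values[i], values[i] & 1);
	/* 29750 = 0x7436, bit0 = 0  (full brightness)
	 * 29749 = 0x7435, bit0 = 1  (backlight off)
	 * 29748 = 0x7434, bit0 = 0  (works)        */
	return 0;
}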



Strange effect with i915 backlight controller

2011-11-10 Thread Daniel Mack
On 11/08/2011 01:57 AM, Daniel Mack wrote:
> Didn't get any response yet, hence copying LKML for a broader audience.

Nobody, really?

This is a rather annoying regression, as touching the brightness keys
apparently switches off the whole machine. I'm sure this is trivial to
fix, I just don't have the insight into this driver and the chipset.

Any pointer greatly appreciated, and I can test patches.


Thanks,
Daniel



> 
> On 11/04/2011 03:36 PM, Daniel Mack wrote:
>> I'm facing a bug on a Samsung X20 notebook which features an i915
>> chipset (output of 'lspci -v' attached).
>>
>> The effect is that setting the backlight to odd values causes the value
>> to be misinterpreted. Harald Hoyer (cc:) had the same thing on a Netbook
>> (I don't recall which model it was).
>>
>> So this will turn the backlight to full brightness:
>>
>> # cat /sys/class/backlight/intel_backlight/max_brightness
>> 29750
>> # echo 29750 > /sys/class/backlight/intel_backlight/brightness
>>
>> However, writing 29749 will turn the display backlight off, and 29748
>> appears to be the next valid lower value.
>>
>> It seems like the IS_PINEVIEW() branch in
>> drivers/gpu/drm/i915/intel_panel.c:intel_panel_actually_set_backlight()
>> could do the right thing, but this code is written for an entirely
>> different model, right?
>>
>> I can reproduce this on 3.0 and 3.1 vanilla as well as with the current
>> mainline git.
>>
>> Let me know if there is any patch that I can test.
>>
>>
>> Thanks,
>> Daniel
> 



[PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread Andrew Morton
On Thu, 10 Nov 2011 13:15:04 -0800
Mandeep Singh Baines  wrote:

> David Rientjes (rientjes at google.com) wrote:
> > On Mon, 17 Oct 2011, David Rientjes wrote:
> > 
> > > On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
> > > 
> > > > From: Hugh Dickins 
> > > > 
> > > > Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
> > > > we're going to reboot immediately, the user will not be able to see the
> > > > messages anyway, and messing with the video mode may display artifacts,
> > > > and certainly get into several layers of complexity (including mutexes 
> > > > and
> > > > memory allocations) which we shall be much safer to avoid.
> > > > 
> > > > Signed-off-by: Hugh Dickins 
> > > > [ Edited commit message and modified to short-circuit panic_timeout < 0
> > > >   instead of testing panic_timeout >= 0.  -Mandeep ]
> > > > Signed-off-by: Mandeep Singh Baines 
> > > > Cc: Dave Airlie 
> > > > Cc: Andrew Morton 
> > > > Cc: dri-devel at lists.freedesktop.org
> > > 
> > > Acked-by: David Rientjes 
> > > 
> > 
> > Dave, where do we stand on this?  I haven't seen it hit Linus' tree and I 
> > don't see it in git://people.freedesktop.org/~airlied/linux.
> 
> The last status I have is Andrew pulling it into mmotm on 10/18/11.
> 
> Subject: + drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch added to -mm tree
> From: akpm at linux-foundation.org
> Date: Tue, 18 Oct 2011 15:42:46 -0700
> 
> 
> The patch titled
>  Subject: drm: avoid switching to text console if there is no panic timeout
> has been added to the -mm tree.  Its filename is
>  drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch

I need to do another round of sending patches to maintainers.

It's a depressing exercise because the great majority of patches are
simply ignored.  Last time I even added "please don't ignore" to the
email Subject: on the more important ones.  Sigh.

> Where is mmotm hosted these days?

On my disk, until kernel.org ftp access returns.

But I regularly email tarballs to Stephen, so it's all in linux-next. 
The mmotm tree is largely unneeded now - use linux-next to get at -mm
patches.



[PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread Mandeep Singh Baines
David Rientjes (rientjes at google.com) wrote:
> On Mon, 17 Oct 2011, David Rientjes wrote:
> 
> > On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
> > 
> > > From: Hugh Dickins 
> > > 
> > > Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
> > > we're going to reboot immediately, the user will not be able to see the
> > > messages anyway, and messing with the video mode may display artifacts,
> > > and certainly get into several layers of complexity (including mutexes and
> > > memory allocations) which we shall be much safer to avoid.
> > > 
> > > Signed-off-by: Hugh Dickins 
> > > [ Edited commit message and modified to short-circuit panic_timeout < 0
> > >   instead of testing panic_timeout >= 0.  -Mandeep ]
> > > Signed-off-by: Mandeep Singh Baines 
> > > Cc: Dave Airlie 
> > > Cc: Andrew Morton 
> > > Cc: dri-devel at lists.freedesktop.org
> > 
> > Acked-by: David Rientjes 
> > 
> 
> Dave, where do we stand on this?  I haven't seen it hit Linus' tree and I 
> don't see it in git://people.freedesktop.org/~airlied/linux.

The last status I have is Andrew pulling it into mmotm on 10/18/11.

Subject: + drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch added to -mm tree
From: a...@linux-foundation.org
Date: Tue, 18 Oct 2011 15:42:46 -0700


The patch titled
 Subject: drm: avoid switching to text console if there is no panic timeout
has been added to the -mm tree.  Its filename is
 drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch

Where is mmotm hosted these days?

Regards,
Mandeep


[PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Jerome Glisse
On Thu, Nov 10, 2011 at 11:27:33AM +0100, Thomas Hellstrom wrote:
> On 11/09/2011 09:22 PM, j.glisse at gmail.com wrote:
> >From: Jerome Glisse
> >
> >This is an overhaul of the ttm memory accounting. This tries to keep
> >the same global behavior while removing the whole zone concept. It
> >keeps a distinction for dma32 so that we make sure that ttm doesn't
> >starve the dma32 zone.
> >
> >There are 3 thresholds for memory allocation:
> >- max_mem is the maximum memory the whole ttm infrastructure is
> >   going to allow allocation for (exception of system processes, see
> >   below)
> >- emer_mem is the maximum memory allowed for system processes; this
> >   limit is > max_mem
> >- swap_limit is the threshold at which point ttm will start to
> >   try to swap objects because ttm is getting close to the max_mem
> >   limit
> >- swap_dma32_limit is the threshold at which point ttm will start
> >   swapping objects to try to reduce the pressure on the dma32 zone. Note
> >   that we don't specifically target objects to swap, so it might very
> >   well free more memory from highmem rather than from dma32
> >
> >Accounting is done through used_mem & used_dma32_mem, whose sum gives
> >the total amount of memory actually accounted by ttm.
> >
> >Idea is that allocation will fail if (used_mem + used_dma32_mem) >
> >max_mem and if swapping fails to make enough room.
> >
> >The used_dma32_mem can be updated at a later stage, allowing to
> >perform the accounting test before allocating a whole batch of pages.
> >
> 
> Jerome, you're removing a fair amount of functionality here,
> without justifying why it could be removed.

All this code was overkill.

> Consider a low-end system with 1G of kernel memory and 10G of
> highmem. How do we avoid putting stress on the kernel memory? I also
> wouldn't be too surprised if DMA32 zones appear in HIGHMEM systems
> in the future making the current zone concept good to keep.

Right now kernel memory is accounted against all zones so it decreases
not only the kernel zone but also the dma32 & highmem zones if present.
Note also that the kernel zone in the current code == the dma32 zone.

When it comes to the future it looks a lot simpler: it seems everyone
is moving toward more capable and more advanced IOMMUs that can remove
all the restrictions on memory from the device's point of view. I mean,
even ARM is getting more and more advanced IOMMUs. I don't see any
architecture worth supporting not going down that road.

> Also, in effect you move the DOS from *all* zones into the DMA32
> zone and create a race in that multiple simultaneous allocators can
> first pre-allocate out of the global zone, and then update the DMA32
> zone without synchronization. In this way you might theoretically
> end up with more DMA32 pages allocated than present in the zone.

OK, a respin is attached with a simple change: things will be
accounted against the dma32 zone, and only when we get the pages will
we decrease the dma32 zone usage; that way there is no DOS on dma32.

It also deals with the case where there is still a lot of highmem
but no more dma32.

> With the proposed code there's also a theoretical problem in that a
> potentially huge number of pages are unaccounted before they are
> actually freed.

What do you mean by unaccounted? The way it works is:
r = global_memory_alloc(size)
if (r) fail
alloc pages
update memory accounting according to what pages were allocated

So memory is always accounted before even being allocated (exceptions
are the kernel objects for vmwgfx & ttm_bo, but we can move accounting
there too if you want; those are small allocations and I didn't think
it was worth changing that).
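A toy model of that ordering, plus the respin's dma32 fixup, with
invented names (this is a sketch, not the actual ttm API):

/* Toy model: account first, allocate second, adjust per-zone usage last. */
#include <stdbool.h>
#include <stdint.h>

struct mem_glob {
	uint64_t max_mem;
	uint64_t used_mem;       /* non-dma32 usage */
	uint64_t used_dma32_mem; /* dma32 usage */
};

static bool try_swap(struct mem_glob *g)
{
	(void)g;
	return false;            /* toy: no swapping implemented */
}

static int global_memory_alloc(struct mem_glob *g, uint64_t size)
{
	/* Allocation fails if (used_mem + used_dma32_mem) > max_mem
	 * and swapping cannot make enough room. */
	while (g->used_mem + g->used_dma32_mem + size > g->max_mem)
		if (!try_swap(g))
			return -1;
	/* Pessimistically account everything against dma32 up front,
	 * so concurrent allocators cannot overshoot the dma32 zone. */
	g->used_dma32_mem += size;
	return 0;
}

static void fixup_zones(struct mem_glob *g, uint64_t non_dma32_bytes)
{
	/* Once the actual pages are known, move the non-dma32 part out
	 * of the dma32 accounting (the "decrease dma32 usage" step). */
	g->used_dma32_mem -= non_dma32_bytes;
	g->used_mem += non_dma32_bytes;
}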


> A possible way around all this is to pre-allocate out of *all*
> zones, and after the big allocation release back memory to relevant
> zones. If such a big allocation fails, one needs to revert back to a
> page-by-page scheme.
> 
> /Thomas

Really, I believe this makes the whole accounting a lot simpler. The
whole zone business was overkill, especially as kernel zone == dma32
zone and the highmem zone is a superset of it.

With my change, what happens is that only the dma32 distinction is
kept around, because there is a whole batch of devices that can only
do dma32 and we need to make sure we don't starve those.

I believe my code is a lot easier and more straightforward to understand.

Cheers,
Jerome


[PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread David Rientjes
On Mon, 17 Oct 2011, David Rientjes wrote:

> On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
> 
> > From: Hugh Dickins 
> > 
> > Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
> > we're going to reboot immediately, the user will not be able to see the
> > messages anyway, and messing with the video mode may display artifacts,
> > and certainly get into several layers of complexity (including mutexes and
> > memory allocations) which we shall be much safer to avoid.
> > 
> > Signed-off-by: Hugh Dickins 
> > [ Edited commit message and modified to short-circuit panic_timeout < 0
> >   instead of testing panic_timeout >= 0.  -Mandeep ]
> > Signed-off-by: Mandeep Singh Baines 
> > Cc: Dave Airlie 
> > Cc: Andrew Morton 
> > Cc: dri-devel at lists.freedesktop.org
> 
> Acked-by: David Rientjes 
> 

Dave, where do we stand on this?  I haven't seen it hit Linus' tree and I 
don't see it in git://people.freedesktop.org/~airlied/linux.
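For reference, the change being chased here is small; a sketch of its
shape, reconstructed from the commit message above (the patch body
itself is not quoted in this thread):

/* Sketch reconstructed from the commit message; not the literal patch.
 * drm_fb_helper_panic() is the existing panic notifier callback in
 * drivers/gpu/drm/drm_fb_helper.c, and panic_timeout is the kernel's
 * global reboot-after-panic setting. */
static int drm_fb_helper_panic(struct notifier_block *n, unsigned long unused,
			       void *panic_str)
{
	/* panic_timeout < 0 means reboot immediately: nobody would see
	 * the text console, so skip the risky mode switch (mutexes,
	 * allocations) entirely. */
	if (panic_timeout < 0)
		return 0;

	pr_err("panic occurred, switching back to text console\n");
	return drm_fb_helper_force_kernel_mode();
}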


[PATCH] drm: Ensure string is null terminated.

2011-11-10 Thread Vinson Lee
Fixes a Coverity "buffer not null terminated" defect.

Signed-off-by: Vinson Lee 
---
 drivers/gpu/drm/drm_crtc.c |4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index f3ef654..40a3a14 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -2117,8 +2117,10 @@ struct drm_property *drm_property_create(struct drm_device *dev, int flags,
property->num_values = num_values;
INIT_LIST_HEAD(&property->enum_blob_list);

-   if (name)
+   if (name) {
strncpy(property->name, name, DRM_PROP_NAME_LEN);
+   property->name[DRM_PROP_NAME_LEN-1] = '\0';
+   }

list_add_tail(&property->head, &dev->mode_config.property_list);
return property;
-- 
1.7.1
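The defect class is easy to demonstrate in isolation; a minimal
userspace illustration (with made-up buffer contents) of why strncpy()
alone is not enough:

/* Minimal userspace illustration: strncpy() does not null-terminate
 * when the source is at least as long as the destination buffer. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[8];

	strncpy(name, "EDID-blob-name", sizeof(name));
	/* name now holds "EDID-blo" with no terminating '\0', so any
	 * string operation would read past the end of the array. */
	name[sizeof(name) - 1] = '\0';   /* the patch's fix */
	printf("%s\n", name);            /* safe: prints "EDID-bl" */
	return 0;
}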



[PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Thomas Hellstrom
On 11/09/2011 09:22 PM, j.glisse at gmail.com wrote:
> From: Jerome Glisse
>
> This is an overhaul of the ttm memory accounting. This tries to keep
> the same global behavior while removing the whole zone concept. It
> keeps a distinction for dma32 so that we make sure that ttm doesn't
> starve the dma32 zone.
>
> There are 3 thresholds for memory allocation:
> - max_mem is the maximum memory the whole ttm infrastructure is
>    going to allow allocation for (exception of system processes, see
>    below)
> - emer_mem is the maximum memory allowed for system processes; this
>    limit is > max_mem
> - swap_limit is the threshold at which point ttm will start to
>    try to swap objects because ttm is getting close to the max_mem
>    limit
> - swap_dma32_limit is the threshold at which point ttm will start
>    swapping objects to try to reduce the pressure on the dma32 zone. Note
>    that we don't specifically target objects to swap, so it might very
>    well free more memory from highmem rather than from dma32
>
> Accounting is done through used_mem & used_dma32_mem, whose sum gives
> the total amount of memory actually accounted by ttm.
>
> Idea is that allocation will fail if (used_mem + used_dma32_mem) >
> max_mem and if swapping fails to make enough room.
>
> The used_dma32_mem can be updated at a later stage, allowing to
> perform the accounting test before allocating a whole batch of pages.
>
>

Jerome, you're removing a fair amount of functionality here, without
justifying why it could be removed.

Consider a low-end system with 1G of kernel memory and 10G of highmem. 
How do we avoid putting stress on the kernel memory? I also wouldn't be 
too surprised if DMA32 zones appear in HIGHMEM systems in the future 
making the current zone concept good to keep.

Also, in effect you move the DOS from *all* zones into the DMA32 zone 
and create a race in that multiple simultaneous allocators can first 
pre-allocate out of the global zone, and then update the DMA32 zone 
without synchronization. In this way you might theoretically end up with 
more DMA32 pages allocated than present in the zone.

With the proposed code there's also a theoretical problem in that a 
potentially huge number of pages are unaccounted before they are 
actually freed.

A possible way around all this is to pre-allocate out of *all* zones, 
and after the big allocation release back memory to relevant zones. If 
such a big allocation fails, one needs to revert back to a page-by-page 
scheme.

/Thomas






[PATCH 2/2] drm: add an fb creation ioctl that takes a pixel format

2011-11-10 Thread Rob Clark
On Thu, Nov 10, 2011 at 8:54 AM, InKi Dae  wrote:
> 2011/11/9 Rob Clark :
>> On Wed, Nov 9, 2011 at 7:25 AM, InKi Dae  wrote:
>>> Hello, all.
>>>
>>> I am trying to implement multi planer using your plane patch and I
>>> think it's good but I am still warried about that drm_mode_fb_cmd2
>>> structure has only one handle. I know this handle is sent to
>>> framebuffer module to create new framebuffer. and the framebuffer
>>> would cover entire a image. as you know, the image could be consisted
>>> of one more planes. so I think now drm_mode_fb_cmd2 structure doesn't
>>> support multi planer because it has only one handle. with update_plane
>>> callback, a buffer of the framebuffer would be set to a hardware
>>> overlay. how we could set two planes or three planes to the hardware
>>> overlay? but there might be my missing point so please give me any
>>> comments. in addition, have you been looked into gem flink and open
>>> functions for memory sharing between processes? gem object basically
>>> has one buffer so we can't modify it because of compatibility. so I
>>> think it's right way that gem object manages only one buffer. for such
>>> a reason, maybe drm_mode_fb_cmd2 structure should include one more
>>> handles and plane count. each handle has a gem object to one plane and
>>> plane count means how many planes are requested and when update_plane
>>> callback is called by setplane(), we could set them of the specific
>>> framebuffer to a hardware overlay.
>>
>> The current plan is to add a 3rd ioctl, for adding multi-planar fb..
>> I guess it is a good thing that I'm not the only one who wants this
>> :-)
>>
>>> another one, and also I have tried to implement the way sharing the
>>> memory between v4l2 based drivers and drm based drivers through
>>> application and this works fine. this feature had been introduced by
>>> v4l2 framework as user ptr. my way also is similar to it. the
>>> difference is that application could get new gem handle from specific
>>> gem framework of kernel side if user application requests user ptr
>>> import with the user space address(mmaped memory). the new gem handle
>>> means a gem object to the memory mapped to the user space address.
>>> this way makes different applications to be possible to share the
>>> memory between v4l2 based driver and drm based driver. and also this
>>> feature is considered for IOMMU so it would support non continuous
>>> memory also. I will introduce this feature soon.
>>
>> btw, there was an RFC a little while back for "dmabuf" buffer sharing
>> mechanism.. the idea would be to export a (for example) GEM buffer to
>> a dmabuf handle which could be passed in to other devices, including
>> for example v4l2 (although without necessarily requiring a userspace
>> mapping)..
>>
>> http://www.spinics.net/lists/dri-devel/msg15077.html
>>
>> It sounds like you are looking for a similar thing..
>>
>
> Hi, Rob.
>
> GEM framework already supports memory sharing way that a object name
> created by gem flink is sent to another process and then the process
> opens the object name. at that time, the gem framework of kernel side
> creates new gem object. and I know that dmabuf is similar to the ION
> introduced by Rebecca who is an engineer of Google at least for buffer
> sharing way. but is it possible to share the memory region drawing on
> only user virtual address mmaped with another process?. for instance,
> as you know, v4l2 based driver has request buf feature that the driver
> of kernel side allocates the memory regions as user-desired buffer
> count and user gets user virtual address with mmap request after quary
> buffer request. so we need to share this memory mmaped at here also.
> for this, v4l2 based driver has userptr feature that user application
> sets user virtual address to userptr structure and then the address is
> translated to bus address(physical address without iommu or device
> address with iommu) and sets it to hardware. I think it doesn't need
> dmabuf if we would use it only for sharing the gem buffer with another
> process because GEM framework already can do it. I will try to find
> the way that we can use this feature commonly for generic gem
> framework. this feature has already been implemented in our specific
> gem framework and also tested.
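
For reference, the flink/open sharing flow described above is small
from userspace; a sketch (assuming libdrm's drmIoctl(), with error
handling omitted):

/* Sketch of GEM flink/open sharing between two processes.  drmIoctl()
 * and the GEM structs come from libdrm; error handling omitted. */
#include <stdint.h>
#include <xf86drm.h>    /* drmIoctl(); pulls in drm.h for the GEM ioctls */

/* Process A: publish a global name for a local GEM handle. */
static uint32_t gem_flink(int fd, uint32_t handle)
{
	struct drm_gem_flink flink = { .handle = handle };

	drmIoctl(fd, DRM_IOCTL_GEM_FLINK, &flink);
	return flink.name;	/* send this name to process B somehow */
}

/* Process B: turn the global name back into a local GEM handle. */
static uint32_t gem_open(int fd, uint32_t name)
{
	struct drm_gem_open args = { .name = name };

	drmIoctl(fd, DRM_IOCTL_GEM_OPEN, &args);
	return args.handle;	/* new handle to the same underlying buffer */
}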

There are a few limitations with userptr:
1) it will simply fail if the importing driver has some special dma
requirements (contiguous memory, specific address range, etc)..
2) it requires a userspace virtual mapping of the buffer.. which might
not always be required for fully hw accelerated use cases

And in general I'm not a huge fan of dma'ing to arbitrary malloc'd
buffers (which userptr seems to encourage)..

So it's true that somehow people have managed to ship linux based
products without dmabuf, using various hacks..  but part of the point
of dmabuf is to try to get to a cleaner, more generic solution.

BR,
-R

> thank you,
> Inki dae.
>
>
>> BR,
>> -R
>>
>>> Thank you,
>>> Inki Dae.
>>>
>>> 
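
For context: Jesse Barnes' patch in this thread makes fb creation take
a fourcc pixel format from videodev2.h, and the 3rd ioctl Rob mentions
would carry one buffer handle per plane. A plausible shape for such a
request (illustrative field names, not a committed ABI):

/* Illustrative only: one possible layout for a format-aware,
 * multi-planar fb creation request along the lines discussed above. */
#include <linux/types.h>

struct drm_mode_fb_cmd2 {
	__u32 fb_id;
	__u32 width;
	__u32 height;
	__u32 pixel_format;	/* fourcc code, as in videodev2.h */
	__u32 flags;

	/* Per-plane description; unused entries stay zeroed.  NV12
	 * would use two planes, YUV420 three. */
	__u32 handles[4];	/* GEM handle backing each plane */
	__u32 pitches[4];	/* bytes per row of each plane */
	__u32 offsets[4];	/* byte offset of each plane in its buffer */
};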

[PATCH] drm/radeon/kms: fix use of vram scratch page on evergreen/ni

2011-11-10 Thread alexdeuc...@gmail.com
From: Alex Deucher 

This hunk seems to have gotten lost when I rebased the patch.

Reported-by: Sylvain Bertrand 
Signed-off-by: Alex Deucher 
---
 drivers/gpu/drm/radeon/evergreen.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
index c6761e8..0067f11 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1278,7 +1278,7 @@ void evergreen_mc_program(struct radeon_device *rdev)
WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR,
rdev->mc.vram_end >> 12);
}
-   WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, 0);
+   WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, rdev->vram_scratch.gpu_addr >> 12);
if (rdev->flags & RADEON_IS_IGP) {
tmp = RREG32(MC_FUS_VM_FB_OFFSET) & 0x000F;
tmp |= ((rdev->mc.vram_end >> 20) & 0xF) << 24;
-- 
1.7.3.4



[Bug 28426] hardware cursor corruption with radeon+kms

2011-11-10 Thread bugzilla-dae...@freedesktop.org
https://bugs.freedesktop.org/show_bug.cgi?id=28426

--- Comment #12 from Roger Luethi  2011-11-10 00:24:39 PST ---
Like Jürg Billeter, I haven't seen the problem since switching to Linux 3.1.

I don't currently use the affected system as much as I usually do, so it may be
too early to celebrate, but I guess the bug could be closed as resolved and
reopened later if need be.





Re: r600 hdmi sound issue

2011-11-10 Thread Greg Dietsche
2011/11/9 Rafał Miłecki zaj...@gmail.com:
> 2011/11/9 Greg Dietsche g...@gregd.org:
>> Hi,
>> I have an ASUS M4A89GTD motherboard and when I play back music in Rhythmbox,
>> there is no sound (hdmi connection). Also, the playback speed is in a sort
>> of fast forward (for example, after a few seconds of real time, rhythmbox
>> shows something like 30 seconds into the song). This seems to be a kernel
>> regression. I know that it started with 3.0, is present in 3.1, and is not
>> working in 3.2-rc1, so it is still a problem.
>>
>> I bisected and found myself here: fe6f0bd03d697835e76dd18d232ba476c65b8282.
>> However due to some graphical issues, I'm not actually able to test that
>> commit. I tried reverting that commit, but the problem wasn't fixed.
>>
>> I'd like to see this problem fixed and can compile and test patches as
>> necessary. Please let me know if you need more information - I'm happy to
>> provide it :)
>
> fe6f0bd03d697835e76dd18d232ba476c65b8282 is not likely. I suspect you
> are just experiencing the results of a so-called fix:
>
> 805c22168da76a65c978017d0fe0d59cd048e995
>
> drm/radeon/kms: disable hdmi audio by default
>
> I'm trying to get in contact with ppl affected by issues when enabling
> audio. Hopefully we can fix audio support and enable it by default
> again.
>
> For now, please load the radeon module with audio=1, or simply boot with
> radeon.audio=1
>
> --
> Rafał


Thanks Rafał, that fixed it for me. (Wish I'd noticed that commit
earlier.) Anyway, if you need any testers at some point for this
driver, just let me know. I'd be happy to try them out.

Greg


Re: [PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Thomas Hellstrom

On 11/09/2011 09:22 PM, j.gli...@gmail.com wrote:

From: Jerome Glissejgli...@redhat.com

This is an overhaul of the ttm memory accounting. This tries to keep
the same global behavior while removing the whole zone concept. It
keeps a distrinction for dma32 so that we make sure that ttm don't
starve the dma32 zone.

There is 3 threshold for memory allocation :
- max_mem is the maximum memory the whole ttm infrastructure is
   going to allow allocation for (exception of system process see
   below)
- emer_mem is the maximum memory allowed for system process, this
   limit is  to max_mem
- swap_limit is the threshold at which point ttm will start to
   try to swap object because ttm is getting close the max_mem
   limit
- swap_dma32_limit is the threshold at which point ttm will start
   swap object to try to reduce the pressure on the dma32 zone. Note
   that we don't specificly target object to swap to it might very
   well free more memory from highmem rather than from dma32

Accounting is done through used_mem  used_dma32_mem, which sum give
the total amount of memory actually accounted by ttm.

Idea is that allocation will fail if (used_mem + used_dma32_mem)
max_mem and if swapping fail to make enough room.

The used_dma32_mem can be updated as a later stage, allowing to
perform accounting test before allocating a whole batch of pages.

   


Jerome, you're removing a fair amount of functionality here, without 
justifying

why it could be removed.

Consider a low-end system with 1G of kernel memory and 10G of highmem. 
How do we avoid putting stress on the kernel memory? I also wouldn't be 
too surprised if DMA32 zones appear in HIGHMEM systems in the future 
making the current zone concept good to keep.


Also, in effect you move the DOS from *all* zones into the DMA32 zone 
and create a race in that multiple simultaneous allocators can first 
pre-allocate out of the global zone, and then update the DMA32 zone 
without synchronization. In this way you might theoretically end up with 
more DMA32 pages allocated than present in the zone.


With the proposed code there's also a theoretical problem in that a 
potentially huge number of pages are unaccounted before they are 
actually freed.


A possible way around all this is to pre-allocate out of *all* zones, 
and after the big allocation release back memory to relevant zones. If 
such a big allocation fails, one needs to revert back to a page-by-page 
scheme.


/Thomas




___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: Linux 3.2-rc1

2011-11-10 Thread Wu Fengguang
Hi Nick,

On Wed, Nov 09, 2011 at 03:40:19PM +0800, Takashi Iwai wrote:
 At Tue, 8 Nov 2011 12:23:30 -0800,
 Linus Torvalds wrote:
  
  Hmm, I don't know what caused this to trigger, but I'm adding both the
  i915 people and the HDA people to the cc, and they can fight to the
  death about this in the HDMI Thunderdome.
 
 It must be the new addition of ELD-passing code.
 
 Fengguang, can the drm or i915 driver check whether ELD is changed or
 not?  Writing ELD at each time even when unchanged confuses the audio
 side, as if the monitor is hotplugged.

The attached patch is tested OK to prevent extra hot plug events.

However it has one side effect: when HDMI monitor is hot removed,
the ELD keeps remain valid. I need to find a way to test for the
presence of the monitor and handle that case as well. When all done,
I'll submit the patches together for review.

Thanks,
Fengguang
Subject: drm/i915: don't trigger hotplug events on unchanged ELD
Date: Thu Nov 10 17:48:49 CST 2011

The ELD may or may not change when switching the video mode.
If unchanged, don't trigger hot plug events to HDMI audio driver.

This avoids disturbing the user with repeated printks.

Signed-off-by: Wu Fengguang fengguang...@intel.com
---
 drivers/gpu/drm/i915/intel_display.c |   51 ++---
 1 file changed, 46 insertions(+), 5 deletions(-)

--- linux.orig/drivers/gpu/drm/i915/intel_display.c 2011-11-10 
17:23:04.0 +0800
+++ linux/drivers/gpu/drm/i915/intel_display.c  2011-11-10 17:59:25.0 
+0800
@@ -5811,6 +5811,35 @@ static int intel_crtc_mode_set(struct dr
return ret;
 }
 
+static bool intel_eld_uptodate(struct drm_connector *connector,
+  int reg_eldv, uint32_t bits_eldv,
+  int reg_elda, uint32_t bits_elda,
+  int reg_edid)
+{
+   struct drm_i915_private *dev_priv = connector-dev-dev_private;
+   uint8_t *eld = connector-eld;
+   uint32_t i;
+
+   i = I915_READ(reg_eldv);
+   i = bits_eldv;
+
+   if (!eld[0])
+   return !i;
+
+   if (!i)
+   return false;
+
+   i = I915_READ(reg_elda);
+   i = ~bits_elda;
+   I915_WRITE(reg_elda, i);
+
+   for (i = 0; i  eld[2]; i++)
+   if (I915_READ(reg_edid) != *((uint32_t *)eld + i))
+   return false;
+
+   return true;
+}
+
 static void g4x_write_eld(struct drm_connector *connector,
  struct drm_crtc *crtc)
 {
@@ -5827,6 +5856,12 @@ static void g4x_write_eld(struct drm_con
else
eldv = G4X_ELDV_DEVCTG;
 
+   if (intel_eld_uptodate(connector,
+  G4X_AUD_CNTL_ST, eldv,
+  G4X_AUD_CNTL_ST, G4X_ELD_ADDR,
+  G4X_HDMIW_HDMIEDID))
+   return;
+
i = I915_READ(G4X_AUD_CNTL_ST);
i = ~(eldv | G4X_ELD_ADDR);
len = (i  9)  0x1f;  /* ELD buffer size */
@@ -5886,6 +5921,17 @@ static void ironlake_write_eld(struct dr
eldv = GEN5_ELD_VALIDB  ((i - 1) * 4);
}
 
+   if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
+   DRM_DEBUG_DRIVER(ELD: DisplayPort detected\n);
+   eld[5] |= (1  2); /* Conn_Type, 0x1 = DisplayPort */
+   }
+
+   if (intel_eld_uptodate(connector,
+  aud_cntrl_st2, eldv,
+  aud_cntl_st, GEN5_ELD_ADDRESS,
+  hdmiw_hdmiedid))
+   return;
+
i = I915_READ(aud_cntrl_st2);
i = ~eldv;
I915_WRITE(aud_cntrl_st2, i);
@@ -5893,11 +5939,6 @@ static void ironlake_write_eld(struct dr
if (!eld[0])
return;
 
-   if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT)) {
-   DRM_DEBUG_DRIVER(ELD: DisplayPort detected\n);
-   eld[5] |= (1  2); /* Conn_Type, 0x1 = DisplayPort */
-   }
-
i = I915_READ(aud_cntl_st);
i = ~GEN5_ELD_ADDRESS;
I915_WRITE(aud_cntl_st, i);
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH] drm/radeon/kms: fix use of vram scratch page on evergreen/ni

2011-11-10 Thread alexdeucher
From: Alex Deucher alexander.deuc...@amd.com

This hunk seems to have gotten lost when I rebased the patch.

Reported-by: Sylvain Bertrand sylvain.bertr...@gmail.com
Signed-off-by: Alex Deucher alexander.deuc...@amd.com
---
 drivers/gpu/drm/radeon/evergreen.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/radeon/evergreen.c 
b/drivers/gpu/drm/radeon/evergreen.c
index c6761e8..0067f11 100644
--- a/drivers/gpu/drm/radeon/evergreen.c
+++ b/drivers/gpu/drm/radeon/evergreen.c
@@ -1278,7 +1278,7 @@ void evergreen_mc_program(struct radeon_device *rdev)
WREG32(MC_VM_SYSTEM_APERTURE_HIGH_ADDR,
rdev-mc.vram_end  12);
}
-   WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, 0);
+   WREG32(MC_VM_SYSTEM_APERTURE_DEFAULT_ADDR, rdev-vram_scratch.gpu_addr 
 12);
if (rdev-flags  RADEON_IS_IGP) {
tmp = RREG32(MC_FUS_VM_FB_OFFSET)  0x000F;
tmp |= ((rdev-mc.vram_end  20)  0xF)  24;
-- 
1.7.3.4

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 2/2] drm: add an fb creation ioctl that takes a pixel format

2011-11-10 Thread InKi Dae
2011/11/9 Rob Clark robdcl...@gmail.com:
 On Wed, Nov 9, 2011 at 7:25 AM, InKi Dae daei...@gmail.com wrote:
 Hello, all.

 I am trying to implement multi planer using your plane patch and I
 think it's good but I am still warried about that drm_mode_fb_cmd2
 structure has only one handle. I know this handle is sent to
 framebuffer module to create new framebuffer. and the framebuffer
 would cover entire a image. as you know, the image could be consisted
 of one more planes. so I think now drm_mode_fb_cmd2 structure doesn't
 support multi planer because it has only one handle. with update_plane
 callback, a buffer of the framebuffer would be set to a hardware
 overlay. how we could set two planes or three planes to the hardware
 overlay? but there might be my missing point so please give me any
 comments. in addition, have you been looked into gem flink and open
 functions for memory sharing between processes? gem object basically
 has one buffer so we can't modify it because of compatibility. so I
 think it's right way that gem object manages only one buffer. for such
 a reason, maybe drm_mode_fb_cmd2 structure should include one more
 handles and plane count. each handle has a gem object to one plane and
 plane count means how many planes are requested and when update_plane
 callback is called by setplane(), we could set them of the specific
 framebuffer to a hardware overlay.

 The current plan is to add a 3rd ioctl, for adding multi-planar fb..
 I guess it is a good thing that I'm not the only one who wants this
 :-)

 another one, and also I have tried to implement the way sharing the
 memory between v4l2 based drivers and drm based drivers through
 application and this works fine. this feature had been introduced by
 v4l2 framework as user ptr. my way also is similar to it. the
 difference is that application could get new gem handle from specific
 gem framework of kernel side if user application requests user ptr
 import with the user space address(mmaped memory). the new gem handle
 means a gem object to the memory mapped to the user space address.
 this way makes different applications to be possible to share the
 memory between v4l2 based driver and drm based driver. and also this
 feature is considered for IOMMU so it would support non continuous
 memory also. I will introduce this feature soon.

 btw, there was an RFC a little while back for dmabuf buffer sharing
 mechanism..  the idea would be to export a (for example) GEM buffer to
 a dmabuf handle which could be passed in to other devices, including
 for example v4l2 (although without necessarily requiring a userspace
 mapping)..

 http://www.spinics.net/lists/dri-devel/msg15077.html

 It sounds like you are looking for a similar thing..


Hi, Rob.

GEM framework already supports memory sharing way that a object name
created by gem flink is sent to another process and then the process
opens the object name. at that time, the gem framework of kernel side
creates new gem object. and I know that dmabuf is similar to the ION
introduced by Rebecca who is an engineer of Google at least for buffer
sharing way. but is it possible to share the memory region drawing on
only user virtual address mmaped with another process?. for instance,
as you know, v4l2 based driver has request buf feature that the driver
of kernel side allocates the memory regions as user-desired buffer
count and user gets user virtual address with mmap request after quary
buffer request. so we need to share this memory mmaped at here also.
for this, v4l2 based driver has userptr feature that user application
sets user virtual address to userptr structure and then the address is
translated to bus address(physical address without iommu or device
address with iommu) and sets it to hardware. I think it doesn't need
dmabuf if we would use it only for sharing the gem buffer with another
process because GEM framework already can do it. I will try to find
the way that we can use this feature commonly for generic gem
framework. this feature has already been implemented in our specific
gem framework and also tested.

thank you,
Inki dae.


 BR,
 -R

 Thank you,
 Inki Dae.

 2011/11/9 Jesse Barnes jbar...@virtuousgeek.org:
 To properly support the various plane formats supported by different
 hardware, the kernel must know the pixel format of a framebuffer object.
 So add a new ioctl taking a format argument corresponding to a fourcc
 name from videodev2.h.  Implement the fb creation hooks in terms of the
 new mode_fb_cmd2 using helpers where the old bpp/depth values are
 needed.

 Acked-by: Alan Cox a...@lxorguk.ukuu.org.uk
 Reviewed-by: Rob Clark rob.cl...@linaro.org
 Signed-off-by: Jesse Barnes jbar...@virtuousgeek.org
 ---
  drivers/gpu/drm/drm_crtc.c                |  108 
 +++-
  drivers/gpu/drm/drm_crtc_helper.c         |   50 -
  drivers/gpu/drm/drm_drv.c                 |    1 +
  drivers/gpu/drm/i915/intel_display.c      |   36 +-
  

Re: Strange effect with i915 backlight controller

2011-11-10 Thread Daniel Mack
On 11/08/2011 01:57 AM, Daniel Mack wrote:
 Didn't get any response yet, hence copying LKML for a broader audience.

Nobody, really?

This is a rather annoying regression, as touching the brightness keys
appearantly switches off the whole machine. I'm sure this is trivial to
fix, I just don't have the insight of this driver and the chipset.

Any pointer greatly appreciated, and I can test patches.


Thanks,
Daniel



 
 On 11/04/2011 03:36 PM, Daniel Mack wrote:
 I'm facing a bug on a Samsung X20 notebook which features an i915
 chipset (output of 'lspci -v' attached).

 The effect is that setting the backlight to odd values causes the value
 to be misinterpreted. Harald Hoyer (cc:) had the same thing on a Netbook
 (I don't recall which model it was).

 So this will turn the backlight to full brightness:

 # cat /sys/class/backlight/intel_backlight/max_brightness
 29750
 # echo 29750  /sys/class/backlight/intel_backlight/brightness

 However, writing 29749 will turn the display backlight off, and 29748
 appears to be the next valid lower value.

 It seems like the IS_PINEVIEW() branch in
 drivers/gpu/drm/i915/intel_panel.c:intel_panel_actually_set_backlight()
 could do the right thing, but this code is written for an entirely
 different model, right?

 I can reproduce this on 3.0 and 3.1 vanilla as well as with the current
 mainline git.

 Let me know if there is any patch that I can test.


 Thanks,
 Daniel
 

___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: Strange effect with i915 backlight controller

2011-11-10 Thread Takashi Iwai
At Thu, 10 Nov 2011 16:11:29 +0100,
Daniel Mack wrote:
 
 On 11/08/2011 01:57 AM, Daniel Mack wrote:
  Didn't get any response yet, hence copying LKML for a broader audience.
 
 Nobody, really?
 
 This is a rather annoying regression, as touching the brightness keys
 appearantly switches off the whole machine. I'm sure this is trivial to
 fix, I just don't have the insight of this driver and the chipset.

I vaguely remember that the bit 0 is invalid on some old chips.
Maybe 915GM is one of them, as it's gen3?  If so, the patch like below
may work.


Takashi

---
diff --git a/drivers/gpu/drm/i915/intel_panel.c 
b/drivers/gpu/drm/i915/intel_panel.c
index 499d4c0..be952d1 100644
--- a/drivers/gpu/drm/i915/intel_panel.c
+++ b/drivers/gpu/drm/i915/intel_panel.c
@@ -249,8 +249,11 @@ static void intel_panel_actually_set_backlight(struct 
drm_device *dev, u32 level
if (IS_PINEVIEW(dev)) {
tmp = ~(BACKLIGHT_DUTY_CYCLE_MASK - 1);
level = 1;
-   } else
+   } else {
tmp = ~BACKLIGHT_DUTY_CYCLE_MASK;
+   if (INTEL_INFO(dev)-gen  4)
+   tmp = ~1;
+   }
I915_WRITE(BLC_PWM_CTL, tmp | level);
 }
 
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


Re: [PATCH 2/2] drm: add an fb creation ioctl that takes a pixel format

2011-11-10 Thread Rob Clark
On Thu, Nov 10, 2011 at 8:54 AM, InKi Dae daei...@gmail.com wrote:
 2011/11/9 Rob Clark robdcl...@gmail.com:
 On Wed, Nov 9, 2011 at 7:25 AM, InKi Dae daei...@gmail.com wrote:
 Hello, all.

 I am trying to implement multi planer using your plane patch and I
 think it's good but I am still warried about that drm_mode_fb_cmd2
 structure has only one handle. I know this handle is sent to
 framebuffer module to create new framebuffer. and the framebuffer
 would cover entire a image. as you know, the image could be consisted
 of one more planes. so I think now drm_mode_fb_cmd2 structure doesn't
 support multi planer because it has only one handle. with update_plane
 callback, a buffer of the framebuffer would be set to a hardware
 overlay. how we could set two planes or three planes to the hardware
 overlay? but there might be my missing point so please give me any
 comments. in addition, have you been looked into gem flink and open
 functions for memory sharing between processes? gem object basically
 has one buffer so we can't modify it because of compatibility. so I
 think it's right way that gem object manages only one buffer. for such
 a reason, maybe drm_mode_fb_cmd2 structure should include one more
 handles and plane count. each handle has a gem object to one plane and
 plane count means how many planes are requested and when update_plane
 callback is called by setplane(), we could set them of the specific
 framebuffer to a hardware overlay.

 The current plan is to add a 3rd ioctl, for adding multi-planar fb..
 I guess it is a good thing that I'm not the only one who wants this
 :-)

 another one, and also I have tried to implement the way sharing the
 memory between v4l2 based drivers and drm based drivers through
 application and this works fine. this feature had been introduced by
 v4l2 framework as user ptr. my way also is similar to it. the
 difference is that application could get new gem handle from specific
 gem framework of kernel side if user application requests user ptr
 import with the user space address(mmaped memory). the new gem handle
 means a gem object to the memory mapped to the user space address.
 this way makes different applications to be possible to share the
 memory between v4l2 based driver and drm based driver. and also this
 feature is considered for IOMMU so it would support non continuous
 memory also. I will introduce this feature soon.

 btw, there was an RFC a little while back for dmabuf buffer sharing
 mechanism..  the idea would be to export a (for example) GEM buffer to
 a dmabuf handle which could be passed in to other devices, including
 for example v4l2 (although without necessarily requiring a userspace
 mapping)..

 http://www.spinics.net/lists/dri-devel/msg15077.html

 It sounds like you are looking for a similar thing..


 Hi, Rob.

 GEM framework already supports memory sharing way that a object name
 created by gem flink is sent to another process and then the process
 opens the object name. at that time, the gem framework of kernel side
 creates new gem object. and I know that dmabuf is similar to the ION
 introduced by Rebecca who is an engineer of Google at least for buffer
 sharing way. but is it possible to share the memory region drawing on
 only user virtual address mmaped with another process?. for instance,
 as you know, v4l2 based driver has request buf feature that the driver
 of kernel side allocates the memory regions as user-desired buffer
 count and user gets user virtual address with mmap request after quary
 buffer request. so we need to share this memory mmaped at here also.
 for this, v4l2 based driver has userptr feature that user application
 sets user virtual address to userptr structure and then the address is
 translated to bus address(physical address without iommu or device
 address with iommu) and sets it to hardware. I think it doesn't need
 dmabuf if we would use it only for sharing the gem buffer with another
 process because GEM framework already can do it. I will try to find
 the way that we can use this feature commonly for generic gem
 framework. this feature has already been implemented in our specific
 gem framework and also tested.

There are a few limitations with userptr:
1) will simply fail if importing driver has some special dma
requirements (contiguous memory, specific address range, etc)..
2) requires a userspace virtual mapping of buffer.. which might not
always be required for fully hw accelerated use cases

And in general I'm not a huge fan of dma'ing to arbitrary malloc'd
buffers (which userptr seems to encourage)..

So it's true, that somehow people have managed to ship linux based
products without dmabuf, using various hacks..  but part of the point
of dmabuf is to try to get to a cleaner more generic solution.

BR,
-R

 thank you,
 Inki dae.


 BR,
 -R

 Thank you,
 Inki Dae.

 2011/11/9 Jesse Barnes jbar...@virtuousgeek.org:
 To properly support the various plane formats supported by different
 hardware, 

[Bug 42373] Radeon HD 6450 (NI CAICOS) screen corruption on boot

2011-11-10 Thread bugzilla-daemon
https://bugs.freedesktop.org/show_bug.cgi?id=42373

--- Comment #13 from Kunal kunal.gangakhed...@gmail.com 2011-11-10 09:38:31 
PST ---
Created attachment 53375
  -- https://bugs.freedesktop.org/attachment.cgi?id=53375
dmesg log with amd_iommu=off iommu=off options added to cmdline

(In reply to comment #12)
 Does booting with following kernel options help
 amd_iommu=off iommu=off

No, it doesn't help in any way.
Attaching dmesg log.

-- 
Configure bugmail: https://bugs.freedesktop.org/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug.
___
dri-devel mailing list
dri-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/dri-devel


[PATCH v3] drm/radeon: Make sure CS mutex is held across GPU reset.

2011-11-10 Thread Michel Dänzer
From: Michel Dänzer michel.daen...@amd.com

This was only the case if the GPU reset was triggered from the CS ioctl,
otherwise other processes could happily enter the CS ioctl and wreak havoc
during the GPU reset.

This is a little complicated because the GPU reset can be triggered from the
CS ioctl, in which case we're already holding the mutex, or from other call
paths, in which case we need to lock the mutex. AFAICT the mutex API doesn't
allow recursive locking or finding out the mutex owner, so we need to handle
this with helper functions which allow recursive locking from the same
process.

Signed-off-by: Michel Dänzer michel.daen...@amd.com
Reviewed-by: Jerome Glisse jgli...@redhat.com
---

v3: Drop spurious whitespace-only hunk, thanks Jerome for catching that.

 drivers/gpu/drm/radeon/radeon.h|   44 +++-
 drivers/gpu/drm/radeon/radeon_cs.c |   14 +-
 drivers/gpu/drm/radeon/radeon_device.c |   16 ---
 3 files changed, 62 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index c1e056b..fa2ef96 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -1151,6 +1151,48 @@ struct r700_vram_scratch {
volatile uint32_t   *ptr;
 };
 
+
+/*
+ * Mutex which allows recursive locking from the same process.
+ */
+struct radeon_mutex {
+	struct mutex		mutex;
+	struct task_struct	*owner;
+	int			level;
+};
+
+static inline void radeon_mutex_init(struct radeon_mutex *mutex)
+{
+	mutex_init(&mutex->mutex);
+	mutex->owner = NULL;
+	mutex->level = 0;
+}
+
+static inline void radeon_mutex_lock(struct radeon_mutex *mutex)
+{
+	if (mutex_trylock(&mutex->mutex)) {
+		/* The mutex was unlocked before, so it's ours now */
+		mutex->owner = current;
+	} else if (mutex->owner != current) {
+		/* Another process locked the mutex, take it */
+		mutex_lock(&mutex->mutex);
+		mutex->owner = current;
+	}
+	/* Otherwise the mutex was already locked by this process */
+
+	mutex->level++;
+}
+
+static inline void radeon_mutex_unlock(struct radeon_mutex *mutex)
+{
+	if (--mutex->level > 0)
+		return;
+
+	mutex->owner = NULL;
+	mutex_unlock(&mutex->mutex);
+}
+
+
 /*
  * Core structure, functions and helpers.
  */
@@ -1206,7 +1248,7 @@ struct radeon_device {
 	struct radeon_gem	gem;
 	struct radeon_pm	pm;
 	uint32_t		bios_scratch[RADEON_BIOS_NUM_SCRATCH];
-	struct mutex		cs_mutex;
+	struct radeon_mutex	cs_mutex;
 	struct radeon_wb	wb;
 	struct radeon_dummy_page	dummy_page;
 	bool			gpu_lockup;
diff --git a/drivers/gpu/drm/radeon/radeon_cs.c 
b/drivers/gpu/drm/radeon/radeon_cs.c
index fae00c0..ccaa243 100644
--- a/drivers/gpu/drm/radeon/radeon_cs.c
+++ b/drivers/gpu/drm/radeon/radeon_cs.c
@@ -222,7 +222,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	struct radeon_cs_chunk *ib_chunk;
 	int r;
 
-	mutex_lock(&rdev->cs_mutex);
+	radeon_mutex_lock(&rdev->cs_mutex);
 	/* initialize parser */
 	memset(&parser, 0, sizeof(struct radeon_cs_parser));
 	parser.filp = filp;
@@ -233,14 +233,14 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	if (r) {
 		DRM_ERROR("Failed to initialize parser !\n");
 		radeon_cs_parser_fini(&parser, r);
-		mutex_unlock(&rdev->cs_mutex);
+		radeon_mutex_unlock(&rdev->cs_mutex);
 		return r;
 	}
 	r = radeon_ib_get(rdev, &parser.ib);
 	if (r) {
 		DRM_ERROR("Failed to get ib !\n");
 		radeon_cs_parser_fini(&parser, r);
-		mutex_unlock(&rdev->cs_mutex);
+		radeon_mutex_unlock(&rdev->cs_mutex);
 		return r;
 	}
 	r = radeon_cs_parser_relocs(&parser);
@@ -248,7 +248,7 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 		if (r != -ERESTARTSYS)
 			DRM_ERROR("Failed to parse relocation %d!\n", r);
 		radeon_cs_parser_fini(&parser, r);
-		mutex_unlock(&rdev->cs_mutex);
+		radeon_mutex_unlock(&rdev->cs_mutex);
 		return r;
 	}
 	/* Copy the packet into the IB, the parser will read from the
@@ -260,14 +260,14 @@ int radeon_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
 	if (r || parser.parser_error) {
 		DRM_ERROR("Invalid command stream !\n");
 		radeon_cs_parser_fini(&parser, r);
-		mutex_unlock(&rdev->cs_mutex);
+		radeon_mutex_unlock(&rdev->cs_mutex);
 

Re: [PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Jerome Glisse
On Thu, Nov 10, 2011 at 11:27:33AM +0100, Thomas Hellstrom wrote:
 On 11/09/2011 09:22 PM, j.gli...@gmail.com wrote:
 From: Jerome Glisse jgli...@redhat.com
 
 This is an overhaul of the ttm memory accounting. This tries to keep
 the same global behavior while removing the whole zone concept. It
 keeps a distinction for dma32 so that we make sure that ttm doesn't
 starve the dma32 zone.
 
 There are 3 thresholds for memory allocation :
 - max_mem is the maximum memory the whole ttm infrastructure is
going to allow allocation for (exception of system process see
below)
 - emer_mem is the maximum memory allowed for system process, this
limit is greater than max_mem
 - swap_limit is the threshold at which point ttm will start to
try to swap object because ttm is getting close to the max_mem
limit
 - swap_dma32_limit is the threshold at which point ttm will start
swap object to try to reduce the pressure on the dma32 zone. Note
that we don't specifically target object to swap, so it might very
well free more memory from highmem rather than from dma32
 
 Accounting is done through used_mem & used_dma32_mem, whose sum gives
 the total amount of memory actually accounted by ttm.
 
 Idea is that allocation will fail if (used_mem + used_dma32_mem) >
 max_mem and if swapping fails to make enough room.
 
 The used_dma32_mem can be updated at a later stage, allowing to
 perform accounting tests before allocating a whole batch of pages.
 
 
 Jerome, you're removing a fair amount of functionality here, without
 justifying
 why it could be removed.

All this code was overkill.
 
 Consider a low-end system with 1G of kernel memory and 10G of
 highmem. How do we avoid putting stress on the kernel memory? I also
 wouldn't be too surprised if DMA32 zones appear in HIGHMEM systems
 in the future making the current zone concept good to keep.

Right now kernel memory is accounted against all zones, so it decreases
not only the kernel zone but also the dma32 & highmem zones if present.
Note also that the kernel zone in the current code == dma32 zone.

When it comes to the future it looks a lot simpler: it seems everyone
is moving toward more capable and more advanced iommu that can remove
all the restrictions on memory from the device pov. I mean even arm
is getting more and more advanced iommu. I don't see any architecture
worth supporting not going down that road.

 Also, in effect you move the DOS from *all* zones into the DMA32
 zone and create a race in that multiple simultaneous allocators can
 first pre-allocate out of the global zone, and then update the DMA32
 zone without synchronization. In this way you might theoretically
 end up with more DMA32 pages allocated than present in the zone.

Ok, a respin is attached with a simple change: things will be
accounted against the dma32 zone, and only when we get the pages will
we decrease the dma32 zone usage; that way there is no DOS on dma32.

It also deals with the case where there is still a lot of highmem
but no more dma32.

 With the proposed code there's also a theoretical problem in that a
 potentially huge number of pages are unaccounted before they are
 actually freed.

What do you mean by unaccounted? The way it works is :
r = global_memory_alloc(size)
if (r) fail
alloc pages
update memory accounting according to what pages were allocated

So memory is always accounted before even being allocated (exceptions
are the kernel objects for vmwgfx & ttm_bo but we can move accounting
there too if you want, those are small allocations and i didn't think
it was worth changing that).
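
A minimal standalone model of that rule (invented names, userspace C,
just to pin down the ordering being argued about):

#include <stdbool.h>
#include <stdint.h>

struct ttm_account {
	uint64_t max_mem;		/* global limit */
	uint64_t used_mem;		/* non-dma32 accounted memory */
	uint64_t used_dma32_mem;	/* dma32 accounted memory */
};

/* Step 1: account before any page is allocated; the real code would
 * try to swap and retry before giving up. */
static bool account_alloc(struct ttm_account *a, uint64_t size)
{
	if (a->used_mem + a->used_dma32_mem + size > a->max_mem)
		return false;
	a->used_mem += size;
	return true;
}

/* Step 2 (after the pages exist): move the share that turned out
 * to be dma32 from the generic counter to the dma32 counter. */
static void account_fixup_dma32(struct ttm_account *a, uint64_t dma32_size)
{
	a->used_mem -= dma32_size;
	a->used_dma32_mem += dma32_size;
}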


 A possible way around all this is to pre-allocate out of *all*
 zones, and after the big allocation release back memory to relevant
 zones. If such a big allocation fails, one needs to revert back to a
 page-by-page scheme.
 
 /Thomas

Really, i believe this makes the whole accounting a lot simpler. The
whole zone business was overkill, especially as kernel zone == dma32
zone and the highmem zone is a superset of this.

With my change, what happens is that only the dma32 distinction is kept
around, because there is a whole batch of devices that can only do dma32
and we need to make sure we don't starve those.

I believe my code is a lot easier and more straightforward to understand.

Cheers,
Jerome
From 03ec0abcfa4bc3060fd0d7a797758fc50088ec29 Mon Sep 17 00:00:00 2001
From: Jerome Glisse jgli...@redhat.com
Date: Wed, 9 Nov 2011 08:11:00 -0500
Subject: [PATCH] drm/ttm: overhaul memory accounting V2

This is an overhaul of the ttm memory accounting. This tries to keep
the same global behavior while removing the whole zone concept. It
keeps a distinction for dma32 so that we make sure that ttm doesn't
starve the dma32 zone.

There are 3 thresholds for memory allocation :
- max_mem is the maximum memory the whole ttm infrastructure is
  going to allow allocation for (exception of system process see
  below)
- emer_mem is the maximum memory allowed for system process, this
  limit is greater than max_mem
- swap_limit is the threshold at which point ttm will start to
  try to swap 

[PATCH] drm: Ensure string is null terminated.

2011-11-10 Thread Vinson Lee
Fixes Coverity "buffer not null terminated" defect.
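
For reference, a minimal userspace illustration of the pitfall being
fixed (not part of the patch): strncpy() does not NUL-terminate when
the source string is at least as long as the limit.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[4];

	strncpy(buf, "abcdef", sizeof(buf));	/* buf = "abcd", no '\0' */
	buf[sizeof(buf) - 1] = '\0';		/* the fix: force termination */
	printf("%s\n", buf);			/* prints "abc" */
	return 0;
}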

Signed-off-by: Vinson Lee v...@vmware.com
---
 drivers/gpu/drm/drm_crtc.c |4 +++-
 1 files changed, 3 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
index f3ef654..40a3a14 100644
--- a/drivers/gpu/drm/drm_crtc.c
+++ b/drivers/gpu/drm/drm_crtc.c
@@ -2117,8 +2117,10 @@ struct drm_property *drm_property_create(struct drm_device *dev, int flags,
 	property->num_values = num_values;
 	INIT_LIST_HEAD(&property->enum_blob_list);
 
-	if (name)
+	if (name) {
 		strncpy(property->name, name, DRM_PROP_NAME_LEN);
+		property->name[DRM_PROP_NAME_LEN-1] = '\0';
+	}
 
 	list_add_tail(&property->head, &dev->mode_config.property_list);
 	return property;
-- 
1.7.1



Re: [PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Thomas Hellstrom

On 11/10/2011 07:05 PM, Jerome Glisse wrote:
 On Thu, Nov 10, 2011 at 11:27:33AM +0100, Thomas Hellstrom wrote:
  On 11/09/2011 09:22 PM, j.gli...@gmail.com wrote:
   From: Jerome Glisse jgli...@redhat.com
  
   This is an overhaul of the ttm memory accounting. This tries to keep
   the same global behavior while removing the whole zone concept. It
   keeps a distinction for dma32 so that we make sure that ttm doesn't
   starve the dma32 zone.
  
   There are 3 thresholds for memory allocation :
   - max_mem is the maximum memory the whole ttm infrastructure is
     going to allow allocation for (exception of system process see
     below)
   - emer_mem is the maximum memory allowed for system process, this
     limit is greater than max_mem
   - swap_limit is the threshold at which point ttm will start to
     try to swap object because ttm is getting close to the max_mem
     limit
   - swap_dma32_limit is the threshold at which point ttm will start
     swap object to try to reduce the pressure on the dma32 zone. Note
     that we don't specifically target object to swap, so it might very
     well free more memory from highmem rather than from dma32
  
   Accounting is done through used_mem & used_dma32_mem, whose sum gives
   the total amount of memory actually accounted by ttm.
  
   Idea is that allocation will fail if (used_mem + used_dma32_mem) >
   max_mem and if swapping fails to make enough room.
  
   The used_dma32_mem can be updated at a later stage, allowing to
   perform accounting tests before allocating a whole batch of pages.
  
  Jerome, you're removing a fair amount of functionality here, without
  justifying why it could be removed.
 
 All this code was overkill.

[1] I don't agree, and since it's well tested, thought through and
working, I see no obvious reason to alter it, within the context of
this patch series, unless it's absolutely required for the
functionality.

  Consider a low-end system with 1G of kernel memory and 10G of
  highmem. How do we avoid putting stress on the kernel memory? I also
  wouldn't be too surprised if DMA32 zones appear in HIGHMEM systems
  in the future making the current zone concept good to keep.
 
 Right now kernel memory is accounted against all zones, so it decreases
 not only the kernel zone but also the dma32 & highmem zones if present.

Do you mean that the code is incorrect? In that case, did you consider
the fact that zones may overlap? (Although I admit the name highmem
might be misleading. Should be total).

 Note also that the kernel zone in the current code == dma32 zone.

Last time I looked, the highmem split was typically at slightly less
than 1GB, depending on the hardware and desired setup. I admit that
was some time ago, but has that really changed? On all archs?

Furthermore, on !Highmem systems, all pages are in the kernel zone.

 When it comes to the future it looks a lot simpler: it seems everyone
 is moving toward more capable and more advanced iommu that can remove
 all the restrictions on memory from the device pov. I mean even arm
 is getting more and more advanced iommu. I don't see any architecture
 worth supporting not going down that road.

While the proposed change is probably possible, with different low/high
splits depending on whether HIGHMEM is defined or not, I think [1] is a
good reason for not pushing it through.

  Also, in effect you move the DOS from *all* zones into the DMA32
  zone and create a race in that multiple simultaneous allocators can
  first pre-allocate out of the global zone, and then update the DMA32
  zone without synchronization. In this way you might theoretically
  end up with more DMA32 pages allocated than present in the zone.
 
 Ok, a respin is attached with a simple change: things will be
 accounted against the dma32 zone, and only when we get the pages will
 we decrease the dma32 zone usage; that way there is no DOS on dma32.
 
 It also deals with the case where there is still a lot of highmem
 but no more dma32.

So why not just do a ttm_mem_global_alloc() for the pages you want to
allocate, and add a proper adjustment function if memory turns out to
be either HIGHMEM or !DMA32?

  With the proposed code there's also a theoretical problem in that a
  potentially huge number of pages are unaccounted before they are
  actually freed.
 
 What do you mean by unaccounted? The way it works is :
 r = global_memory_alloc(size)
 if (r) fail
 alloc pages
 update memory accounting according to what pages were allocated
 
 So memory is always accounted before even being allocated (exceptions
 are the kernel objects for vmwgfx & ttm_bo but we can move accounting
 there too if you want, those are small allocations and i didn't think
 it was worth changing that).

No, I mean the sequence

unaccount_page_array()
---Race---
free_page_array()

/Thomas

 Cheers,
 Jerome




Re: [PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread David Rientjes
On Mon, 17 Oct 2011, David Rientjes wrote:

 On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
 
  From: Hugh Dickins hu...@chromium.org
  
  Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
  we're going to reboot immediately, the user will not be able to see the
  messages anyway, and messing with the video mode may display artifacts,
  and certainly get into several layers of complexity (including mutexes and
  memory allocations) which we shall be much safer to avoid.
  
  Signed-off-by: Hugh Dickins hu...@google.com
  [ Edited commit message and modified to short-circuit panic_timeout < 0
instead of testing panic_timeout >= 0.  -Mandeep ]
  Signed-off-by: Mandeep Singh Baines m...@chromium.org
  Cc: Dave Airlie airl...@redhat.com
  Cc: Andrew Morton a...@linux-foundation.org
  Cc: dri-devel@lists.freedesktop.org
 
 Acked-by: David Rientjes rient...@google.com
 

Dave, where do we stand on this?  I haven't seen it hit Linus' tree and I 
don't see it in git://people.freedesktop.org/~airlied/linux.


Re: [PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread Mandeep Singh Baines
David Rientjes (rient...@google.com) wrote:
 On Mon, 17 Oct 2011, David Rientjes wrote:
 
  On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
  
   From: Hugh Dickins hu...@chromium.org
   
   Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
   we're going to reboot immediately, the user will not be able to see the
   messages anyway, and messing with the video mode may display artifacts,
   and certainly get into several layers of complexity (including mutexes and
   memory allocations) which we shall be much safer to avoid.
   
   Signed-off-by: Hugh Dickins hu...@google.com
   [ Edited commit message and modified to short-circuit panic_timeout < 0
 instead of testing panic_timeout >= 0.  -Mandeep ]
   Signed-off-by: Mandeep Singh Baines m...@chromium.org
   Cc: Dave Airlie airl...@redhat.com
   Cc: Andrew Morton a...@linux-foundation.org
   Cc: dri-devel@lists.freedesktop.org
  
  Acked-by: David Rientjes rient...@google.com
  
 
 Dave, where do we stand on this?  I haven't seen it hit Linus' tree and I 
 don't see it in git://people.freedesktop.org/~airlied/linux.

The last status I have is Andrew pulling it into mmotm on 10/18/11.

Subject: + 
drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch added to 
-mm tree
From: a...@linux-foundation.org
Date: Tue, 18 Oct 2011 15:42:46 -0700


The patch titled
 Subject: drm: avoid switching to text console if there is no panic timeout
has been added to the -mm tree.  Its filename is
 drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch

Where is mmotm hosted these days?

Regards,
Mandeep


Re: [PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread Dave Airlie
On Thu, Nov 10, 2011 at 9:15 PM, Mandeep Singh Baines m...@chromium.org wrote:
 David Rientjes (rient...@google.com) wrote:
 On Mon, 17 Oct 2011, David Rientjes wrote:

  On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
 
   From: Hugh Dickins hu...@chromium.org
  
   Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
   we're going to reboot immediately, the user will not be able to see the
   messages anyway, and messing with the video mode may display artifacts,
   and certainly get into several layers of complexity (including mutexes 
   and
   memory allocations) which we shall be much safer to avoid.
  
   Signed-off-by: Hugh Dickins hu...@google.com
   [ Edited commit message and modified to short-circuit panic_timeout < 0
     instead of testing panic_timeout >= 0.  -Mandeep ]
   Signed-off-by: Mandeep Singh Baines m...@chromium.org
   Cc: Dave Airlie airl...@redhat.com
   Cc: Andrew Morton a...@linux-foundation.org
   Cc: dri-devel@lists.freedesktop.org
 
  Acked-by: David Rientjes rient...@google.com
 

 Dave, where do we stand on this?  I haven't seen it hit Linus' tree and I
 don't see it in git://people.freedesktop.org/~airlied/linux.

I've just pulled it into my local drm-next, thanks for reminding me.

Dave.


Re: [PATCH] drm: avoid switching to text console if there is no panic timeout

2011-11-10 Thread Andrew Morton
On Thu, 10 Nov 2011 13:15:04 -0800
Mandeep Singh Baines m...@chromium.org wrote:

 David Rientjes (rient...@google.com) wrote:
  On Mon, 17 Oct 2011, David Rientjes wrote:
  
   On Mon, 17 Oct 2011, Mandeep Singh Baines wrote:
   
From: Hugh Dickins hu...@chromium.org

Add a check for panic_timeout in the drm_fb_helper_panic() notifier: if
we're going to reboot immediately, the user will not be able to see the
messages anyway, and messing with the video mode may display artifacts,
and certainly get into several layers of complexity (including mutexes 
and
memory allocations) which we shall be much safer to avoid.

Signed-off-by: Hugh Dickins hu...@google.com
[ Edited commit message and modified to short-circuit panic_timeout < 0
  instead of testing panic_timeout >= 0.  -Mandeep ]
Signed-off-by: Mandeep Singh Baines m...@chromium.org
Cc: Dave Airlie airl...@redhat.com
Cc: Andrew Morton a...@linux-foundation.org
Cc: dri-devel@lists.freedesktop.org
   
   Acked-by: David Rientjes rient...@google.com
   
  
  Dave, where do we stand on this?  I haven't seen it hit Linus' tree and I 
  don't see it in git://people.freedesktop.org/~airlied/linux.
 
 The last status I have is Andrew pulling it into mmotm on 10/18/11.
 
 Subject: + 
 drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch added 
 to -mm tree
 From: a...@linux-foundation.org
 Date: Tue, 18 Oct 2011 15:42:46 -0700
 
 
 The patch titled
  Subject: drm: avoid switching to text console if there is no panic 
 timeout
 has been added to the -mm tree.  Its filename is
  drm-avoid-switching-to-text-console-if-there-is-no-panic-timeout.patch

I need to do another round of sending patches to maintainers.

It's a depressing exercise because the great majority of patches are
simply ignored.  Last time I even added "please don't ignore" to the
email Subject: on the more important ones.  Sigh.

 Where is mmotm hosted these days?

On my disk, until kernel.org ftp access returns.

But I regularly email tarballs to Stephen, so it's all in linux-next. 
The mmotm tree is largely unneeded now - use linux-next to get at -mm
patches.



Re: [PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Jerome Glisse
On Thu, Nov 10, 2011 at 09:05:22PM +0100, Thomas Hellstrom wrote:
 On 11/10/2011 07:05 PM, Jerome Glisse wrote:
 On Thu, Nov 10, 2011 at 11:27:33AM +0100, Thomas Hellstrom wrote:
 On 11/09/2011 09:22 PM, j.gli...@gmail.com wrote:
 From: Jerome Glisse jgli...@redhat.com
 
 This is an overhaul of the ttm memory accounting. This tries to keep
 the same global behavior while removing the whole zone concept. It
 keeps a distinction for dma32 so that we make sure that ttm doesn't
 starve the dma32 zone.
 
 There are 3 thresholds for memory allocation :
 - max_mem is the maximum memory the whole ttm infrastructure is
going to allow allocation for (exception of system process see
below)
 - emer_mem is the maximum memory allowed for system process, this
limit is greater than max_mem
 - swap_limit is the threshold at which point ttm will start to
try to swap object because ttm is getting close to the max_mem
limit
 - swap_dma32_limit is the threshold at which point ttm will start
swap object to try to reduce the pressure on the dma32 zone. Note
that we don't specifically target object to swap, so it might very
well free more memory from highmem rather than from dma32
 
 Accounting is done through used_mem & used_dma32_mem, whose sum gives
 the total amount of memory actually accounted by ttm.
 
 Idea is that allocation will fail if (used_mem + used_dma32_mem) >
 max_mem and if swapping fails to make enough room.
 
 The used_dma32_mem can be updated at a later stage, allowing to
 perform accounting tests before allocating a whole batch of pages.
 
 Jerome, you're removing a fair amount of functionality here, without
 justifying
 why it could be removed.
 All this code was overkill.
 
 [1] I don't agree, and since it's well tested, thought through and
 working, I see no obvious reason to alter it, within the context of
 this patch series unless it's absolutely required for the
 functionality.

Well, one thing i can tell is that it doesn't work on radeon; i pushed
a test to libdrm and here it's the OOM killer that starts doing its
beating. Anyway i won't alter it. I was just trying to make it work,
i.e. be useful while also being simpler.

 Consider a low-end system with 1G of kernel memory and 10G of
 highmem. How do we avoid putting stress on the kernel memory? I also
 wouldn't be too surprised if DMA32 zones appear in HIGHMEM systems
 in the future making the current zone concept good to keep.
 Right now kernel memory is accounted against all zones, so it decreases
 not only the kernel zone but also the dma32 & highmem zones if present.
 
 Do you mean that the code is incorrect? In that case, did you
 consider the fact
 that zones may overlap? (Although I admit the name highmem might
 be misleading. Should be total).

Yeah, i am well aware that zones overlap :)

 Note also that kernel zone in current code == dma32 zone.
 
 Last time I looked, the highmem split was typically at slightly less
 than 1GB, depending on the hardware and desired setup. I admit that
 was some time ago, but has that really changed? On all archs?
 Furthermore, on !Highmem systems, All pages are in the kernel zone.

I was a bit too focused on my systems, where 1G of ram is wonderland
and 512M is the average. Thanks to AMD i got a system with 8G; i
should use it more.

Cheers,
Jerome


ttm: merge ttm_backend & ttm_tt, introduce ttm dma allocator V4

2011-11-10 Thread j . glisse
So i squeezed it all to avoid any memory accounting messing; it seems
to work ok so far.

Cheers,
Jerome



[PATCH 01/13] swiotlb: Expose swiotlb_nr_tlb function to modules

2011-11-10 Thread j . glisse
From: Konrad Rzeszutek Wilk konrad.w...@oracle.com

As a mechanism to detect whether SWIOTLB is enabled or not.
We also fix the spelling - it was swioltb instead of
swiotlb.
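
For illustration, the way a driver module ends up using the exported
symbol (as the nouveau hunk later in this series does; a sketch only,
with the helper names taken from the other patches):

static int example_tt_populate(struct ttm_tt *ttm, struct device *dev)
{
#ifdef CONFIG_SWIOTLB
	if (swiotlb_nr_tbl())		/* non-zero slabs: SWIOTLB is active */
		return ttm_dma_populate((void *)ttm, dev);
#endif
	return ttm_pool_populate(ttm);	/* plain page pool otherwise */
}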

CC: FUJITA Tomonori fujita.tomon...@lab.ntt.co.jp
[v1: Ripped out swiotlb_enabled]
Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
---
 drivers/xen/swiotlb-xen.c |2 +-
 include/linux/swiotlb.h   |2 +-
 lib/swiotlb.c |5 +++--
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index c984768..c50fb0b 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -152,7 +152,7 @@ void __init xen_swiotlb_init(int verbose)
char *m = NULL;
unsigned int repeat = 3;
 
-   nr_tbl = swioltb_nr_tbl();
+   nr_tbl = swiotlb_nr_tbl();
if (nr_tbl)
xen_io_tlb_nslabs = nr_tbl;
else {
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 445702c..e872526 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -24,7 +24,7 @@ extern int swiotlb_force;
 
 extern void swiotlb_init(int verbose);
 extern void swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int 
verbose);
-extern unsigned long swioltb_nr_tbl(void);
+extern unsigned long swiotlb_nr_tbl(void);
 
 /*
  * Enumeration for sync targets
diff --git a/lib/swiotlb.c b/lib/swiotlb.c
index 99093b3..058935e 100644
--- a/lib/swiotlb.c
+++ b/lib/swiotlb.c
@@ -110,11 +110,11 @@ setup_io_tlb_npages(char *str)
 __setup("swiotlb=", setup_io_tlb_npages);
 /* make io_tlb_overflow tunable too? */
 
-unsigned long swioltb_nr_tbl(void)
+unsigned long swiotlb_nr_tbl(void)
 {
return io_tlb_nslabs;
 }
-
+EXPORT_SYMBOL_GPL(swiotlb_nr_tbl);
 /* Note that this doesn't work with highmem page */
 static dma_addr_t swiotlb_virt_to_bus(struct device *hwdev,
  volatile void *address)
@@ -321,6 +321,7 @@ void __init swiotlb_free(void)
 		free_bootmem_late(__pa(io_tlb_start),
 				  PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
}
+   io_tlb_nslabs = 0;
 }
 
 static int is_swiotlb_buffer(phys_addr_t paddr)
-- 
1.7.7.1



[PATCH 02/13] drm/ttm: remove userspace backed ttm object support

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

This was never used by any of the drivers; properly using userspace
pages for a bo would need more code (vma interaction mostly). Remove
this dead code in preparation of the ttm_tt & backend merge.

Signed-off-by: Jerome Glisse jgli...@redhat.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Thomas Hellstrom thellst...@vmware.com
---
 drivers/gpu/drm/ttm/ttm_bo.c|   22 
 drivers/gpu/drm/ttm/ttm_tt.c|  105 +--
 include/drm/ttm/ttm_bo_api.h|5 --
 include/drm/ttm/ttm_bo_driver.h |   24 -
 4 files changed, 1 insertions(+), 155 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index 617b646..4bde335 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -342,22 +342,6 @@ static int ttm_bo_add_ttm(struct ttm_buffer_object *bo, bool zero_alloc)
 		if (unlikely(bo->ttm == NULL))
 			ret = -ENOMEM;
 		break;
-	case ttm_bo_type_user:
-		bo->ttm = ttm_tt_create(bdev, bo->num_pages << PAGE_SHIFT,
-					page_flags | TTM_PAGE_FLAG_USER,
-					glob->dummy_read_page);
-		if (unlikely(bo->ttm == NULL)) {
-			ret = -ENOMEM;
-			break;
-		}
-
-		ret = ttm_tt_set_user(bo->ttm, current,
-				      bo->buffer_start, bo->num_pages);
-		if (unlikely(ret != 0)) {
-			ttm_tt_destroy(bo->ttm);
-			bo->ttm = NULL;
-		}
-		break;
 	default:
 		printk(KERN_ERR TTM_PFX "Illegal buffer object type\n");
 		ret = -EINVAL;
@@ -907,16 +891,12 @@ static uint32_t ttm_bo_select_caching(struct ttm_mem_type_manager *man,
 }
 
 static bool ttm_bo_mt_compatible(struct ttm_mem_type_manager *man,
-				 bool disallow_fixed,
 				 uint32_t mem_type,
 				 uint32_t proposed_placement,
 				 uint32_t *masked_placement)
 {
 	uint32_t cur_flags = ttm_bo_type_flags(mem_type);
 
-	if ((man->flags & TTM_MEMTYPE_FLAG_FIXED) && disallow_fixed)
-		return false;
-
 	if ((cur_flags & proposed_placement & TTM_PL_MASK_MEM) == 0)
 		return false;
 
@@ -961,7 +941,6 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo,
 		man = &bdev->man[mem_type];
 
 		type_ok = ttm_bo_mt_compatible(man,
-						bo->type == ttm_bo_type_user,
 						mem_type,
 						placement->placement[i],
 						&cur_flags);
@@ -1009,7 +988,6 @@ int ttm_bo_mem_space(struct ttm_buffer_object *bo,
 		if (!man->has_type)
 			continue;
 		if (!ttm_bo_mt_compatible(man,
-						bo->type == ttm_bo_type_user,
 						mem_type,
 						placement->busy_placement[i],
 						&cur_flags))
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 58c271e..82a1161 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -62,43 +62,6 @@ static void ttm_tt_free_page_directory(struct ttm_tt *ttm)
 	ttm->dma_address = NULL;
 }
 
-static void ttm_tt_free_user_pages(struct ttm_tt *ttm)
-{
-	int write;
-	int dirty;
-	struct page *page;
-	int i;
-	struct ttm_backend *be = ttm->be;
-
-	BUG_ON(!(ttm->page_flags & TTM_PAGE_FLAG_USER));
-	write = ((ttm->page_flags & TTM_PAGE_FLAG_WRITE) != 0);
-	dirty = ((ttm->page_flags & TTM_PAGE_FLAG_USER_DIRTY) != 0);
-
-	if (be)
-		be->func->clear(be);
-
-	for (i = 0; i < ttm->num_pages; ++i) {
-		page = ttm->pages[i];
-		if (page == NULL)
-			continue;
-
-		if (page == ttm->dummy_read_page) {
-			BUG_ON(write);
-			continue;
-		}
-
-		if (write && dirty && !PageReserved(page))
-			set_page_dirty_lock(page);
-
-		ttm->pages[i] = NULL;
-		ttm_mem_global_free(ttm->glob->mem_glob, PAGE_SIZE);
-		put_page(page);
-	}
-	ttm->state = tt_unpopulated;
-	ttm->first_himem_page = ttm->num_pages;
-	ttm->last_lomem_page = -1;
-}
-
 static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index)
 {
 	struct page *p;
@@ -325,10 +288,7 @@ void ttm_tt_destroy(struct ttm_tt *ttm)
 	}
 
 	if (likely(ttm->pages != NULL)) {
-		if 

[PATCH 03/13] drm/ttm: remove split btw highmem and lowmem page

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

The split between highmem and lowmem pages was rendered useless by the
pool code. Remove it. Note: further cleanup would change the ttm page
allocation helper to actually take an array instead of relying on a
list; this could drastically reduce the number of function calls in the
common case of allocating a whole buffer.

Signed-off-by: Jerome Glisse jgli...@redhat.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Thomas Hellstrom thellst...@vmware.com
---
 drivers/gpu/drm/ttm/ttm_tt.c|   11 ++-
 include/drm/ttm/ttm_bo_driver.h |7 ---
 2 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 82a1161..8b7a6d0 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -69,7 +69,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index)
 	struct ttm_mem_global *mem_glob = ttm->glob->mem_glob;
 	int ret;
 
-	while (NULL == (p = ttm->pages[index])) {
+	if (NULL == (p = ttm->pages[index])) {
 
 		INIT_LIST_HEAD(&h);
 
@@ -85,10 +85,7 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index)
 		if (unlikely(ret != 0))
 			goto out_err;
 
-		if (PageHighMem(p))
-			ttm->pages[--ttm->first_himem_page] = p;
-		else
-			ttm->pages[++ttm->last_lomem_page] = p;
+		ttm->pages[index] = p;
 	}
 	return p;
 out_err:
@@ -270,8 +267,6 @@ static void ttm_tt_free_alloced_pages(struct ttm_tt *ttm)
 	ttm_put_pages(&h, count, ttm->page_flags, ttm->caching_state,
 		      ttm->dma_address);
 	ttm->state = tt_unpopulated;
-	ttm->first_himem_page = ttm->num_pages;
-	ttm->last_lomem_page = -1;
 }
 
 void ttm_tt_destroy(struct ttm_tt *ttm)
@@ -315,8 +310,6 @@ struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, unsigned long size,
 
 	ttm->glob = bdev->glob;
 	ttm->num_pages = (size + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	ttm->first_himem_page = ttm->num_pages;
-	ttm->last_lomem_page = -1;
 	ttm->caching_state = tt_cached;
 	ttm->page_flags = page_flags;
 
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 37527d6..9da182b 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -136,11 +136,6 @@ enum ttm_caching_state {
  * @dummy_read_page: Page to map where the ttm_tt page array contains a NULL
  * pointer.
  * @pages: Array of pages backing the data.
- * @first_himem_page: Himem pages are put last in the page array, which
- * enables us to run caching attribute changes on only the first part
- * of the page array containing lomem pages. This is the index of the
- * first himem page.
- * @last_lomem_page: Index of the last lomem page in the page array.
  * @num_pages: Number of pages in the page array.
  * @bdev: Pointer to the current struct ttm_bo_device.
  * @be: Pointer to the ttm backend.
@@ -157,8 +152,6 @@ enum ttm_caching_state {
 struct ttm_tt {
struct page *dummy_read_page;
struct page **pages;
-   long first_himem_page;
-   long last_lomem_page;
uint32_t page_flags;
unsigned long num_pages;
struct ttm_bo_global *glob;
-- 
1.7.7.1



[PATCH 04/13] drm/ttm: remove unused backend flags field

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

This field is not used by any of the drivers, just drop it.

Signed-off-by: Jerome Glisse jgli...@redhat.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Thomas Hellstrom thellst...@vmware.com
---
 drivers/gpu/drm/radeon/radeon_ttm.c |1 -
 include/drm/ttm/ttm_bo_driver.h |2 --
 2 files changed, 0 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index 0b5468b..97c76ae 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -787,7 +787,6 @@ struct ttm_backend *radeon_ttm_backend_create(struct radeon_device *rdev)
 		return NULL;
 	}
 	gtt->backend.bdev = &rdev->mman.bdev;
-	gtt->backend.flags = 0;
 	gtt->backend.func = &radeon_backend_func;
 	gtt->rdev = rdev;
 	gtt->pages = NULL;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 9da182b..6d17140 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -106,7 +106,6 @@ struct ttm_backend_func {
  * struct ttm_backend
  *
  * @bdev: Pointer to a struct ttm_bo_device.
- * @flags: For driver use.
  * @func: Pointer to a struct ttm_backend_func that describes
  * the backend methods.
  *
@@ -114,7 +113,6 @@ struct ttm_backend_func {
 
 struct ttm_backend {
struct ttm_bo_device *bdev;
-   uint32_t flags;
struct ttm_backend_func *func;
 };
 
-- 
1.7.7.1



[PATCH 05/13] drm/ttm: use ttm put pages function to properly restore cache attribute

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

On failure we need to make sure the pages we free have the wb cache
attribute. Do this by calling the proper ttm page helper function.

Signed-off-by: Jerome Glisse jgli...@redhat.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Thomas Hellstrom thellst...@vmware.com
---
 drivers/gpu/drm/ttm/ttm_tt.c |5 -
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 8b7a6d0..3fb4c6d 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -89,7 +89,10 @@ static struct page *__ttm_tt_get_page(struct ttm_tt *ttm, int index)
 	}
 	return p;
 out_err:
-	put_page(p);
+	INIT_LIST_HEAD(&h);
+	list_add(&p->lru, &h);
+	ttm_put_pages(&h, 1, ttm->page_flags,
+		      ttm->caching_state, &ttm->dma_address[index]);
 	return NULL;
 }
 
-- 
1.7.7.1



[PATCH 06/13] drm/ttm: test for dma_address array allocation failure

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

Signed-off-by: Jerome Glisse jgli...@redhat.com
Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Thomas Hellstrom thellst...@vmware.com
---
 drivers/gpu/drm/ttm/ttm_tt.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 3fb4c6d..aceecb5 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -319,7 +319,7 @@ struct ttm_tt *ttm_tt_create(struct ttm_bo_device *bdev, unsigned long size,
 	ttm->dummy_read_page = dummy_read_page;
 
 	ttm_tt_alloc_page_directory(ttm);
-	if (!ttm->pages) {
+	if (!ttm->pages || !ttm->dma_address) {
 		ttm_tt_destroy(ttm);
 		printk(KERN_ERR TTM_PFX "Failed allocating page table\n");
 		return NULL;
-- 
1.7.7.1



[PATCH 07/13] drm/ttm: page allocation use page array instead of list

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

Use the ttm_tt pages array for pages allocations, move the list
unwinding into the page allocation functions.

Signed-off-by: Jerome Glisse jgli...@redhat.com
---
 drivers/gpu/drm/ttm/ttm_page_alloc.c |   85 +-
 drivers/gpu/drm/ttm/ttm_tt.c |   36 +++
 include/drm/ttm/ttm_page_alloc.h |8 ++--
 3 files changed, 63 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 727e93d..0f3e6d2 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -619,8 +619,10 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool,
  * @return count of pages still required to fulfill the request.
  */
 static unsigned ttm_page_pool_get_pages(struct ttm_page_pool *pool,
-					struct list_head *pages, int ttm_flags,
-					enum ttm_caching_state cstate, unsigned count)
+					struct list_head *pages,
+					int ttm_flags,
+					enum ttm_caching_state cstate,
+					unsigned count)
 {
 	unsigned long irq_flags;
 	struct list_head *p;
@@ -664,13 +666,15 @@ out:
  * On success pages list will hold count number of correctly
  * cached pages.
  */
-int ttm_get_pages(struct list_head *pages, int flags,
-		  enum ttm_caching_state cstate, unsigned count,
+int ttm_get_pages(struct page **pages, int flags,
+		  enum ttm_caching_state cstate, unsigned npages,
 		  dma_addr_t *dma_address)
 {
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
+	struct list_head plist;
 	struct page *p = NULL;
 	gfp_t gfp_flags = GFP_USER;
+	unsigned count;
 	int r;
 
 	/* set zero flag for page allocation if required */
@@ -684,7 +688,7 @@ int ttm_get_pages(struct list_head *pages, int flags,
 		else
 			gfp_flags |= GFP_HIGHUSER;
 
-		for (r = 0; r < count; ++r) {
+		for (r = 0; r < npages; ++r) {
 			p = alloc_page(gfp_flags);
 			if (!p) {
 
@@ -693,85 +697,100 @@ int ttm_get_pages(struct list_head *pages, int flags,
 				return -ENOMEM;
 			}
 
-			list_add(&p->lru, pages);
+			pages[r] = p;
 		}
 		return 0;
 	}
 
-
 	/* combine zero flag to pool flags */
 	gfp_flags |= pool->gfp_flags;
 
 	/* First we take pages from the pool */
-	count = ttm_page_pool_get_pages(pool, pages, flags, cstate, count);
+	INIT_LIST_HEAD(&plist);
+	npages = ttm_page_pool_get_pages(pool, &plist, flags, cstate, npages);
+	count = 0;
+	list_for_each_entry(p, &plist, lru) {
+		pages[count++] = p;
+	}
 
 	/* clear the pages coming from the pool if requested */
 	if (flags & TTM_PAGE_FLAG_ZERO_ALLOC) {
-		list_for_each_entry(p, pages, lru) {
+		list_for_each_entry(p, &plist, lru) {
 			clear_page(page_address(p));
 		}
 	}
 
 	/* If pool didn't have enough pages allocate new one. */
-	if (count > 0) {
+	if (npages > 0) {
 		/* ttm_alloc_new_pages doesn't reference pool so we can run
 		 * multiple requests in parallel.
 		 **/
-		r = ttm_alloc_new_pages(pages, gfp_flags, flags, cstate, count);
+		INIT_LIST_HEAD(&plist);
+		r = ttm_alloc_new_pages(&plist, gfp_flags, flags, cstate, npages);
+		list_for_each_entry(p, &plist, lru) {
+			pages[count++] = p;
+		}
 		if (r) {
 			/* If there is any pages in the list put them back to
 			 * the pool. */
 			printk(KERN_ERR TTM_PFX
 			       "Failed to allocate extra pages "
 			       "for large request.");
-			ttm_put_pages(pages, 0, flags, cstate, NULL);
+			ttm_put_pages(pages, count, flags, cstate, NULL);
 			return r;
 		}
 	}
 
-
 	return 0;
 }
 
 /* Put all pages in pages list to correct pool to wait for reuse */
-void ttm_put_pages(struct list_head *pages, unsigned page_count, int flags,
+void ttm_put_pages(struct page **pages, unsigned npages, int flags,
 		   enum ttm_caching_state cstate, dma_addr_t *dma_address)
 {
 	unsigned long irq_flags;
 	struct ttm_page_pool *pool = ttm_get_pool(flags, cstate);
-	struct page *p, *tmp;
+	unsigned i;
 
 	if (pool == NULL) {
 		/* No pool for this memory type so free the pages */

[PATCH 09/13] drm/ttm: introduce callback for ttm_tt populate unpopulate V4

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

Move the page allocation and freeing to driver callbacks and
provide ttm code helper functions for those.

The most intrusive change is the fact that we now only fully
populate an object; this simplifies some of the code designed around
the page fault design.

V2 Rebase on top of memory accounting overhaul
V3 New rebase on top of more memory accouting changes
V4 Rebase on top of no memory account changes (where/when is my
   delorean when i need it ?)

Signed-off-by: Jerome Glisse jgli...@redhat.com
---
 drivers/gpu/drm/nouveau/nouveau_bo.c   |3 +
 drivers/gpu/drm/radeon/radeon_ttm.c|2 +
 drivers/gpu/drm/ttm/ttm_bo_util.c  |   31 ++-
 drivers/gpu/drm/ttm/ttm_bo_vm.c|9 +++-
 drivers/gpu/drm/ttm/ttm_page_alloc.c   |   57 
 drivers/gpu/drm/ttm/ttm_tt.c   |   91 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c |3 +
 include/drm/ttm/ttm_bo_driver.h|   41 --
 include/drm/ttm/ttm_page_alloc.h   |   18 ++
 9 files changed, 135 insertions(+), 120 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index b060fa4..f19ac42 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -28,6 +28,7 @@
  */
 
 #include "drmP.h"
+#include "ttm/ttm_page_alloc.h"
 
 #include "nouveau_drm.h"
 #include "nouveau_drv.h"
@@ -1050,6 +1051,8 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct 
nouveau_fence *fence)
 
 struct ttm_bo_driver nouveau_bo_driver = {
.ttm_tt_create = nouveau_ttm_tt_create,
+   .ttm_tt_populate = ttm_pool_populate,
+   .ttm_tt_unpopulate = ttm_pool_unpopulate,
.invalidate_caches = nouveau_bo_invalidate_caches,
.init_mem_type = nouveau_bo_init_mem_type,
.evict_flags = nouveau_bo_evict_flags,
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c 
b/drivers/gpu/drm/radeon/radeon_ttm.c
index 53ff62b..13d5996 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -584,6 +584,8 @@ struct ttm_tt *radeon_ttm_tt_create(struct ttm_bo_device 
*bdev,
 
 static struct ttm_bo_driver radeon_bo_driver = {
.ttm_tt_create = radeon_ttm_tt_create,
+   .ttm_tt_populate = ttm_pool_populate,
+   .ttm_tt_unpopulate = ttm_pool_unpopulate,
.invalidate_caches = radeon_invalidate_caches,
.init_mem_type = radeon_init_mem_type,
.evict_flags = radeon_evict_flags,
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c 
b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 082fcae..60f204d 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -244,7 +244,7 @@ static int ttm_copy_io_ttm_page(struct ttm_tt *ttm, void *src,
 				unsigned long page,
 				pgprot_t prot)
 {
-	struct page *d = ttm_tt_get_page(ttm, page);
+	struct page *d = ttm->pages[page];
 	void *dst;
 
 	if (!d)
@@ -281,7 +281,7 @@ static int ttm_copy_ttm_io_page(struct ttm_tt *ttm, void *dst,
 				unsigned long page,
 				pgprot_t prot)
 {
-	struct page *s = ttm_tt_get_page(ttm, page);
+	struct page *s = ttm->pages[page];
 	void *src;
 
 	if (!s)
@@ -342,6 +342,12 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
 	if (old_iomap == NULL && ttm == NULL)
 		goto out2;
 
+	if (ttm->state == tt_unpopulated) {
+		ret = ttm->bdev->driver->ttm_tt_populate(ttm);
+		if (ret)
+			goto out1;
+	}
+
 	add = 0;
 	dir = 1;
 
@@ -502,10 +508,16 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
 {
 	struct ttm_mem_reg *mem = &bo->mem; pgprot_t prot;
 	struct ttm_tt *ttm = bo->ttm;
-	struct page *d;
-	int i;
+	int ret;
 
 	BUG_ON(!ttm);
+
+	if (ttm->state == tt_unpopulated) {
+		ret = ttm->bdev->driver->ttm_tt_populate(ttm);
+		if (ret)
+			return ret;
+	}
+
 	if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED)) {
 		/*
 		 * We're mapping a single page, and the desired
 		 */
 
 		map->bo_kmap_type = ttm_bo_map_kmap;
-		map->page = ttm_tt_get_page(ttm, start_page);
+		map->page = ttm->pages[start_page];
 		map->virtual = kmap(map->page);
 	} else {
-		/*
-		 * Populate the part we're mapping;
-		 */
-		for (i = start_page; i < start_page + num_pages; ++i) {
-			d = ttm_tt_get_page(ttm, i);
-			if (!d)
-				return -ENOMEM;
-		}
-
 		/*
 		 * We need to use vmap to get the desired page 

[PATCH 08/13] drm/ttm: merge ttm_backend and ttm_tt V4

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

ttm_backend will only ever exist with a ttm_tt, and a ttm_tt
will only be of interesting use when bound to a backend. Thus, to
avoid code & data duplication between the two, merge them.

V2 Rebase on top of memory accounting overhaul
V3 Rebase on top of more memory accounting changes
V4 Rebase on top of no memory account changes (where/when is my
   delorean when i need it ?)

Signed-off-by: Jerome Glisse jgli...@redhat.com
---
 drivers/gpu/drm/nouveau/nouveau_bo.c|   14 ++-
 drivers/gpu/drm/nouveau/nouveau_drv.h   |5 +-
 drivers/gpu/drm/nouveau/nouveau_sgdma.c |  188 --
 drivers/gpu/drm/radeon/radeon_ttm.c |  222 ---
 drivers/gpu/drm/ttm/ttm_agp_backend.c   |   88 +
 drivers/gpu/drm/ttm/ttm_bo.c|9 +-
 drivers/gpu/drm/ttm/ttm_tt.c|   59 ++---
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c  |   66 +++--
 include/drm/ttm/ttm_bo_driver.h |  104 ++-
 9 files changed, 295 insertions(+), 460 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c 
b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 7226f41..b060fa4 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -343,8 +343,10 @@ nouveau_bo_wr32(struct nouveau_bo *nvbo, unsigned index, 
u32 val)
*mem = val;
 }
 
-static struct ttm_backend *
-nouveau_bo_create_ttm_backend_entry(struct ttm_bo_device *bdev)
+static struct ttm_tt *
+nouveau_ttm_tt_create(struct ttm_bo_device *bdev,
+ unsigned long size, uint32_t page_flags,
+ struct page *dummy_read_page)
 {
struct drm_nouveau_private *dev_priv = nouveau_bdev(bdev);
struct drm_device *dev = dev_priv-dev;
@@ -352,11 +354,13 @@ nouveau_bo_create_ttm_backend_entry(struct ttm_bo_device 
*bdev)
switch (dev_priv-gart_info.type) {
 #if __OS_HAS_AGP
case NOUVEAU_GART_AGP:
-   return ttm_agp_backend_init(bdev, dev-agp-bridge);
+   return ttm_agp_tt_create(bdev, dev-agp-bridge,
+size, page_flags, dummy_read_page);
 #endif
case NOUVEAU_GART_PDMA:
case NOUVEAU_GART_HW:
-   return nouveau_sgdma_init_ttm(dev);
+   return nouveau_sgdma_create_ttm(bdev, size, page_flags,
+   dummy_read_page);
default:
NV_ERROR(dev, Unknown GART type %d\n,
 dev_priv-gart_info.type);
@@ -1045,7 +1049,7 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct 
nouveau_fence *fence)
 }
 
 struct ttm_bo_driver nouveau_bo_driver = {
-   .create_ttm_backend_entry = nouveau_bo_create_ttm_backend_entry,
+   .ttm_tt_create = nouveau_ttm_tt_create,
.invalidate_caches = nouveau_bo_invalidate_caches,
.init_mem_type = nouveau_bo_init_mem_type,
.evict_flags = nouveau_bo_evict_flags,
diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h 
b/drivers/gpu/drm/nouveau/nouveau_drv.h
index 29837da..0c53e39 100644
--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
+++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
@@ -1000,7 +1000,10 @@ extern int nouveau_sgdma_init(struct drm_device *);
 extern void nouveau_sgdma_takedown(struct drm_device *);
 extern uint32_t nouveau_sgdma_get_physical(struct drm_device *,
   uint32_t offset);
-extern struct ttm_backend *nouveau_sgdma_init_ttm(struct drm_device *);
+extern struct ttm_tt *nouveau_sgdma_create_ttm(struct ttm_bo_device *bdev,
+  unsigned long size,
+  uint32_t page_flags,
+  struct page *dummy_read_page);
 
 /* nouveau_debugfs.c */
 #if defined(CONFIG_DRM_NOUVEAU_DEBUG)
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c 
b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index b75258a..bc2ab90 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -8,44 +8,23 @@
 #define NV_CTXDMA_PAGE_MASK  (NV_CTXDMA_PAGE_SIZE - 1)
 
 struct nouveau_sgdma_be {
-   struct ttm_backend backend;
+   struct ttm_tt ttm;
struct drm_device *dev;
-
-   dma_addr_t *pages;
-   unsigned nr_pages;
-   bool unmap_pages;
-
u64 offset;
-   bool bound;
 };
 
 static int
-nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages,
-		       struct page **pages, struct page *dummy_read_page,
-		       dma_addr_t *dma_addrs)
+nouveau_sgdma_dma_map(struct ttm_tt *ttm)
 {
-	struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)be;
+	struct nouveau_sgdma_be *nvbe = (struct nouveau_sgdma_be *)ttm;
 	struct drm_device *dev = nvbe->dev;
 	int i;
 
-	NV_DEBUG(nvbe->dev, "num_pages = %ld\n", num_pages);
-
-	nvbe->pages = dma_addrs;
-   

[PATCH 10/13] drm/ttm: provide dma aware ttm page pool code V7

2011-11-10 Thread j . glisse
From: Konrad Rzeszutek Wilk konrad.w...@oracle.com

In TTM world the pages for the graphic drivers are kept in three different
pools: write combined, uncached, and cached (write-back). When the pages
are used by the graphic driver the graphic adapter via its built in MMU
(or AGP) programs these pages in. The programming requires the virtual address
(from the graphic adapter perspective) and the physical address (either System 
RAM
or the memory on the card) which is obtained using the pci_map_* calls (which 
does the
virtual to physical - or bus address translation). During the graphic 
application's
life those pages can be shuffled around, swapped out to disk, moved from the
VRAM to System RAM or vice-versa. This all works with the existing TTM pool code
- except when we want to use the software IOTLB (SWIOTLB) code to map the 
physical
addresses to the graphic adapter MMU. We end up programming the bounce buffer's
physical address instead of the TTM pool memory's and get a non-worky driver.
There are two solutions:
1) using the DMA API to allocate pages that are screened by the DMA API, or
2) using the pci_sync_* calls to copy the pages from the bounce-buffer and back.

This patch fixes the issue by allocating pages using the DMA API. The second
is a viable option - but it has performance drawbacks and potential correctness
issues - think of the write-cached page being bounced (SWIOTLB -> TTM): the
WC is set on the TTM page and the copy from SWIOTLB does not make it to the TTM
page until the page has been recycled in the pool (and used by another
application).

The bounce buffer does not get activated often - only in cases where we have
a 32-bit capable card and we want to use a page that is allocated above the
4GB limit. The bounce buffer offers the solution of copying the contents
of that >4GB page to a location below 4GB and then back when the operation has 
been
completed (or vice-versa). This is done by using the 'pci_sync_*' calls.
Note: If you look carefully enough in the existing TTM page pool code you will
notice the GFP_DMA32 flag is used - which should guarantee that the provided page
is under 4GB. It certainly is the case, except this gets ignored in two cases:
 - If user specifies 'swiotlb=force' which bounces _every_ page.
 - If user is using a Xen's PV Linux guest (which uses the SWIOTLB and the
   underlying PFN's aren't necessarily under 4GB).

To not have this extra copying done the other option is to allocate the pages
using the DMA API so that there is no need to map the page and perform the
expensive 'pci_sync_*' calls.

This DMA API capable TTM pool requires the 'struct device' to
properly call the DMA API. It also has to track the virtual and bus address of
the page being handed out in case it ends up being swapped out or de-allocated -
to make sure it is de-allocated using the proper 'struct device'.

Implementation wise the code keeps two lists: one that is attached to the
'struct device' (via the dev->dma_pools list) and a global one to be used when
the 'struct device' is unavailable (think shrinker code). The global list can
iterate over all of the 'struct device' and its associated dma_pool. The list
in dev->dma_pools can only iterate the device's dma_pool.
[ASCII diagram: two dma pools (WC and UC) hanging off a 'struct device'
via dev->dma_pools, with a parallel global list of 'struct device_pool'
entries pairing each 'dev' with its 'dma_pool']
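
A sketch of that bookkeeping (field and type names assumed for
illustration, not necessarily the patch's exact ones):

struct device_pools {
	struct list_head pools;	/* link in the global list */
	struct device *dev;	/* the device this pool serves */
	struct dma_pool *pool;	/* one of the WC/UC/cached pools */
};

/* Global list: walks every (device, pool) pair, usable from the
 * shrinker where no 'struct device' is at hand. */
static LIST_HEAD(dma_pools_global);

/* Per-device list: dev->dma_pools walks only that device's pools,
 * e.g. to pick the matching pool when allocating for the device. */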

The maximum amount of dma pools a device can have is six: write-combined,
uncached, and cached; then there are the DMA32 variants which are:
write-combined dma32, uncached dma32, and cached dma32.

Currently this code only gets activated when any variant of the SWIOTLB IOMMU
code is running (Intel without VT-d, AMD without GART, IBM Calgary and Xen PV
with PCI devices).

Tested-by: Michel Dänzer mic...@daenzer.net
[v1: Using swiotlb_nr_tbl instead of swiotlb_enabled]
[v2: Major overhaul - added 'inuse_list' to separate used from inuse and reorder
the order of lists to get better performance.]
[v3: Added comments/and some logic based on review, Added Jerome tag]
[v4: rebase on top of ttm_tt & ttm_backend merge]
[v5: rebase on top of ttm memory accounting overhaul]
[v6: New rebase on top of more memory accounting changes]
[v7: well rebase on top of no memory accounting changes]

[PATCH 11/13] drm/radeon/kms: enable the ttm dma pool if swiotlb is on V3

2011-11-10 Thread j . glisse
From: Konrad Rzeszutek Wilk konrad.w...@oracle.com

With the exception that we do not handle the AGP case. We only
deal with PCIe cards such as ATI ES1000 or HD3200 that have been
detected to only do DMA up to 32-bits.

V2 force dma32 if we fail to set bigger dma mask
V3 Rebase on top of no memory account changes (where/when is my
   delorean when i need it ?)

CC: Dave Airlie airl...@redhat.com
CC: Alex Deucher alexdeuc...@gmail.com
Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Jerome Glisse jgli...@redhat.com
---
 drivers/gpu/drm/radeon/radeon.h|1 -
 drivers/gpu/drm/radeon/radeon_device.c |6 ++
 drivers/gpu/drm/radeon/radeon_gart.c   |   29 +---
 drivers/gpu/drm/radeon/radeon_ttm.c|   83 +--
 4 files changed, 84 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index e3170c7..63257ba 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -332,7 +332,6 @@ struct radeon_gart {
 	union radeon_gart_table	table;
 	struct page		**pages;
 	dma_addr_t		*pages_addr;
-	bool			*ttm_alloced;
 	bool			ready;
 };
 
diff --git a/drivers/gpu/drm/radeon/radeon_device.c 
b/drivers/gpu/drm/radeon/radeon_device.c
index c33bc91..7c31321 100644
--- a/drivers/gpu/drm/radeon/radeon_device.c
+++ b/drivers/gpu/drm/radeon/radeon_device.c
@@ -765,8 +765,14 @@ int radeon_device_init(struct radeon_device *rdev,
r = pci_set_dma_mask(rdev-pdev, DMA_BIT_MASK(dma_bits));
if (r) {
rdev-need_dma32 = true;
+   dma_bits = 32;
printk(KERN_WARNING radeon: No suitable DMA available.\n);
}
+   r = pci_set_consistent_dma_mask(rdev-pdev, DMA_BIT_MASK(dma_bits));
+   if (r) {
+   pci_set_consistent_dma_mask(rdev-pdev, DMA_BIT_MASK(32));
+   printk(KERN_WARNING radeon: No coherent DMA available.\n);
+   }
 
/* Registers mapping */
/* TODO: block userspace mapping of io register */
diff --git a/drivers/gpu/drm/radeon/radeon_gart.c 
b/drivers/gpu/drm/radeon/radeon_gart.c
index fdc3a9a..18f496c 100644
--- a/drivers/gpu/drm/radeon/radeon_gart.c
+++ b/drivers/gpu/drm/radeon/radeon_gart.c
@@ -149,9 +149,6 @@ void radeon_gart_unbind(struct radeon_device *rdev, 
unsigned offset,
p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
for (i = 0; i  pages; i++, p++) {
if (rdev-gart.pages[p]) {
-   if (!rdev-gart.ttm_alloced[p])
-   pci_unmap_page(rdev-pdev, 
rdev-gart.pages_addr[p],
-   PAGE_SIZE, 
PCI_DMA_BIDIRECTIONAL);
rdev-gart.pages[p] = NULL;
rdev-gart.pages_addr[p] = rdev-dummy_page.addr;
page_base = rdev-gart.pages_addr[p];
@@ -181,23 +178,7 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
 	p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
 
 	for (i = 0; i < pages; i++, p++) {
-		/* we reverted the patch using dma_addr in TTM for now but this
-		 * code stops building on alpha so just comment it out for now */
-		if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */
-			rdev->gart.ttm_alloced[p] = true;
-			rdev->gart.pages_addr[p] = dma_addr[i];
-		} else {
-			/* we need to support large memory configurations */
-			/* assume that unbind have already been call on the range */
-			rdev->gart.pages_addr[p] = pci_map_page(rdev->pdev, pagelist[i],
-							0, PAGE_SIZE,
-							PCI_DMA_BIDIRECTIONAL);
-			if (pci_dma_mapping_error(rdev->pdev, rdev->gart.pages_addr[p])) {
-				/* FIXME: failed to map page (return -ENOMEM?) */
-				radeon_gart_unbind(rdev, offset, pages);
-				return -ENOMEM;
-			}
-		}
+		rdev->gart.pages_addr[p] = dma_addr[i];
 		rdev->gart.pages[p] = pagelist[i];
 		page_base = rdev->gart.pages_addr[p];
 		for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
@@ -259,12 +240,6 @@ int radeon_gart_init(struct radeon_device *rdev)
 		radeon_gart_fini(rdev);
 		return -ENOMEM;
 	}
-	rdev->gart.ttm_alloced = kzalloc(sizeof(bool) *
-					 rdev->gart.num_cpu_pages, GFP_KERNEL);
-	if (rdev->gart.ttm_alloced == NULL) {
-		radeon_gart_fini(rdev);
-		return -ENOMEM;
-	}
/* set GART entry 

[PATCH 13/13] drm/ttm: isolate dma data from ttm_tt V2

2011-11-10 Thread j . glisse
From: Jerome Glisse jgli...@redhat.com

Move the DMA data into a superset ttm_dma_tt structure which inherits
from ttm_tt. This allows drivers that don't use the DMA functionality
to avoid wasting memory on it.
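
Concretely, the split looks roughly like this (a sketch inferred from
the diffs below; the real structure may carry more fields than shown):

	#include <drm/ttm/ttm_bo_driver.h>	/* for struct ttm_tt */

	struct ttm_dma_tt {
		struct ttm_tt ttm;	/* must remain the first member so
					 * drivers can downcast from a
					 * struct ttm_tt pointer */
		dma_addr_t *dma_address;
	};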

V2 Rebase on top of no memory accounting changes (where/when is my
   delorean when i need it ?)

Signed-off-by: Jerome Glisse jgli...@redhat.com
---
 drivers/gpu/drm/nouveau/nouveau_bo.c |   18 +++--
 drivers/gpu/drm/nouveau/nouveau_sgdma.c  |   22 --
 drivers/gpu/drm/radeon/radeon_ttm.c  |   43 ++--
 drivers/gpu/drm/ttm/ttm_page_alloc.c |  114 +++---
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c |   35 +
 drivers/gpu/drm/ttm/ttm_tt.c |   58 +---
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c   |2 +
 include/drm/ttm/ttm_bo_driver.h  |   31 -
 include/drm/ttm/ttm_page_alloc.h |   33 +
 9 files changed, 202 insertions(+), 154 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 2dc0d83..d6326af 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1052,6 +1052,7 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence)
 static int
 nouveau_ttm_tt_populate(struct ttm_tt *ttm)
 {
+	struct ttm_dma_tt *ttm_dma = (void *)ttm;
 	struct drm_nouveau_private *dev_priv;
 	struct drm_device *dev;
 	unsigned i;
@@ -1065,7 +1066,7 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)
 
 #ifdef CONFIG_SWIOTLB
 	if (swiotlb_nr_tbl()) {
-		return ttm_dma_populate(ttm, dev->dev);
+		return ttm_dma_populate((void *)ttm, dev->dev);
 	}
 #endif
 
@@ -1075,14 +1076,14 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)
 	}
 
 	for (i = 0; i < ttm->num_pages; i++) {
-		ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i],
+		ttm_dma->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i],
 						   0, PAGE_SIZE,
 						   PCI_DMA_BIDIRECTIONAL);
-		if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) {
+		if (pci_dma_mapping_error(dev->pdev, ttm_dma->dma_address[i])) {
 			while (--i) {
-				pci_unmap_page(dev->pdev, ttm->dma_address[i],
+				pci_unmap_page(dev->pdev, ttm_dma->dma_address[i],
 					       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
-				ttm->dma_address[i] = 0;
+				ttm_dma->dma_address[i] = 0;
 			}
 			ttm_pool_unpopulate(ttm);
 			return -EFAULT;
@@ -1094,6 +1095,7 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)
 static void
 nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm)
 {
+	struct ttm_dma_tt *ttm_dma = (void *)ttm;
 	struct drm_nouveau_private *dev_priv;
 	struct drm_device *dev;
 	unsigned i;
@@ -1103,14 +1105,14 @@ nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm)
 
 #ifdef CONFIG_SWIOTLB
 	if (swiotlb_nr_tbl()) {
-		ttm_dma_unpopulate(ttm, dev->dev);
+		ttm_dma_unpopulate((void *)ttm, dev->dev);
 		return;
 	}
 #endif
 
 	for (i = 0; i < ttm->num_pages; i++) {
-		if (ttm->dma_address[i]) {
-			pci_unmap_page(dev->pdev, ttm->dma_address[i],
+		if (ttm_dma->dma_address[i]) {
+			pci_unmap_page(dev->pdev, ttm_dma->dma_address[i],
 				       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
 		}
 	}
diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
index ee1eb7c..47f245e 100644
--- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
+++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
@@ -8,7 +8,10 @@
 #define NV_CTXDMA_PAGE_MASK  (NV_CTXDMA_PAGE_SIZE - 1)
 
 struct nouveau_sgdma_be {
-	struct ttm_tt ttm;
+	/* this has to be the first field so populate/unpopulate in
+	 * nouveau_bo.c works properly, otherwise have to move them here
+	 */
+	struct ttm_dma_tt ttm;
 	struct drm_device *dev;
 	u64 offset;
 };
@@ -20,6 +23,7 @@ nouveau_sgdma_destroy(struct ttm_tt *ttm)
 
 	if (ttm) {
 		NV_DEBUG(nvbe->dev, "\n");
+		ttm_dma_tt_fini(&nvbe->ttm);
 		kfree(nvbe);
 	}
 }
@@ -38,7 +42,7 @@ nv04_sgdma_bind(struct ttm_tt *ttm, struct ttm_mem_reg *mem)
 	nvbe->offset = mem->start << PAGE_SHIFT;
 	pte = (nvbe->offset >> NV_CTXDMA_PAGE_SHIFT) + 2;
 	for (i = 0; i < ttm->num_pages; i++) {
-		dma_addr_t dma_offset = ttm->dma_address[i];
+		dma_addr_t dma_offset = nvbe->ttm.dma_address[i];
 		uint32_t offset_l = lower_32_bits(dma_offset);
 
 		for (j = 0; j < PAGE_SIZE / 

[PATCH 12/13] drm/nouveau: enable the ttm dma pool when swiotlb is active V3

2011-11-10 Thread j . glisse
From: Konrad Rzeszutek Wilk konrad.w...@oracle.com

If the card is capable of more than 32-bit DMA addressing, then use
the default TTM page pool code, which allocates from anywhere in
memory.

Note: If the 'ttm.no_dma' parameter is set, the override is ignored
and the default TTM pool is used.

V2 use pci_set_consistent_dma_mask
V3 Rebase on top of no memory accounting changes (where/when is my
   delorean when i need it ?)
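
The resulting selection policy is small enough to sketch; ttm_no_dma
below stands in for however the 'ttm.no_dma' option is wired up, which
is an assumption on my part:

	#include <linux/swiotlb.h>
	#include <drm/ttm/ttm_page_alloc.h>

	static bool ttm_no_dma;	/* assumed mirror of 'ttm.no_dma' */

	static int nouveau_populate_sketch(struct ttm_tt *ttm,
					   struct device *dev)
	{
	#ifdef CONFIG_SWIOTLB
		/* Use the DMA pool only when SWIOTLB is active and the
		 * user has not opted out; the cast works because ttm_tt
		 * is the first member of ttm_dma_tt. */
		if (!ttm_no_dma && swiotlb_nr_tbl())
			return ttm_dma_populate((void *)ttm, dev);
	#endif
		/* Cards capable of more than 32-bit DMA take the default
		 * page pool, which allocates from anywhere in memory. */
		return ttm_pool_populate(ttm);
	}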

CC: Ben Skeggs bske...@redhat.com
CC: Francisco Jerez curroje...@riseup.net
CC: Dave Airlie airl...@redhat.com
Signed-off-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
Reviewed-by: Jerome Glisse jgli...@redhat.com
---
 drivers/gpu/drm/nouveau/nouveau_bo.c  |   73 -
 drivers/gpu/drm/nouveau/nouveau_debugfs.c |1 +
 drivers/gpu/drm/nouveau/nouveau_mem.c |6 ++
 drivers/gpu/drm/nouveau/nouveau_sgdma.c   |   60 +---
 4 files changed, 79 insertions(+), 61 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index f19ac42..2dc0d83 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1049,10 +1049,79 @@ nouveau_bo_fence(struct nouveau_bo *nvbo, struct nouveau_fence *fence)
 	nouveau_fence_unref(&old_fence);
 }
 
+static int
+nouveau_ttm_tt_populate(struct ttm_tt *ttm)
+{
+	struct drm_nouveau_private *dev_priv;
+	struct drm_device *dev;
+	unsigned i;
+	int r;
+
+	if (ttm->state != tt_unpopulated)
+		return 0;
+
+	dev_priv = nouveau_bdev(ttm->bdev);
+	dev = dev_priv->dev;
+
+#ifdef CONFIG_SWIOTLB
+	if (swiotlb_nr_tbl()) {
+		return ttm_dma_populate(ttm, dev->dev);
+	}
+#endif
+
+	r = ttm_pool_populate(ttm);
+	if (r) {
+		return r;
+	}
+
+	for (i = 0; i < ttm->num_pages; i++) {
+		ttm->dma_address[i] = pci_map_page(dev->pdev, ttm->pages[i],
+						   0, PAGE_SIZE,
+						   PCI_DMA_BIDIRECTIONAL);
+		if (pci_dma_mapping_error(dev->pdev, ttm->dma_address[i])) {
+			while (--i) {
+				pci_unmap_page(dev->pdev, ttm->dma_address[i],
+					       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+				ttm->dma_address[i] = 0;
+			}
+			ttm_pool_unpopulate(ttm);
+			return -EFAULT;
+		}
+	}
+	return 0;
+}
+
+static void
+nouveau_ttm_tt_unpopulate(struct ttm_tt *ttm)
+{
+	struct drm_nouveau_private *dev_priv;
+	struct drm_device *dev;
+	unsigned i;
+
+	dev_priv = nouveau_bdev(ttm->bdev);
+	dev = dev_priv->dev;
+
+#ifdef CONFIG_SWIOTLB
+	if (swiotlb_nr_tbl()) {
+		ttm_dma_unpopulate(ttm, dev->dev);
+		return;
+	}
+#endif
+
+	for (i = 0; i < ttm->num_pages; i++) {
+		if (ttm->dma_address[i]) {
+			pci_unmap_page(dev->pdev, ttm->dma_address[i],
+				       PAGE_SIZE, PCI_DMA_BIDIRECTIONAL);
+		}
+	}
+
+	ttm_pool_unpopulate(ttm);
+}
+
 struct ttm_bo_driver nouveau_bo_driver = {
 	.ttm_tt_create = nouveau_ttm_tt_create,
-	.ttm_tt_populate = ttm_pool_populate,
-	.ttm_tt_unpopulate = ttm_pool_unpopulate,
+	.ttm_tt_populate = nouveau_ttm_tt_populate,
+	.ttm_tt_unpopulate = nouveau_ttm_tt_unpopulate,
 	.invalidate_caches = nouveau_bo_invalidate_caches,
 	.init_mem_type = nouveau_bo_init_mem_type,
 	.evict_flags = nouveau_bo_evict_flags,
diff --git a/drivers/gpu/drm/nouveau/nouveau_debugfs.c b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
index 8e15923..f52c2db 100644
--- a/drivers/gpu/drm/nouveau/nouveau_debugfs.c
+++ b/drivers/gpu/drm/nouveau/nouveau_debugfs.c
@@ -178,6 +178,7 @@ static struct drm_info_list nouveau_debugfs_list[] = {
 	{ "memory", nouveau_debugfs_memory_info, 0, NULL },
 	{ "vbios.rom", nouveau_debugfs_vbios_image, 0, NULL },
 	{ "ttm_page_pool", ttm_page_alloc_debugfs, 0, NULL },
+	{ "ttm_dma_page_pool", ttm_dma_page_alloc_debugfs, 0, NULL },
 };
 #define NOUVEAU_DEBUGFS_ENTRIES ARRAY_SIZE(nouveau_debugfs_list)
 
diff --git a/drivers/gpu/drm/nouveau/nouveau_mem.c b/drivers/gpu/drm/nouveau/nouveau_mem.c
index 36bec48..37fcaa2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_mem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_mem.c
@@ -407,6 +407,12 @@ nouveau_mem_vram_init(struct drm_device *dev)
 	ret = pci_set_dma_mask(dev->pdev, DMA_BIT_MASK(dma_bits));
 	if (ret)
 		return ret;
+	ret = pci_set_consistent_dma_mask(dev->pdev, DMA_BIT_MASK(dma_bits));
+	if (ret) {
+		/* Reset to default value. */
+		pci_set_consistent_dma_mask(dev->pdev, DMA_BIT_MASK(32));
+	}
+
 
 	ret = 

Re: [PATCH 05/13] drm/ttm: overhaul memory accounting

2011-11-10 Thread Thomas Hellstrom

On 11/11/2011 12:33 AM, Jerome Glisse wrote:

On Thu, Nov 10, 2011 at 09:05:22PM +0100, Thomas Hellstrom wrote:

On 11/10/2011 07:05 PM, Jerome Glisse wrote:

On Thu, Nov 10, 2011 at 11:27:33AM +0100, Thomas Hellstrom wrote:

On 11/09/2011 09:22 PM, j.gli...@gmail.com wrote:

From: Jerome Glisse jgli...@redhat.com

This is an overhaul of the ttm memory accounting. This tries to keep
the same global behavior while removing the whole zone concept. It
keeps a distinction for dma32 so that we make sure ttm doesn't
starve the dma32 zone.

There are four thresholds for memory allocation:
- max_mem is the maximum memory the whole ttm infrastructure will
  allow allocations for (with the exception of system processes, see
  below)
- emer_mem is the maximum memory allowed for system processes; this
  limit is above max_mem
- swap_limit is the threshold at which point ttm will start to try
  to swap objects because ttm is getting close to the max_mem limit
- swap_dma32_limit is the threshold at which point ttm will start
  swapping objects to try to reduce the pressure on the dma32 zone.
  Note that we don't specifically target objects to swap, so it might
  very well free more memory from highmem rather than from dma32

Accounting is done through used_mem & used_dma32_mem, whose sum gives
the total amount of memory actually accounted for by ttm.

The idea is that allocation will fail if (used_mem + used_dma32_mem) >
max_mem and if swapping fails to make enough room.

The used_dma32_mem can be updated at a later stage, allowing an
accounting test to be performed before allocating a whole batch of pages.

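A minimal sketch of the admission test described above (the function and
field names here are assumptions, not the patch's actual API):

	#include <linux/types.h>

	struct ttm_mem_global_sketch {
		u64 max_mem, emer_mem;		/* hard limits */
		u64 used_mem, used_dma32_mem;	/* current accounting */
	};

	/* Admit an allocation of 'size' bytes; system processes get the
	 * higher emer_mem limit.  When this returns false the caller is
	 * expected to swap objects out and retry before failing. */
	static bool ttm_mem_admit(struct ttm_mem_global_sketch *g,
				  u64 size, bool system_process)
	{
		u64 limit = system_process ? g->emer_mem : g->max_mem;

		return g->used_mem + g->used_dma32_mem + size <= limit;
	}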

Jerome, you're removing a fair amount of functionality here, without
justifying why it could be removed.

All this code was overkill.

I don't agree, and since it's well tested, thought through and
working, I see no obvious reason to alter it within the context of
this patch series unless it's absolutely required for the functionality.

Well, one thing I can tell is that it doesn't work on radeon; I pushed
a test to libdrm and here it's the OOM killer that starts doing its
beating. Anyway, I won't alter it. I was just trying to make it work,
i.e. be useful while also being simpler.


Well, if it doesn't work it should of course be fixed.

I'm not against fixing it or making it simpler, but I think that
requires a detailed understanding of what's going wrong and how it needs
to be fixed, not as part of a patch series that really tries to
accomplish something else.


The current code was tested extensively with psb and unichrome.
One good test for drivers with bo-backed textures is to continuously
create fairly large texture images. The end result should be the swap
space starting to fill up, and once there is no more swap space, the OOM
killer should kill your app; kmalloc failures should be avoided. It
should be tricky to get a failure from the global alloc system, but a
huge number of small buffer objects or fence objects should probably do it.
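
Sketched out, such a test is little more than the loop below, with
create_texture_bo() and fill_texture() standing in for whatever
driver-specific helpers the test suite provides (both hypothetical):

	#include <stddef.h>

	struct test_bo;
	struct test_bo *create_texture_bo(size_t bytes);	/* hypothetical */
	void fill_texture(struct test_bo *bo);			/* hypothetical */

	/* Keep allocating ~16 MiB textures until the accounting layer
	 * refuses: swap should fill first, then the OOM killer should
	 * take the app down; a kmalloc failure would be a bug. */
	static void thrash_textures(void)
	{
		for (;;) {
			struct test_bo *bo = create_texture_bo(16 << 20);
			if (!bo)
				break;	/* refused even after swapping */
			fill_texture(bo);	/* force page population */
		}
	}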


Naturally, that requires that all persistent drm objects created from
user-space are registered with their correct sizes, or at least a really
good size approximation. That includes things like gem flinks, which
could otherwise easily be exploited to bring a system down, simply by
guessing a gem name and creating flinks to that name in an infinite loop.


What are the symptoms of the failure you're seeing with Radeon? Any 
suggestions on why it happens?


Thanks,
Thomas
