Re: [PATCH][next] drm: Replace zero-length array with flexible-array member
Quoting Gustavo A. R. Silva (2020-02-25 14:03:47)
> The current codebase makes use of the zero-length array language
> extension to the C90 standard, but the preferred mechanism to declare
> variable-length types such as these ones is a flexible array member[1][2],
> introduced in C99:

I remember when gcc didn't support []. For the record, it appears support
for flexible arrays landed in gcc-3.0, so it passes the minimum compiler
spec. That would be useful to mention for old farts with forgetful
memories.
-Chris
___
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
Re: [PATCH v2 2/3] drm: plumb attaching dev thru to prime_pin/unpin
Quoting Rob Clark (2019-07-16 18:43:22)
> From: Rob Clark
>
> Needed in the following patch for cache operations.

What's the base for this patch? (I'm missing the ancestor for drm_gem.c)
-Chris
Re: [PATCH v2 00/12] remove_conflicting_framebuffers() cleanup
Quoting Daniel Vetter (2018-08-31 10:04:39)
> On Thu, Aug 30, 2018 at 11:00:01PM +0200, Michał Mirosław wrote:
> > This series cleans up duplicated code for replacing firmware FB
> > driver with proper DRI driver and adds handover support to
> > Tegra driver.
> >
> > This is a slightly updated version of a series sent on 24 Nov 2017.
> >
> > v2:
> >  - rebased on current drm-next
> >  - dropped staging/sm750fb changes
> >  - added kernel docs for DRM helpers
> >
> > Michał Mirosław (12):
> >   fbdev: show fbdev number for debugging
> >   fbdev: allow apertures == NULL in remove_conflicting_framebuffers()
> >   fbdev: add remove_conflicting_pci_framebuffers()
> >   drm/amdgpu: use simpler remove_conflicting_pci_framebuffers()
> >   drm/bochs: use simpler remove_conflicting_pci_framebuffers()
> >   drm/cirrus: use simpler remove_conflicting_pci_framebuffers()
> >   drm/mgag200: use simpler remove_conflicting_pci_framebuffers()
> >   drm/radeon: use simpler remove_conflicting_pci_framebuffers()
> >   drm/virtio: use simpler remove_conflicting_pci_framebuffers()
> >   drm/vc4: use simpler remove_conflicting_framebuffers(NULL)
> >   drm/sun4i: use simpler remove_conflicting_framebuffers(NULL)
> >   drm/tegra: kick out simplefb
>
> Looks very neat. A bit confused about the drm changes in the fbdev-titled
> patches 1&3, but I guess we can merge as-is. Up to you whether you want to
> split or not I'd say.

Ahah, someone is looking at remove_conflicting_framebuffers(). May I
interest you in a use-after-free?

[  378.423513] stack segment: [#1] PREEMPT SMP PTI
[  378.423530] CPU: 1 PID: 4338 Comm: pm_rpm Tainted: G     U  4.19.0-rc1-CI-CI_DRM_4746+ #1
[  378.423548] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./J4205-ITX, BIOS P1.10 09/29/2016
[  378.423570] RIP: 0010:do_remove_conflicting_framebuffers+0x56/0x170
[  378.423587] Code: 49 8b 45 00 48 85 c0 74 50 f6 40 0a 08 74 4a 4d 85 e4 48 8b a8 78 04 00 00 74 1f 48 85 ed 74 1a 41 8b 0c 24 31 db 85 c9 74 10 <8b> 55 00 85 d2 75 42 83 c3 01 41 39 1c 24 77 f0 48 85 ed 74 1a 45
[  378.423620] RSP: 0018:c91dfa88 EFLAGS: 00010202
[  378.423632] RAX: 880274470008 RBX: RCX: 0001
[  378.423646] RDX: 0001 RSI: a025c634 RDI: 88025cc3b428
[  378.423660] RBP: 6b6b6b6b6b6b6b6b R08: 1edaddfa R09: a025c634
[  378.423673] R10: c91dfae8 R11: 820de938 R12: 88025cc3b428
[  378.423687] R13: 8234ca20 R14: 8234cb20 R15: 0001
[  378.423701] FS: 7fcf03d0a980() GS:880277e8() knlGS:
[  378.423717] CS: 0010 DS: ES: CR0: 80050033
[  378.423729] CR2: 7fffece1fdb8 CR3: 0001fe32e000 CR4: 003406e0
[  378.423742] Call Trace:
[  378.423756]  remove_conflicting_framebuffers+0x28/0x40
[  378.423856]  i915_driver_load+0x7f5/0x10c0 [i915]
[  378.423873]  ? _raw_spin_unlock_irqrestore+0x4c/0x60
[  378.423887]  ? lockdep_hardirqs_on+0xe0/0x1b0
[  378.423962]  i915_pci_probe+0x29/0xa0 [i915]
[  378.423977]  pci_device_probe+0xa1/0x130
[  378.423990]  really_probe+0x25d/0x3c0
[  378.424002]  driver_probe_device+0x10a/0x120
[  378.424013]  __driver_attach+0xdb/0x100
[  378.424025]  ? driver_probe_device+0x120/0x120
[  378.424037]  bus_for_each_dev+0x74/0xc0
[  378.424048]  bus_add_driver+0x15f/0x250
[  378.424060]  ? 0xa069d000
[  378.424070]  driver_register+0x56/0xe0
[  378.424080]  ? 0xa069d000
[  378.424090]  do_one_initcall+0x58/0x2e0
[  378.424101]  ? rcu_lockdep_current_cpu_online+0x8f/0xd0
[  378.424116]  ? do_init_module+0x1d/0x1ea
[  378.424127]  ? rcu_read_lock_sched_held+0x6f/0x80
[  378.424141]  ? kmem_cache_alloc_trace+0x264/0x290
[  378.424154]  do_init_module+0x56/0x1ea
[  378.424167]  load_module+0x26ba/0x29a0
[  378.424182]  ? vfs_read+0x122/0x140
[  378.424199]  ? __se_sys_finit_module+0xd3/0xf0
[  378.424210]  __se_sys_finit_module+0xd3/0xf0
[  378.424226]  do_syscall_64+0x55/0x190
[  378.424237]  entry_SYSCALL_64_after_hwframe+0x49/0xbe
[  378.424249] RIP: 0033:0x7fcf02f9b839
[  378.424258] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 1f f6 2c 00 f7 d8 64 89 01 48
[  378.424290] RSP: 002b:7fffece21f58 EFLAGS: 0246 ORIG_RAX: 0139
[  378.424307] RAX: ffda RBX: 56344e1a4d80 RCX: 7fcf02f9b839
[  378.424321] RDX: RSI: 7fcf026470e5 RDI: 0003
[  378.424336] RBP: 7fcf026470e5 R08: R09:
[  378.424349] R10: 0003 R11: 0246 R12:
[  378.424363] R13: 56344e1a R14: R15: 56344e1a4d80

https://intel-gfx-ci.01.org/tree/drm-tip/IGT_4613/fi-bxt-j4205/dmesg0.log
-Chris
Re: [patch net-next 0/3] net/sched: Improve getting objects by indexes
Quoting Christian König (2017-08-16 08:49:07)
> Am 16.08.2017 um 04:12 schrieb Chris Mi:
> > Using current TC code, it is very slow to insert a lot of rules.
> >
> > In order to improve the rules update rate in TC,
> > we introduced the following two changes:
> >   1) changed cls_flower to use IDR to manage the filters.
> >   2) changed all act_xxx modules to use IDR instead of
> >      a small hash table
> >
> > But IDR has a limitation that it uses int. TC handle uses u32.
> > To make sure there is no regression, we also changed IDR to use
> > unsigned long. All clients of IDR are changed to use new IDR API.
>
> WOW, wait a second. The idr change is touching a lot of drivers and to
> be honest doesn't look correct at all.
>
> Just look at the first chunk of your modification:
>
> > @@ -998,8 +999,9 @@ int bsg_register_queue(struct request_queue *q, struct
> > device *parent,
> >
> >  	mutex_lock(&bsg_mutex);
> >
> > -	ret = idr_alloc(&bsg_minor_idr, bcd, 0, BSG_MAX_DEVS, GFP_KERNEL);
> > -	if (ret < 0) {
> > +	ret = idr_alloc(&bsg_minor_idr, bcd, _index, 0, BSG_MAX_DEVS,
> > +			GFP_KERNEL);
> > +	if (ret) {
> >  		if (ret == -ENOSPC) {
> >  			printk(KERN_ERR "bsg: too many bsg devices\n");
> >  			ret = -EINVAL;
>
> The condition "if (ret)" will now always be true after the first
> allocation and so we always run into the error handling after that.

ret is now purely the error code, so it doesn't look that suspicious.

> I've never read the bsg code before, but that's certainly not correct.
> And that incorrect pattern repeats over and over again in this code.
>
> Apart from that why the heck do you want to allocate more than 1<<31
> handles?

And more to the point, arbitrarily changing the maximum to ULONG_MAX
where the ABI only supports U32_MAX is dangerous. Unless you do the
analysis otherwise, you have to replace all the end=0 with end=INT_MAX
to maintain the existing behaviour.
-Chris
Re: [Intel-gfx] [PATCH 01/20] drm/atomic: Fix remaining places where !funcs->best_encoder is valid
On Thu, Jun 02, 2016 at 11:57:02PM +0200, Daniel Vetter wrote:
> drm_encoder_find is an idr lookup. That should be plenty fast,
> especially for modeset code. Usually what's too expensive even for
> modeset code is linear list walks. But Chris just submitted patches to
> convert most of them into simple lookups.

For the idr_find, I'm tempted to replace the mutex with a rwlock. It
helps pathological cases, but in reality there are more crucial locks
around the hw that limit concurrency. ;-)
-Chris
--
Chris Wilson, Intel Open Source Technology Centre