Re: [Spice-devel] [PATCH v2 spice-gtk] Adjust to window scaling
Hi

On Sun, Mar 17, 2019 at 4:28 PM Snir Sheriber wrote:
>
> When GDK_SCALE is != 1 and egl is used, the image presented does not
> fit the window (a scale of 2 is often used with hidpi monitors).
> Usually this is not a problem since all components are adjusted by
> gdk/gtk, but with egl, pixel-based data is not scaled. In this case
> the window's scale value can be used to determine whether to use a
> pixel resource with higher resolution data.
>
> To reproduce the problem, set up spice with virgl/Intel-vGPU and run
> spice-gtk with GDK_SCALE=2
> ---
> Changes from v1:
> - commit msg
> - renamed variable (ws to win_scale)
>
> This patch is kind of an RFC: it fixes the issue, but it's a bit
> hacky and specific. I didn't come across other scale issues, but it
> is likely that more of these exist and a better, generic fix is
> needed.
>
> ---
>  src/spice-widget-egl.c  | 15 +++++++++++++--
>  src/spice-widget-priv.h |  1 +
>  2 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/src/spice-widget-egl.c b/src/spice-widget-egl.c
> index 43fccd7..600c87a 100644
> --- a/src/spice-widget-egl.c
> +++ b/src/spice-widget-egl.c
> @@ -326,6 +326,8 @@ static gboolean spice_widget_init_egl_win(SpiceDisplay *display, GdkWindow *win,
>      if (d->egl.surface)
>          return TRUE;
>
> +    d->egl.scale = gdk_window_get_scale_factor(win);

Why not use gtk_widget_get_scale_factor() directly from
spice_egl_resize_display?

> +
>  #ifdef GDK_WINDOWING_X11
>      if (GDK_IS_X11_WINDOW(win)) {
>          native = (EGLNativeWindowType)GDK_WINDOW_XID(win);
> @@ -431,15 +433,17 @@ void spice_egl_resize_display(SpiceDisplay *display, int w, int h)
>  {
>      SpiceDisplayPrivate *d = display->priv;
>      int prog;
> +    gint win_scale;
>
>      if (!gl_make_current(display, NULL))
>          return;
>
> +    win_scale = d->egl.scale;
>      glGetIntegerv(GL_CURRENT_PROGRAM, &prog);
>
>      glUseProgram(d->egl.prog);
> -    apply_ortho(d->egl.mproj, 0, w, 0, h, -1, 1);
> -    glViewport(0, 0, w, h);
> +    apply_ortho(d->egl.mproj, 0, w * win_scale, 0, h * win_scale, -1, 1);
> +    glViewport(0, 0, w * win_scale, h * win_scale);
>
>      if (d->ready)
>          spice_egl_update_display(display);
> @@ -559,6 +563,13 @@ void spice_egl_update_display(SpiceDisplay *display)
>
>      spice_display_get_scaling(display, &s, &x, &y, &w, &h);
>
> +    // Adjust to gdk scale
> +    s *= d->egl.scale;
> +    x *= d->egl.scale;
> +    y *= d->egl.scale;
> +    w *= d->egl.scale;
> +    h *= d->egl.scale;
> +
>      glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
>      glClear(GL_COLOR_BUFFER_BIT);
>
> diff --git a/src/spice-widget-priv.h b/src/spice-widget-priv.h
> index 65eb404..8f110ac 100644
> --- a/src/spice-widget-priv.h
> +++ b/src/spice-widget-priv.h
> @@ -149,6 +149,7 @@ struct _SpiceDisplayPrivate {
>          EGLImageKHR image;
>          gboolean call_draw_done;
>          SpiceGlScanout scanout;
> +        gint scale;
>      } egl;
>  #endif // HAVE_EGL
>      double scroll_delta_y;
> --
> 2.19.1
>
> _______________________________________________
> Spice-devel mailing list
> Spice-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/spice-devel

--
Marc-André Lureau
[Spice-devel] [spice 1/2] utils: Remove the LL suffix from NSEC_PER_MILLISEC
This constant fits in a regular 32 bit signed integer so it does not
need the suffix. It is also not used in expressions that may overflow.

Signed-off-by: Francois Gouget
---
 server/utils.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/server/utils.h b/server/utils.h
index ea0de1529..54bc9d490 100644
--- a/server/utils.h
+++ b/server/utils.h
@@ -53,7 +53,7 @@ static inline int test_bit(int index, uint32_t val)
 typedef int64_t red_time_t;

 #define NSEC_PER_SEC      1000000000LL
-#define NSEC_PER_MILLISEC 1000000LL
+#define NSEC_PER_MILLISEC 1000000
 #define NSEC_PER_MICROSEC 1000

 /* g_get_monotonic_time() does not have enough precision */
--
2.20.1
[Spice-devel] [spice 2/2] utils: Remove the LL suffix from NSEC_PER_SEC
This constant fits in a 32 bit signed integer so it does not need the
suffix. However some of the derived constants don't, so use a uint64_t
cast to avoid the long vs long long confusion (such as in print
statements).

Also some of the expressions these constants are used in may overflow,
so perform the appropriate casts in those locations. This makes it
clearer that these operations are 64 bit.

Signed-off-by: Francois Gouget
---
 server/common-graphics-channel.h | 2 +-
 server/dcc.h                     | 2 +-
 server/gstreamer-encoder.c       | 2 +-
 server/mjpeg-encoder.c           | 2 +-
 server/utils.h                   | 4 ++--
 server/video-stream.c            | 4 ++--
 server/video-stream.h            | 2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/server/common-graphics-channel.h b/server/common-graphics-channel.h
index d23f0c695..dcd527718 100644
--- a/server/common-graphics-channel.h
+++ b/server/common-graphics-channel.h
@@ -27,7 +27,7 @@ G_BEGIN_DECLS

 bool common_channel_client_config_socket(RedChannelClient *rcc);

-#define COMMON_CLIENT_TIMEOUT (NSEC_PER_SEC * 30)
+#define COMMON_CLIENT_TIMEOUT (((uint64_t)NSEC_PER_SEC) * 30)

 #define TYPE_COMMON_GRAPHICS_CHANNEL common_graphics_channel_get_type()

diff --git a/server/dcc.h b/server/dcc.h
index 76c078bf5..da8697762 100644
--- a/server/dcc.h
+++ b/server/dcc.h
@@ -66,7 +66,7 @@ GType display_channel_client_get_type(void) G_GNUC_CONST;
 #define PALETTE_CACHE_HASH_KEY(id) ((id) & PALETTE_CACHE_HASH_MASK)
 #define CLIENT_PALETTE_CACHE_SIZE 128

-#define DISPLAY_CLIENT_MIGRATE_DATA_TIMEOUT (NSEC_PER_SEC * 10)
+#define DISPLAY_CLIENT_MIGRATE_DATA_TIMEOUT (((uint64_t)NSEC_PER_SEC) * 10)
 #define DISPLAY_CLIENT_RETRY_INTERVAL 1 //micro

 /* Each drawable can refer to at most 3 images: src, brush and mask */

diff --git a/server/gstreamer-encoder.c b/server/gstreamer-encoder.c
index 5f39ed877..91dd475ac 100644
--- a/server/gstreamer-encoder.c
+++ b/server/gstreamer-encoder.c
@@ -556,7 +556,7 @@ static void update_next_frame_mm_time(SpiceGstEncoder *encoder)
     /* Figure out how many frames to drop to not exceed the current bit rate.
      * Use nanoseconds to avoid precision loss.
      */
-    uint64_t delay_ns = -encoder->vbuffer_free * 8 * NSEC_PER_SEC / encoder->bit_rate;
+    uint64_t delay_ns = ((uint64_t)-encoder->vbuffer_free) * 8 * NSEC_PER_SEC / encoder->bit_rate;
     uint32_t drops = (delay_ns + period_ns - 1) / period_ns; /* round up */
     spice_debug("drops=%u vbuffer %d/%d", drops, encoder->vbuffer_free,
                 encoder->vbuffer_size);

diff --git a/server/mjpeg-encoder.c b/server/mjpeg-encoder.c
index 4a02e7c8b..d22e7d18b 100644
--- a/server/mjpeg-encoder.c
+++ b/server/mjpeg-encoder.c
@@ -65,7 +65,7 @@ static const int mjpeg_quality_samples[MJPEG_QUALITY_SAMPLE_NUM] = {20, 30, 40,
  * are not necessarily related to mis-estimation of the bit rate, and we would
  * like to wait till the stream stabilizes.
  */
-#define MJPEG_WARMUP_TIME (NSEC_PER_SEC * 3)
+#define MJPEG_WARMUP_TIME (((uint64_t)NSEC_PER_SEC) * 3)

 /* The compressed buffer initial size. */
 #define MJPEG_INITIAL_BUFFER_SIZE (32 * 1024)

diff --git a/server/utils.h b/server/utils.h
index 54bc9d490..4bbd45dff 100644
--- a/server/utils.h
+++ b/server/utils.h
@@ -52,7 +52,7 @@ static inline int test_bit(int index, uint32_t val)

 typedef int64_t red_time_t;

-#define NSEC_PER_SEC      1000000000LL
+#define NSEC_PER_SEC      1000000000
 #define NSEC_PER_MILLISEC 1000000
 #define NSEC_PER_MICROSEC 1000

@@ -62,7 +62,7 @@ static inline red_time_t spice_get_monotonic_time_ns(void)
     struct timespec time;

     clock_gettime(CLOCK_MONOTONIC, &time);
-    return NSEC_PER_SEC * time.tv_sec + time.tv_nsec;
+    return ((uint64_t)NSEC_PER_SEC) * time.tv_sec + time.tv_nsec;
 }

 #define MSEC_PER_SEC 1000

diff --git a/server/video-stream.c b/server/video-stream.c
index 4ac25af8b..7a2dca7dd 100644
--- a/server/video-stream.c
+++ b/server/video-stream.c
@@ -415,8 +415,8 @@ static void display_channel_create_stream(DisplayChannel *display, Drawable *drawable)
      * the nearest integer, for instance 24 for 23.976.
      */
     uint64_t duration = drawable->creation_time - drawable->first_frame_time;
-    if (duration > NSEC_PER_SEC * drawable->frames_count / MAX_FPS) {
-        stream->input_fps = (NSEC_PER_SEC * drawable->frames_count + duration / 2) / duration;
+    if (duration > ((uint64_t)NSEC_PER_SEC) * drawable->frames_count / MAX_FPS) {
+        stream->input_fps = (((uint64_t)NSEC_PER_SEC) * drawable->frames_count + duration / 2) / duration;
     } else {
         stream->input_fps = MAX_FPS;
     }

diff --git a/server/video-stream.h b/server/video-stream.h
index 46b076fd7..6c18d00f7 100644
--- a/server/video-stream.h
+++ b/server/video-stream.h
@@ -34,7 +34,7 @@
 #define RED_STREAM_GRADUAL_FRAMES_START_CONDITION 0.2
 #define RED_STREAM_FRAMES_RESET_CONDITION 100
 #define RED_STREAM_MIN_SIZE (96 * 96)
-#define
[Spice-devel] [spice 0/2] utils: Remove the LL suffix from the NSEC_PER_* constants
The LL suffix forces the expressions these constants are used in to be
computed in 64 bit. The good thing is that this avoids overflows. The
downside is that, because the LL suffix is hidden in a far away macro,
looking at the expression will not reveal whether it operates on 32 or
64 bit integers. This is made even more confusing by the fact that not
all the NSEC_PER_* constants have the LL suffix.

Then when a developer adds a spice_debug() trace using one of these
constants it also makes it hard to know in advance whether %ld or %lld
should be used, causing unnecessary compilation errors.

Finally, while the main NSEC_PER_* constants fit in 32 bit, some of
the derived ones such as COMMON_CLIENT_TIMEOUT don't. For those, use a
uint64_t cast rather than an LL suffix to avoid the %ld vs. %lld
confusion (again for debugging).

Francois Gouget (2):
  utils: Remove the LL suffix from NSEC_PER_MILLISEC
  utils: Remove the LL suffix from NSEC_PER_SEC

 server/common-graphics-channel.h | 2 +-
 server/dcc.h                     | 2 +-
 server/gstreamer-encoder.c       | 2 +-
 server/mjpeg-encoder.c           | 2 +-
 server/utils.h                   | 6 +++---
 server/video-stream.c            | 4 ++--
 server/video-stream.h            | 2 +-
 7 files changed, 10 insertions(+), 10 deletions(-)

--
2.20.1