Bug#920900: pkgdata in icu-devtools(63.2-2) pkg on deb10 stable still use icu-config

2019-08-29 Thread Gregory Auzanneau

Dear Maintainer,

As you have confirmed, icu-config is now deprecated.

However, pkgdata in icu-devtools (63.2-2) on Debian 10 (stable) still
refers to icu-config:

$ pkgdata -p bin_mkltfs -m static packagelist.txt
sh: 1: icu-config: not found
sh: 1: icu-config: not found
pkgdata: icu-config: No icu-config found. (fix PATH or use -O option)
 required parameter is missing: -O is required for static and shared builds.

Run 'pkgdata --help' for help.

icu-63.2/source/tools/pkgdata/pkgdata.cpp, lines 2137-2172:

/* Try calling icu-config directly to get the option file. */
static int32_t pkg_getOptionsFromICUConfig(UBool verbose, UOption *option) {
#if U_HAVE_POPEN
    LocalPipeFilePointer p;
    size_t n;
    static char buf[512] = "";
    icu::CharString cmdBuf;
    UErrorCode status = U_ZERO_ERROR;
    const char cmd[] = "icu-config --incpkgdatafile";
    char dirBuf[1024] = "";
    /* #1 try the same path where pkgdata was called from. */
    findDirname(progname, dirBuf, UPRV_LENGTHOF(dirBuf), &status);
    if(U_SUCCESS(status)) {
        cmdBuf.append(dirBuf, status);
        if (cmdBuf[0] != 0) {
            cmdBuf.append( U_FILE_SEP_STRING, status );
        }
        cmdBuf.append( cmd, status );

        if(verbose) {
            fprintf(stdout, "# Calling icu-config: %s\n", cmdBuf.data());
        }
        p.adoptInstead(popen(cmdBuf.data(), "r"));
[...]

On http://userguide.icu-project.org/howtouseicu, there is a
recommendation about this:
"pkgdata uses the icu-config script in order to locate pkgdata.inc. If
you are not building ICU using the supplied tools, you may need to
modify this file directly to allow static and dll modes to function."


Thanks for the good work, keep it up!
Grégory



Bug#755545: Add glusterfs/libgfapi

2017-01-06 Thread Gregory Auzanneau

Dear Maintainer,

Now that glusterfs has finally been integrated into qemu (via
qemu-block-extra) on 2016-12-28, which closed the blocking bugs #775431
and #787112, is the integration of glusterfs into libvirt possible now?



Best regards,
Gregory Auzanneau



Bug#843118: seabios: Unable to boot KVM guest with "-display none"

2016-11-03 Thread Gregory Auzanneau
Package: seabios
Version: 1.9.3-2
Severity: normal

Dear Maintainer,

Since the latest update of seabios to 1.9.3-2, a guest machine can't start with 
the qemu parameter "-display none".
The guest hangs immediately at boot with all vCPUs at 100%.

If I revert to seabios 1.8.2-1, the problem is solved.
If I add a virtual graphics card, the problem is also solved.

Best regards,
Gregory

-- System Information:
Debian Release: stretch/sid
  APT prefers testing
  APT policy: (400, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 4.7.0-1-amd64 (SMP w/8 CPU cores)
Locale: LANG=fr_FR.UTF-8, LC_CTYPE=fr_FR.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

-- no debconf information



Bug#607327: mount: Performance issues with losetup (and therefore XEN)

2010-12-16 Thread Gregory Auzanneau
Package: mount
Version: 2.17.2-3.3
Severity: important

Hello all,

I'm currently experimenting with Xen and some intensive parallel I/O workloads 
on disks.
I've noticed performance issues in Xen introduced by losetup, which drops the 
NCQ/TCQ/queuing functionality (very useful for parallel random disk 
access).

I'm using this C program to measure the performance impact: 
http://box.houkouonchi.jp/seeker_baryluk.c (found on this website: 
http://www.linuxinsight.com/how_fast_is_your_disk.html )

Here are some benchmarks:

Performance of /dev/dm-2 (lvm) with 1 thread : 210 seeks/secs
r...@srv-xen1:~# ./seeker_baryluk /dev/dm-2 1
[1 threads]
Results: 210 seeks/second, 4.755 ms random access time (33493245 < offsets < 
60740308117)

Performance of /dev/dm-2 with 32 threads : 699 seeks/sec (more than 3x 
better)
r...@srv-xen1:~# ./seeker_baryluk /dev/dm-2 32
[32 threads]
Results: 699 seeks/second, 1.430 ms random access time (8670248 < offsets < 
60740120558)

Now we map /dev/dm-2 onto /dev/loop0 (yes, just a mapping of the LVM 
volume, without any filesystem involved):
r...@srv-xen1:~# losetup /dev/loop0 /dev/dm-2 

Performance of /dev/loop0 with 1 thread : exactly the same as the direct 
random seeks (good point here)
r...@srv-xen1:~# ./seeker_baryluk /dev/loop0 1
[1 threads]
Results: 210 seeks/second, 4.757 ms random access time (4255332 < offsets < 
60739140845)

Performance of /dev/loop0 with 32 threads : 211 seeks/sec <- Here we have 
"catastrophic" performance, because we completely lose the benefit of 
NCQ/TCQ/queuing !!
r...@srv-xen1:~# ./seeker_baryluk /dev/loop0 32
[32 threads]
Results: 211 seeks/second, 4.735 ms random access time (14948337 < offsets < 
60737675221)


Best regards,

Grégory



-- System Information:
Debian Release: squeeze/sid
  APT prefers testing
  APT policy: (500, 'testing')
Architecture: amd64 (x86_64)

Kernel: Linux 2.6.32-5-xen-amd64 (SMP w/2 CPU cores)
Locale: LANG=fr_LU.UTF-8, LC_CTYPE=fr_LU.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash

Versions of packages mount depends on:
ii  libblkid1 2.17.2-3.3 block device id library
ii  libc6 2.11.2-7   Embedded GNU C Library: Shared lib
ii  libselinux1   2.0.96-1   SELinux runtime shared libraries
ii  libsepol1 2.0.41-1   SELinux library for manipulating b
ii  libuuid1  2.17.2-3.3 Universally Unique ID library

mount recommends no packages.

Versions of packages mount suggests:
ii  nfs-common1:1.2.2-4  NFS support files common to client

-- no debconf information


