Bug#1025162: Re: Bug#1025162: akonadi-server won't start if the home directory is not immediately under /home

2022-12-01 Thread Josep Guerrero Solé
Hi again,

Just for future reference, in case someone checks this bug for the same 
problem: I just tried Hefee's suggestions (thanks a lot!), and there still 
seems to be a (correctable) problem. If I do as suggested, editing

 /etc/apparmor.d/tunables/home.d/site.local

and uncommenting the "@{HOMEDIRS}" line, I get an error when starting and 
stopping AppArmor:

> ERROR: Values added to a non-existing variable @{HOMEDIRS}: /home/nodens/ in
> tunables/home.d/site.local

(I'm not ruling out that I did it wrong. I just deleted the comment and added 
the new directory at the end, leaving the line like this

@{HOMEDIRS}+=/home/nodens/

), but I could edit 

 /etc/apparmor.d/tunables/home

and change the equivalent line:

@{HOMEDIRS}=/home/ /home/nodens/

, which produces no errors. 
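For reference, the edit that worked can be rehearsed on a scratch copy before 
touching the live file. A minimal sketch (the scratch path is made up for the 
demo; the real file edited above is /etc/apparmor.d/tunables/home):

```shell
# Rehearsal of the working fix in a scratch copy of the tunables file.
mkdir -p /tmp/aa-demo/tunables
printf '@{HOMEDIRS}=/home/ /home/nodens/\n' > /tmp/aa-demo/tunables/home
grep '/home/nodens/' /tmp/aa-demo/tunables/home
# On the real system, reload AppArmor afterwards, e.g.:
#   systemctl reload apparmor
```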

Regards,

Josep



Bug#1025162: akonadi-server won't start if the home directory is not immediately under /home

2022-11-30 Thread Josep Guerrero
Package: akonadi-server
Version: 4:20.08.3-3
Severity: important

Dear Maintainer,

   * What led up to the situation?

I updated from Debian buster to Debian bullseye. When logging in, akonadi 
always produced an error 
("exit code 253 (unknown error)") and, as a consequence, I couldn't start 
kmail at all.

   * What exactly did you do (or not do) that was effective (or
 ineffective)?

I discovered that akonadiserver would work for newly created users under 
/home, but not for newly created users under /home/directory. I moved my home 
from /home/nodens/user to /home/user and linked /home/nodens to /home.
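The move-and-symlink workaround described above can be rehearsed on scratch 
directories (a real run would target /home and need root; this is only my 
reading of the steps, with the names taken from the report):

```shell
# Move the home out of the subdirectory, then replace the subdirectory
# with a symlink so the old path still resolves.
mkdir -p /tmp/home-demo/home/nodens/user
mv /tmp/home-demo/home/nodens/user /tmp/home-demo/home/user
rmdir /tmp/home-demo/home/nodens
ln -s /tmp/home-demo/home /tmp/home-demo/home/nodens
ls -d /tmp/home-demo/home/nodens/user
```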

   * What was the outcome of this action?

Akonadiserver started working again for my user, but only when home was 
directly under /home. When it didn't work, dmesg showed some
apparmor errors.

   * What outcome did you expect instead?

I expected it to work wherever the home was.


-- System Information:
Debian Release: 11.5
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 5.10.0-19-amd64 (SMP w/4 CPU threads)
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), 
LANGUAGE=en_US:en
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

Versions of packages akonadi-server depends on:
ii  akonadi-backend-mysql                                    4:20.08.3-3
ii  libaccounts-qt5-1                                        1.16-2
ii  libc6                                                    2.31-13+deb11u5
ii  libgcc-s1                                                10.2.1-6
ii  libkf5akonadiprivate5abi2 [libkf5akonadiprivate5-20.08]  4:20.08.3-3
ii  libkf5akonadiwidgets5abi1 [libkf5akonadiwidgets5-20.08]  4:20.08.3-3
ii  libkf5configcore5                                        5.78.0-4
ii  libkf5coreaddons5                                        5.78.0-4
ii  libkf5crash5                                             5.78.0-3
ii  libkf5i18n5                                              5.78.0-2
ii  libqt5core5a                                             5.15.2+dfsg-9
ii  libqt5dbus5                                              5.15.2+dfsg-9
ii  libqt5gui5                                               5.15.2+dfsg-9
ii  libqt5network5                                           5.15.2+dfsg-9
ii  libqt5sql5                                               5.15.2+dfsg-9
ii  libqt5widgets5                                           5.15.2+dfsg-9
ii  libqt5xml5                                               5.15.2+dfsg-9
ii  libstdc++6                                               10.2.1-6

akonadi-server recommends no packages.

Versions of packages akonadi-server suggests:
ii  akonadi-backend-mysql   4:20.08.3-3
pn  akonadi-backend-postgresql  
pn  akonadi-backend-sqlite  

-- no debconf information



Bug#892308: Same problem on buster, kmail 5.9.3

2020-05-22 Thread Josep Guerrero
Hi,

I've found the same bug on two different Debian buster systems running kmail 
5.9.3, both of them upgraded from stretch, jessie, etc. In both of them a 
small square, similar to the screenshots provided by the original poster and 
showing what looks like a piece of some other window (web browser, terminal, 
etc.), appears at the bottom left of the list of messages. This square behaves 
as if it was part of the kmail window, moving with it and being repainted if 
covered by another window. The square seems to have no effect, other than the 
cosmetic.

Regards,

Josep



Bug#934218: Sort of fully working workaround

2019-08-14 Thread Josep Guerrero
Hi again,

Finally I've managed to install a fully working grub on the system. The grub 
installation process from the installer still fails, with the output attached 
to the previous message, but I found a way to install a fully functional grub 
in the system following these steps:

1) When the grub installation fails, I accepted installing grub on external 
removable media (this, I think, makes the USB stick a grub-bootable device, 
but it's no longer functional as a bootable installer device. Since I can 
overwrite it again, it's not a big problem).

2) When booting from this new grub USB device, I only get to the grub prompt. 
I need to issue the following commands (with the right parameter values):

root=(lvm/dev/mapper/root-lvm-device)
linux /boot/vmlinuz-image-file root=/dev/mapper/root-lvm-device quiet
initrd /boot/initrd-file
boot

to boot the newly installed system.

3) First thing I did, after booting the system, was:

grub-install /dev/nvme0n1

which seemed to work and provided a new UEFI boot entry (which didn't require 
an external USB device), but still took me only to the grub prompt when 
booting the system.

4) After some googling and reading, I edited the file

/etc/default/grub

to include the line:

GRUB_DISABLE_OS_PROBER=false

and executed:

update-grub2

on the booted system. This finally provided a fully functional grub 
installation (it may be that adding the line in /etc/default/grub is not 
necessary, I didn't check if it worked without that).

Finally, I commented out the added line in /etc/default/grub.
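The /etc/default/grub edit from step 4 can be rehearsed on a scratch copy of 
the file (the sed invocation is my own sketch; the report edited the file by 
hand and then ran update-grub2):

```shell
# Flip GRUB_DISABLE_OS_PROBER in a scratch copy of /etc/default/grub.
demo=/tmp/grub-default-demo
printf 'GRUB_DISABLE_OS_PROBER=true\n' > "$demo"
sed -i 's/^GRUB_DISABLE_OS_PROBER=.*/GRUB_DISABLE_OS_PROBER=false/' "$demo"
cat "$demo"
# On the real file, follow the edit with: update-grub2
```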

I think that, in this particular case and for some reason, grub installation 
doesn't work when done from the installer (maybe I did something wrong?), but 
works if done from the newly installed system. Of course, you need to find a 
way to boot that system first.

Hope this information helps with the problem!

Josep Guerrero



Bug#934218: cdrom: grub-install fails with "failed to get%0A canonical path of /dev/nvme0np1"

2019-08-10 Thread Josep Guerrero
Hi, Steve

Thanks a lot for your answer! I'll try to answer your questions and I'll add 
some new information on the problem.

> When you tried to run grub-install by hand, did you have /dev and
> /sys mounted ok in the chroot?

I'm not sure; I admit I didn't check that. Just after I received the grub 
error in the installer (the text one, just in case it's important), I switched 
to a second console with Alt-F2 and executed:

tail /var/log/syslog

and I saw a line about executing 

chroot /target grub-install --force "dummy"
 
and after that, the error I mentioned in my previous message:

grub-install: error: failed to get canonical path of `/dev/nvme0n1p1`

I did not mount or umount anything by myself, so the chrooted /target was in 
the state the debian installer had left it after failing with the grub 
installation.
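For what it's worth, the usual preparation before running grub-install inside 
/target is to bind-mount the host's /dev, /sys and /proc into the chroot. A 
sketch (the commands are only echoed here so it runs without root; on the 
installer console you would run the printed lines themselves):

```shell
# Print the bind-mount commands normally needed before chrooting into
# /target, plus the grub-install invocation from the syslog above.
for d in dev sys proc; do
  echo "mount --bind /$d /target/$d"
done
echo 'chroot /target grub-install --force "dummy"'
```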

> If you could try again and add a "-v"
> to the grub-install command line that will give us more
> information. It *will* be verbose, but only the last few lines are
> likely to matter.

I'll do that (sorry, but I won't have physical access to that system till 
Monday).

> You don't describe the system hardware that you're working with. What
> do you have? This sounds like *potentially* a firmware issue, but it's
> not 100% clear yet.

Sorry, you are completely right. I'm attaching the lshw output for this 
machine.

I've been trying more ideas with this system, and I contacted the seller for 
advice (they suggested changing some BIOS setup options, some of them related 
to firmware, most of them related to UEFI settings).

After changing those options (I'm not completely sure if they had any effect, 
but maybe) I reinstalled again with a simpler configuration (just to be 
faster), without RAID or LVM, and just used standard partitions on the NVMe 
disk (all the system partitions were on that disk). This time it failed again 
on the grub installation, but there wasn't an explicit error message in the 
syslog (just that it failed with status 1), and when I tried the grub 
installation manually on a second console using:

chroot /target grub-install "dummy"

the error message was that grub-install was unable to find the

/usr/lib/grub/i386-pc/modinfo.sh

file. At that point, I tried continuing the installation without the boot 
loader (didn't try that before), and the installer asked if I wanted to 
install grub in a (I think, citing from memory) "external removable media". I 
chose "Yes" and when I rebooted, among the options there was a new UEFI one 
but the installer USB UEFI options were missing. Choosing the new option 
brought me to the grub prompt, which was an improvement, but didn't boot the 
system.

After issuing some grub commands (again citing from memory; I don't remember 
the exact parameters, but they were roughly):

root (hdX,Y)
linux /boot/vmlinux-X root=/dev/nvme0n1p2
initrd /boot/initrd-X
boot

the system booted, as far as I can tell, correctly. I executed 

grub-install /dev/nvme0n1

on the newly booted system, which ran without errors, removed the installer 
USB and again got a new UEFI option, but I still only got the grub prompt when 
rebooting (I could boot from there with the same set of instructions as 
before). I was planning to find out whether I could get grub to execute those 
instructions by itself, so the system can boot normally, and write a follow-up 
to the bug report, but I just saw your message.

Sorry about the length of the message. Hope some of this information is 
useful. 

Regards,

Josep Guerrero

kilimanjaro
description: Computer
product: AS -1013S-MTR (To be filled by O.E.M.)
vendor: Supermicro
version: 0123456789
serial: A324200X9405087
width: 64 bits
capabilities: smbios-3.1.1 dmi-3.1.1 smp vsyscall32
configuration: boot=normal family=To be filled by O.E.M. sku=To be filled 
by O.E.M. uuid=----AC1F6B7CCF82
  *-core
   description: Motherboard
   product: H11SSL-i
   vendor: Supermicro
   physical id: 0
   version: 1.01
   serial: ZM18CS600800
   slot: To be filled by O.E.M.
 *-firmware
  description: BIOS
  vendor: American Megatrends Inc.
  physical id: 0
  version: 1.1
  date: 02/15/2019
  size: 64KiB
  capacity: 16MiB
  capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd 
int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int14serial 
int17printer acpi usb biosbootspecification uefi
 *-memory
  description: System Memory
  physical id: 21
  slot: System board or motherboard
  size: 32GiB
  capacity: 2TiB
  capabilities: ecc
  configuration: errordetection=multi-bit-ecc
*-bank:0
 description: DIMM DDR4 Synchronous Registered (Buffered) 2667 MHz 
(0.4 ns)
 product: 9ASF1G72PZ-2G6D1
 vendor: Micron Technology
 physical id: 0
 serial: 

Bug#934218: cdrom: grub-install fails with "failed to get canonical path of /dev/nvme0np1"

2019-08-08 Thread Josep Guerrero Sole
Package: cdrom
Severity: important
Tags: d-i



-- System Information:
Debian Release: 9.9
  APT prefers oldstable
  APT policy: (500, 'oldstable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.9.0-9-amd64 (SMP w/8 CPU cores)
Locale: LANG=ca_ES.UTF-8, LC_CTYPE=ca_ES.UTF-8 (charmap=UTF-8), 
LANGUAGE=ca:en_US:es:en_GB (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Since the problem prevents me from installing the system, I'm writing the bug 
report from another system, so the automatically gathered data above is not 
correct.

The correct values (those that I know) are:

Debian Release: 10.0
Architecture: amd64 (x86_64)
Kernel: Linux 4.19.0-5-amd64

I'm trying to install Debian Buster on a system with 1 500GB NVMe disk 
(intended to be the system and boot disk) and 4 12TB disks intended to store 
data.

When partitioning the NVMe disk, I create a 1GB ESP (EFI System Partition) and 
a 1GB BIOS boot (grub_bios) partition. The rest of the disk is configured as a 
RAID1 partition (it 
will only have one component, but I may be able to add a second disk later, so 
I prefer configuring it as a RAID from the beginning). The RAID partition is 
then configured as an LVM physical volume which ends as the only physical 
volume of a LVM volume group, which is further partitioned into several logical 
volumes to be mounted on some system directories (/, /tmp, /usr, /var, swap, 
...).

The 4 12TB disks are partitioned with just one Linux raid partition, and the 4 
of them are configured as a RAID6 device, that again is configured as an LVM 
physical volume which ends as the only component of another volume group, with 
just one logical volume.

The whole installation seems to work flawlessly, but when installing the grub 
boot loader, I get the error:

grub-install dummy failed

In the syslog, I can find the message

grub-install: error: failed to get canonical path of `/dev/nvme0n1p1`

Executing manually:

chroot /target grub-install --force "dummy"

results in exactly the same message. As a result, I am unable to install the 
system.

This bug seems to have appeared in some Ubuntu forums, and there is a sort of 
workaround in this comment:

https://bugs.launchpad.net/ubuntu/+source/grub-installer/+bug/1507505/comments/25

but I would prefer to avoid creating the ESP partition on the 12TB disks (I 
would have to do that on all the disks to keep the raid partitions the same 
size).

Regards,

Josep Guerrero



Bug#909310: [SOLVED] Fujitsu Celsius W580 Power+n, UEFI mode, install screen unreadable

2018-10-18 Thread Josep Guerrero Sole
Dear all,

I can confirm that Peter's tip above does work. Disabling the "above 4GB" 
option in BIOS-> Advanced->PCI Settings solved this problem and I was able to 
install Debian Stretch.

In my case, I used a USB stick as install medium and a USB-to-RJ45 converter 
for network installation (since the integrated network card is unsupported in 
kernel 4.9). Once finished, I installed the backport kernel (4.18 at this 
moment) and removed the USB-to-RJ45 converter. The computer is working 
perfectly.

Thanks!

Josep Guerrero



Bug#909310: Update on this problem

2018-10-16 Thread Josep Guerrero Sole
Still haven't found a solution or a workaround. What I've found till now is:

* All daily testing Debian installers up to October 15th (as well as the 
Debian 8, Debian 9 and Debian buster alpha 3 installers) produce exactly the 
same result.

* Adding the option fb=false to the kernel options results in the ten 
miniscreens (see image in previous message) just not appearing: I'm left only 
with the grub background.

* All other option changes I've tried, either in grub, kernel or BIOS didn't 
have any effect. For grub this includes: set gfxpayload=keep, set 
gfxmode=1024x768 (several values) and terminal_output gfxterm. For the kernel: 
vga=normal, vga=several modes, video=several modes, text, nofb, nomodeset, 
intel_pstate=no_hwp, intel_pstate=disable, intel_idle.cmax_cstate=0, 
i915.modeset=0 and a few more. For the Fujitsu Celsius w580power+ BIOS, I've 
tried changing most of the options for Graphics, tried reducing the number of 
active CPU cores, disabled hyperthreading, and changed lots of boot related 
options.

* Changing the graphics card, the output port of the graphics card, or the 
monitor (different resolution and aspect) didn't have any effect either.

* Installing Debian on another computer and moving the installed disk to the 
Fujitsu results in a blank screen on boot. I suspect the system dropped to the 
initrd shell when attempting to boot, since I couldn't get a network 
connection but there was some response to the keyboard.

* Linux Mint 18 fails too in this computer (blank screen instead of ten 
miniscreens). Mint version 19 does work.

* Ubuntu 18.04, Server and Live/Desktop, work. Ubuntu 16.04 doesn't (blank 
screen). Ubuntu 18.04 network installer has exactly the same problem as Debian 
(not surprising, since it seems to use some variation of the Debian Installer 
too).

Hope some of this gives some insight into the problem.



Bug#909310: d-i.debian.org: Fujitsu Celsius W580 Power+n, UEFI mode, install screen unreadable

2018-09-21 Thread Josep Guerrero Sole
Package: d-i.debian.org
Severity: important
Tags: d-i



-- System Information:
Debian Release: 9.5
  APT prefers stable
  APT policy: (500, 'stable')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 4.9.0-8-amd64 (SMP w/8 CPU cores)
Locale: LANG=ca_ES.UTF-8, LC_CTYPE=ca_ES.UTF-8 (charmap=UTF-8), 
LANGUAGE=ca:en_US:es:en_GB (charmap=UTF-8)
Shell: /bin/sh linked to /bin/dash
Init: systemd (via /run/systemd/system)

Note: The above system information *does not* refer to the system where I 
found the problem, since I haven't been able to install or run any Debian live 
version on that system, and so haven't been able to execute reportbug on it. 
If asked, I'll send a list of the hardware in the Fujitsu Celsius W580 Power+n 
workstation.

--

Problem: When running the Debian installer (tried jessie, stretch, 
buster-alpha-3 and testing) in UEFI mode and reaching the first screen, where 
I must choose between graphical and text install (and other options), any 
option I choose results in the new screen appearing on the upper part of the 
monitor, repeated ten times horizontally and shrunk so that text lines are 
around a couple of monitor pixels tall (the whole band of repeated screens may 
be around 60 pixels tall). The rest of the screen still displays the grub 
background. With the graphical install, the shrunk repeated screen shows a 
black screen (some white lines, which may be text, appear before it goes 
black); with the text install, the new screen does look like the text 
installation screen and responds as expected to the keyboard, but the letters 
are so small (around two pixels tall) that I cannot read the options. It seems 
to respond correctly to keyboard input but, of course, I cannot see what I'm 
doing.

Changing the graphics card, the monitor and/or graphics-related BIOS options 
has had little effect so far (some combinations removed the band of shrunk 
screens, leaving just the grub background). I cannot try a legacy BIOS 
installation, since that particular BIOS (Aptio Setup Utility, AMI, 
v2.20.1271) only allows UEFI, and the Fujitsu website shows no updates for 
that BIOS.

The Ubuntu 18.04 installer did work (both for the server and desktop editions), 
but trying to copy 
the Ubuntu grub options and edit the Debian grub options accordingly didn't 
have any effect. I've
tried changing the kernel and grub options too, following some googled reports 
about UEFI installation
problems, with no success at all.



Bug#855447: Problem sort of solved

2017-11-30 Thread Josep Guerrero Sole
Dear Maintainer,

I've discovered that I was suffering from two different problems, and I've 
found workarounds for both of them.

The problem with greyed-out folders occurred because the subwindow with the 
list of files opens with a filter that hides all non-.nbk files. For some 
reason, folders without the .nbk extension are greyed out instead of being 
shown normally, but if the filter is reset to all files, *and the folder is 
reloaded*, then they appear normally and notebooks can be loaded normally too. 
I still think greying out non-.nbk folders is a bug, but at least it's an easy 
one to circumvent.

The reason I was not able to load a notebook from the command line interface 
(or other interfaces) was that I did the tests with a notebook I had copied 
from another machine that used a different character set and, for some reason, 
keepnote was unable to load files whose names contained non-standard 
characters (accents and such) in that character set. I could solve the problem 
just by renaming the files to remove the non-standard characters.
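The rename workaround can be rehearsed in a scratch directory. The file name 
below is invented, and stripping non-ASCII bytes with sed is my own guess at 
"removing non-standard characters"; the report does not say how the renaming 
was done:

```shell
# Strip non-ASCII bytes from .nbk file names in a scratch directory.
mkdir -p /tmp/nbk-demo
touch '/tmp/nbk-demo/memòria.nbk'
for f in /tmp/nbk-demo/*.nbk; do
  clean=$(printf '%s' "$f" | LC_ALL=C sed 's/[^ -~]//g')
  [ "$f" = "$clean" ] || mv "$f" "$clean"
done
ls /tmp/nbk-demo
```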



Bug#855447: keepnote: Open Notebook dialog content greyed out

2017-11-23 Thread Josep Guerrero Sole
Dear Maintainer,

I can confirm this bug in the keepnote package for Debian Stretch 9.1 
(keepnote 0.7.8-1.1). I've found that the same bug appears too if using the 
source package from http://keepnote.org. (maybe the cause is some change in 
python?).

As the original report mentions, only folders on the left side of the dialog 
box can be opened, and those folders' contents are always greyed out. No error 
appears in the graphical interface nor in the shell session where I started 
keepnote. Attempting to open a notebook by passing it as a CLI argument to 
keepnote produces the error:

"Could not load notebook notebook-name"

I can create new folders from the dialog box, create new notebooks, add 
content to them and save them. But if I close them and try to open them again, 
the folder appears greyed out and cannot be selected. I have not been able to 
find a workaround that allows me to open previous notebooks.



Bug#646892: grub-pc: Grub2 installation fails in two letter drives

2011-10-28 Thread Josep Guerrero
Package: grub-pc
Version: 1.98+20100804-14+squeeze1
Severity: important
Tags: squeeze

This computer (Sun Thumper X4500) can only boot from two of its 48 disks, which 
are identified as

/dev/sdy

and

/dev/sdac

Both disks are identical and have been partitioned in exactly the same way. 
Those partitions have then been made into several software RAID 1 devices 
(md0 through md5). The root partition is md0. These are the /proc/mdstat 
contents for that RAID device:

===
md0 : active raid1 sdac1[0] sdy1[1]
  19534912 blocks [2/2] [UU]
===

If I try to install grub on the /dev/sdy disk I get some errors (I don't know 
if they are important or what they mean), but apparently the process works:

===
kilimanjaro:~# grub-install /dev/sdy
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
Installation finished. No error reported
===

If I try the same with /dev/sdac, it doesn't work:

===
kilimanjaro:~# grub-install /dev/sdac
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
error: found two disks with the number 21.
error: superfluous RAID member (14 found).
/usr/sbin/grub-setup: warn: Attempting to install GRUB to a partitionless disk. 
 This is a BAD idea..
/usr/sbin/grub-setup: error: embedding is not possible, but this is required 
when the root device is on a RAID array or LVM volume.


But /dev/sdac and /dev/sdy are both partitioned, and exactly in the same way:


kilimanjaro:~# fdisk -l /dev/sdy

Disk /dev/sdy: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00055a02

   Device Boot      Start       End      Blocks   Id  System
/dev/sdy1               1      2432    19535008+  fd  Linux raid autodetect
/dev/sdy2            2433     60801   468848992+   5  Extended
/dev/sdy5            2433      4864    19535008+  fd  Linux raid autodetect
/dev/sdy6            4865      7296    19535008+  fd  Linux raid autodetect
/dev/sdy7            7297      8269     7815591   fd  Linux raid autodetect
/dev/sdy8            8270     15564    58597056   fd  Linux raid autodetect
/dev/sdy9           15565     60801   363366171   fd  Linux raid autodetect
kilimanjaro:~# fdisk -l /dev/sdac

Disk /dev/sdac: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00050f0c

   Device Boot      Start       End      Blocks   Id  System
/dev/sdac1              1      2432    19535008+  fd  Linux raid autodetect
/dev/sdac2           2433     60801   468848992+   5  Extended
/dev/sdac5           2433      4864    19535008+  fd  Linux raid autodetect
/dev/sdac6           4865      7296    19535008+  fd  Linux raid autodetect
/dev/sdac7           7297      8269     7815591   fd  Linux raid autodetect
/dev/sdac8           8270     15564    58597056   fd  Linux raid autodetect
/dev/sdac9          15565     60801   363366171   fd  Linux raid autodetect
kilimanjaro:~#
===

I suspect that grub-setup, as called by grub-install, is trying to install on 
another disk (maybe /dev/sda, since it may have chopped the last letter of the 
drive name, and the error would make sense since /dev/sda has no partitions 
and is in fact part of a RAID 5 device). But I have no idea how to test for 
this without breaking anything.
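The suspected failure mode can at least be illustrated: dropping the last 
letter of a two-letter drive name turns /dev/sdac into /dev/sda, a different 
disk that here has no partition table.

```shell
# Truncating the last character of the two-letter drive name.
dev=/dev/sdac
echo "${dev%?}"
```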

Thanks for your help and attention.


-- Package-specific info:

*** BEGIN /proc/mounts
/dev/disk/by-uuid/90ec83ed-b197-440e-9e50-d78b0a65ea95 / ext3 
rw,relatime,errors=remount-ro,data=ordered 0 0
/dev/md5 /home/kilimanjaro ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/md1 /usr ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/md4 /usr/local ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/md2 /var ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/md10 /mnt/3TB ext3 rw,relatime,errors=continue,data=ordered 0 0
/dev/md21 /home/kilimanjaro1 ext3 rw,relatime,errors=continue,data=ordered 0 0