[Kernel-packages] [Bug 1639500] Re: Snapshot the system zpool from within the initramfs

2016-11-09 Thread Sam VdE
Small fixes done.

** Attachment removed: "script for /etc/initramfs-tools/scripts/local-bottom"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1639500/+attachment/4773061/+files/zfs-boot-snap

** Attachment removed: "zfs-systemd-snap"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1639500/+attachment/4773687/+files/zfs-systemd-snap

** Attachment added: "zfs-boot-snap"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1639500/+attachment/4774847/+files/zfs-boot-snap

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1639500

Title:
  Snapshot the system zpool from within the initramfs

Status in zfs-linux package in Ubuntu:
  New

Bug description:
  Idea/enhancement request.

  I wrote a small initramfs script to snapshot the zpool when booting
  off zfs in Ubuntu. As the pool is not actively used at that moment, it
  is in the perfect consistent state to take snapshots. Perhaps this is
  functionality more people are interested in?

  To use it, you do two things:
  - put the script in /etc/initramfs-tools/scripts/local-bottom and update the initrd (update-initramfs -ck all)
  - add a boot parameter of the form ZFSSNAP=xx in /etc/default/grub and run update-grub; xx has to be an integer

  As I did not want to tamper with the initrd too much, the script does
  not require any additional tools in the initrd image. It does,
  however, follow the zfs-auto-snapshot naming syntax, using "boot" as
  the identifier instead of hourly/daily/weekly/monthly/yearly.

  As mentioned above, you trigger the script by adding ZFSSNAP=xx as a
  grub parameter, where xx is an integer. The script will keep the last
  xx days that contain valid "boot" snapshots and delete older ones.
  The number of snapshots on a single day is irrelevant.

  The script goes in /etc/initramfs-tools/scripts/local-bottom, so it
  runs right after the root zpool is imported. Snapshot queries are done
  only on the top level of the pool ("-d 1") to avoid slowing down the
  boot process too much, although this means orphans are possible if the
  cleanup operation is interrupted (by a system reset, for example).
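  The kernel-command-line handling such a local-bottom script needs can
  be sketched roughly as follows. This is a minimal sketch with
  illustrative function and variable names, not code from the attached
  zfs-boot-snap script, which remains the authoritative version:

```shell
# Hypothetical sketch: parse ZFSSNAP=xx and the root pool from the
# kernel command line, then build a zfs-auto-snapshot-style name.

parse_cmdline_value() {
    # $1: key, $2: full kernel command line (normally $(cat /proc/cmdline))
    key=$1
    cmdline=$2
    for arg in $cmdline; do
        case "$arg" in
            "${key}"=*) printf '%s\n' "${arg#*=}" ;;
        esac
    done
}

CMDLINE="root=ZFS=rpool/ROOT/ubuntu ro quiet ZFSSNAP=4"   # illustrative

KEEP_DAYS=$(parse_cmdline_value ZFSSNAP "$CMDLINE")        # 4
ROOTSPEC=$(parse_cmdline_value root "$CMDLINE")            # ZFS=rpool/ROOT/ubuntu
RPOOL=${ROOTSPEC#ZFS=}; RPOOL=${RPOOL%%/*}                 # rpool

SNAPNAME="zfs-auto-snap_boot-$(date +%Y-%m-%d-%H%M)"
echo "pool=$RPOOL keep=$KEEP_DAYS snapshot=$SNAPNAME"
# The real script would then do, in effect:
#   zfs snapshot -r "${RPOOL}@${SNAPNAME}"
# and prune "boot" snapshots older than KEEP_DAYS distinct days,
# querying only the pool's top level ("zfs list -d 1 ...").
```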

  If this kind of functionality were supported by the systemd import
  script, other zpools on the system could be snapshotted at import time
  as well. I considered temporarily importing my other zpools in the
  initramfs stage just to snapshot them, but decided to keep it all
  simple and refrain from that. Only a zpool that is defined on the
  kernel command line for the rootfs (root=ZFS=zpool/rootfs) will be
  taken into account.

  So basically, if you add ZFSSNAP=4 to grub and put the script in the
  initrd image, you will end up with snapshots of the form 'dataset@zfs-
  auto-snap_boot-YYYY-MM-DD-HHMM' for MYPOOL and all descendant datasets
  on MYPOOL that don't have com.sun:auto-snapshot set to false. You will
  find these snapshots for every time you have rebooted your machine,
  but only for the last 4 days on which (re)boots occurred.

  Disclaimer: I am not a developer so the script might not be a piece of
  art. It does the job though.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1639500/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1639500] Re: Snapshot the system zpool from within the initramfs

2016-11-09 Thread Sam VdE
Small fixes done.

** Attachment added: "zfs-systemd-snap"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1639500/+attachment/4774848/+files/zfs-systemd-snap



[Kernel-packages] [Bug 1639500] Re: Snapshot the system zpool from within the initramfs

2016-11-06 Thread Sam VdE
** Attachment added: "zfs-mount.service"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1639500/+attachment/4773689/+files/zfs-mount.service



[Kernel-packages] [Bug 1639500] Re: Snapshot the system zpool from within the initramfs

2016-11-06 Thread Sam VdE
For the additional pools one might have, I adapted the script so that
it snapshots those as well.

To trigger the script I added a new service, zfs-import-scan.service,
and adapted the existing zfs-mount.service to include it. This way
additional zpools are snapshotted right after being imported, and
before mounting takes place. The script doing the work is called
zfs-systemd-snap.

I keep both the initramfs and systemd solutions in place on my systems,
because the rootfs has already been pivoted by the time systemd runs,
and the whole goal of this is to snapshot zpools before they are
written to. I did make sure not to snapshot a zpool twice.

To test the additional systemd stuff:
- add zfs-systemd-snap to /etc/systemd
- add zfs-import-scan.service to /etc/systemd/system and run "systemctl enable zfs-import-scan.service"
- add zfs-mount.service to /etc/systemd/system and run "systemctl enable zfs-mount.service"

That's basically it: it runs based on the same GRUB parameter
mentioned before and behaves the same way.
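For reference, an ordering unit of this shape could look roughly like
the following. This is a hypothetical sketch only; the unit and script
names are taken from the comment above, but the actual attached unit
files are authoritative and may wire the dependencies differently:

```ini
# Hypothetical sketch of /etc/systemd/system/zfs-import-scan.service
[Unit]
Description=Import and snapshot remaining ZFS pools before mounting
DefaultDependencies=no
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/systemd/zfs-systemd-snap

[Install]
WantedBy=zfs-mount.service
```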

Disclaimer: when executing the steps above you are effectively
overriding the default zfs-mount.service unit shipped by the Ubuntu
zfs packages. If the unit file gets updated in the future, the update
will no longer be reflected on your system.



[Kernel-packages] [Bug 1639500] Re: Snapshot the system zpool from within the initramfs

2016-11-06 Thread Sam VdE
** Attachment added: "zfs-import-snap.service"
   
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1639500/+attachment/4773688/+files/zfs-import-snap.service



[Kernel-packages] [Bug 1571241] Re: ZFS initrd script does not import zpool using /dev/disk/by-id device paths

2016-05-08 Thread Sam VdE
No errors encountered using the PPA.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1571241

Title:
  ZFS initrd script does not import zpool using /dev/disk/by-id device
  paths

Status in zfs-linux package in Ubuntu:
  Confirmed

Bug description:
  Ubuntu 16.04 includes initrd zfs support, but the provided script does
  not allow zpools to be imported using the /dev/disk/by-id paths. As a
  result, the pool will be imported using "/dev/sdX" device names, which
  is not the preferred way.

  Tested and validated solution/proof of concept:

   - extract the system-generated initrd
   - edit the scripts/zfs file: replace "zpool import -o readonly=on -N" with "zpool import -o readonly=on -d /dev/disk/by-id -N"
   - manually generate the initrd using the altered script (using cpio)
   - replace the system-generated initrd with the altered one
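  The one-line change the steps above describe amounts to the following
  substitution, demonstrated here on the import line itself (the pool
  variable name is illustrative; only the added "-d /dev/disk/by-id"
  matters, and the surrounding unpack/repack with cpio is as described
  above):

```shell
# Demonstrate the proposed one-line edit to scripts/zfs inside the initrd.
orig='zpool import -o readonly=on -N "${ZFS_RPOOL}"'
patched=$(printf '%s\n' "$orig" |
    sed 's|import -o readonly=on -N|import -o readonly=on -d /dev/disk/by-id -N|')
echo "$patched"
# In practice the edit is applied in the extracted initrd tree, e.g.:
#   sed -i 's|readonly=on -N|readonly=on -d /dev/disk/by-id -N|' scripts/zfs
# before repacking with cpio and gzip.
```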

  I'd suggest adding a variable to /etc/default/zfs that would be
  reflected in the initrd zfs script, but perhaps the ZoL folks are
  better suited to give advice on this.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1571241/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1577057] Re: zfs initrd script fails when rootdelay boot option is set

2016-05-08 Thread Sam VdE
No errors encountered using the PPA.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to zfs-linux in Ubuntu.
https://bugs.launchpad.net/bugs/1577057

Title:
  zfs initrd script fails when rootdelay boot option is set

Status in zfs-linux package in Ubuntu:
  In Progress

Bug description:
  It looks like, when booting off zfs (zfs holds /boot) with the
  rootdelay boot option set, the boot process fails in the initrd phase,
  asking you to manually import the pool using zpool import -f -R / -N.
  I only had one system with that parameter set, which I seldom reboot.

  I did not find an upstream reference of this bug or behavior.

  The error is caused by the fact that the pool is already imported:
  "zpool status" executed at the initramfs prompt will correctly list
  the pool and all devices online. To continue, one has to export the
  pool, re-import it and exit the initramfs prompt, after which regular
  booting continues. Not exporting and re-importing it leaves the pool
  read-only, leading to boot errors further down the road (systemd units
  failing).

  I noticed zfs_autoimport_disable is set to 1 in the initramfs
  environment, so looking at /usr/share/initramfs-tools/scripts/zfs,
  this section might be the issue: zpool import succeeds, but
  $ZFS_HEALTH never comes back with the correct status (I'm not a
  programmer, but perhaps ZFS_HEALTH is a local variable in the
  zfs_test_import function):

   delay=${ROOTDELAY:-0}

   if [ "$delay" -gt 0 ]
   then
    # Try to import the pool read-only.  If it does not import with
    # the ONLINE status, wait and try again.  The pool could be
    # DEGRADED because a drive is really missing, or it might just
    # be slow to be detected.
    zfs_test_import
    retry_nr=0
    while [ "$retry_nr" -lt "$delay" ] && [ "$ZFS_HEALTH" != "ONLINE" ]
    do
     [ "$quiet" != "y" ] && log_begin_msg "Retrying ZFS read-only import"
     /bin/sleep 1
     zfs_test_import
     retry_nr=$(( $retry_nr + 1 ))
     [ "$quiet" != "y" ] && log_end_msg
    done
    unset retry_nr
    unset ZFS_HEALTH
   fi
   unset delay

  
  Edit: to be clear: I removed the rootdelay parameter, regenerated the
  initrd, and was able to boot successfully afterwards.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/zfs-linux/+bug/1577057/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1577057] Re: zfs initrd script fails when rootdelay boot option is set

2016-05-04 Thread Sam VdE
Here are the test results:

- no datasets mounted when it breaks
- a delay of 1 second is not enough: breaks on the first try
- retested with a 5 second delay: OK
- retested with the new zfs script: not OK, again the "pool is busy" error

I had a quick look at the script and found the remaining problem: the
command ZFS_STDERR=$(zpool export "$ZFS_RPOOL" >/dev/null) will not
capture stderr messages. I adapted this to ZFS_STDERR=$(zpool export
"$ZFS_RPOOL" 2>&1 >/dev/null) and was able to boot correctly using
rootdelay=10. So I think that solves it.
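The redirection-order fix can be seen in isolation with any command
that writes to both streams. A generic sketch, not the zpool command
itself; the "noisy" function is a stand-in for a failing zpool export:

```shell
# A stand-in for a command printing to both streams, like a failing
# "zpool export" would.
noisy() {
    echo "exported"                          # stdout
    echo "cannot export: pool is busy" >&2   # stderr
}

# Original form: stdout is discarded before capture and stderr still
# goes to the terminal, so nothing is captured. (The extra 2>/dev/null
# is added here only to keep this demo quiet.)
BROKEN=$(noisy >/dev/null 2>/dev/null)

# Fixed form: "2>&1" first duplicates stderr onto the captured stdout,
# then ">/dev/null" discards the original stdout, so only stderr is
# captured. Redirections apply left to right.
FIXED=$(noisy 2>&1 >/dev/null)

echo "BROKEN='$BROKEN'"
echo "FIXED='$FIXED'"
```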



[Kernel-packages] [Bug 1222197] Re: Dell XPS 13 screen always on lowest dim with 3.8.0-30 kernel

2014-12-02 Thread Sam VdE
Dear

As far as I'm concerned it is ok to close this bug. I indeed upgraded to
14.04 in the meantime.


Kind regards
Sam

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1222197

Title:
  Dell XPS 13 screen always on lowest dim with 3.8.0-30 kernel

Status in linux package in Ubuntu:
  Incomplete

Bug description:
  Since installing the latest kernel 3.8.0-30 on my Dell XPS 13, I can
  only use the screen on lowest dim/brightness. Hardware keys as well as
  System Settings have no effect on the dim level.

  I already had issues with the brightness settings before. To make
  hardware keys and System Settings brightness settings work I added

  echo 0 > /sys/class/backlight/intel_backlight/brightness

  to /etc/rc.local and a custom /etc/pm/sleep.d/resume_brightness
  script.

  
  Things tried:
  - remove echo 0 > /sys/class/backlight/intel_backlight/brightness from the previously mentioned scripts
  - add grub boot options acpi_osi=Linux acpi_backlight=vendor acpi_osi='!Windows 2012'

  Workaround:

  - keep utilizing echo 0 > /sys/class/backlight/intel_backlight/brightness in both scripts
  - boot using the 3.8.0-29 kernel

  This is a regression.

  ProblemType: Bug
  DistroRelease: Ubuntu 13.04
  Package: linux-signed-image-3.8.0-30-generic 3.8.0-30.44
  ProcVersionSignature: Ubuntu 3.8.0-29.42-generic 3.8.13.5
  Uname: Linux 3.8.0-29-generic x86_64
  ApportVersion: 2.9.2-0ubuntu8.3
  Architecture: amd64
  Date: Sat Sep  7 18:09:08 2013
  InstallationDate: Installed on 2013-03-28 (162 days ago)
  InstallationMedia: Ubuntu 13.04 Raring Ringtail - Alpha amd64 (20130328)
  MarkForUpload: True
  SourcePackage: linux-signed
  UpgradeStatus: No upgrade log present (probably fresh install)

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1222197/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1234546] Re: Plymouth screen refresh is bad when Live USB is started in UEFI mode

2013-10-20 Thread Sam VdE
I am definitely booting in EFI, as my laptop does not have a
Bios-enabled bootloader installed.

Installation from USB will probably put you in Bios mode, though,
meaning you will have to fix the EFI afterwards.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1234546

Title:
  Plymouth screen refresh is bad when Live USB is started in UEFI mode

Status in “linux” package in Ubuntu:
  Confirmed

Bug description:
  Test computer : Dell XPS 13 L322x with Intel Graphics HD4000.

  I burnt a Live Ubuntu Gnome 13.10 USB, first with the Beta 1, and with
  the 2013-09-30 daily image. This USB key has been created with Rufus
  under Windows and is compliant for both UEFI and Bios mode.

  If I boot on the key in Bios mode, no problem, I see the first screen
  in text mode (Install, test only...). After having chosen test only,
  I see the Gnome footprint and everything runs.

  If I boot the key in UEFI mode (SecureBoot is disabled), I see the
  first screen. After having chosen test only, plymouth starts but the
  screen refresh frequency is bad, so I have many footprints on the
  screen, etc.

  Everything seems to run fine also, except this screen refresh
  frequency.

  ProblemType: Bug
  DistroRelease: Ubuntu 13.10
  Package: plymouth 0.8.8-0ubuntu8
  ProcVersionSignature: Ubuntu 3.11.0-9.16-generic 3.11.2
  Uname: Linux 3.11.0-9-generic x86_64
  ApportVersion: 2.12.5-0ubuntu1
  Architecture: amd64
  Date: Thu Oct  3 09:00:43 2013
  DefaultPlymouth: 
/lib/plymouth/themes/ubuntu-gnome-logo/ubuntu-gnome-logo.plymouth
  InstallationDate: Installed on 2013-09-22 (10 days ago)
  InstallationMedia: Ubuntu-GNOME 13.10 Saucy Salamander - Alpha amd64 
(20130903)
  MachineType: Dell Inc. Dell System XPS L322X
  MarkForUpload: True
  ProcCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.11.0-9-generic.efi.signed 
root=UUID=529ee009-cfab-4b00-832d-5390f382f988 ro quiet splash vt.handoff=7
  ProcFB: 0 inteldrmfb
  ProcKernelCmdLine: BOOT_IMAGE=/boot/vmlinuz-3.11.0-9-generic.efi.signed 
root=UUID=529ee009-cfab-4b00-832d-5390f382f988 ro quiet splash vt.handoff=7
  SourcePackage: plymouth
  TextPlymouth: 
/lib/plymouth/themes/ubuntu-gnome-text/ubuntu-gnome-text.plymouth
  UpgradeStatus: No upgrade log present (probably fresh install)
  dmi.bios.date: 05/15/2013
  dmi.bios.vendor: Dell Inc.
  dmi.bios.version: A09
  dmi.board.name: 0PJHXN
  dmi.board.vendor: Dell Inc.
  dmi.board.version: A00
  dmi.chassis.type: 8
  dmi.chassis.vendor: Dell Inc.
  dmi.chassis.version: 0.1
  dmi.modalias: 
dmi:bvnDellInc.:bvrA09:bd05/15/2013:svnDellInc.:pnDellSystemXPSL322X:pvr:rvnDellInc.:rn0PJHXN:rvrA00:cvnDellInc.:ct8:cvr0.1:
  dmi.product.name: Dell System XPS L322X
  dmi.sys.vendor: Dell Inc.

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1234546/+subscriptions

-- 
Mailing list: https://launchpad.net/~kernel-packages
Post to : kernel-packages@lists.launchpad.net
Unsubscribe : https://launchpad.net/~kernel-packages
More help   : https://help.launchpad.net/ListHelp


[Kernel-packages] [Bug 1234546] Re: Plymouth screen refresh is bad when Live USB is started in UEFI mode

2013-10-19 Thread Sam VdE
Hi Steve

Thank you for this workaround, I confirm this also works for me!

I _think_ this brings you into Bios mode by default when installing from USB?



[Kernel-packages] [Bug 1234546] Re: Plymouth screen refresh is bad when Live USB is started in UEFI mode

2013-10-18 Thread Sam VdE
I confirm the bug still exists in the latest mainline kernel,
3.12.0-031200rc5.

Can we please increase the priority on this? If I had followed the
13.04 dist-upgrade suggestion, I would have ended up with a non-
functional system. This is an Ubuntu-promoted machine!



[Kernel-packages] [Bug 1234546] Re: Plymouth screen refresh is bad when Live USB is started in UEFI mode

2013-10-18 Thread Sam VdE
** Changed in: linux (Ubuntu)
   Status: Incomplete => Confirmed



[Kernel-packages] [Bug 1234546] Re: Plymouth screen refresh is bad when Live USB is started in UEFI mode

2013-10-17 Thread Sam VdE
I confirm this bug and would like to extend it: it is not limited to
the installer Live USB. When using the Dell XPS 13 and booting via
UEFI, the screen goes crazy with lots of ghost images etc., as the OP
describes.

I unfortunately use EFI to boot several instances of Ubuntu, making
saucy unusable for me for the time being, which is unfortunate on a
Linux-branded machine.

I installed using BIOS and migrated to EFI afterwards.



[Kernel-packages] [Bug 1222197] [NEW] Dell XPS 13 screen always on lowest dim with 3.8.0-30 kernel

2013-09-07 Thread Sam VdE
Public bug reported:

Since installing the latest kernel, 3.8.0-30, on my Dell XPS 13, I can
only use the screen at the lowest dim/brightness level. Neither the
hardware keys nor System Settings have any effect on the dim level.

I already had issues with the brightness settings before. To make the
hardware keys and the System Settings brightness controls work I added

echo 0 > /sys/class/backlight/intel_backlight/brightness

to /etc/rc.local and a custom /etc/pm/sleep.d/resume_brightness script.


Things tried:
- remove "echo 0 > /sys/class/backlight/intel_backlight/brightness" from the
previously mentioned scripts
- add the grub boot options acpi_osi=Linux acpi_backlight=vendor
acpi_osi='!Windows 2012'

Workaround:

- keep using "echo 0 > /sys/class/backlight/intel_backlight/brightness" in
both scripts
- boot using the 3.8.0-29 kernel

This is a regression.
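The custom resume script itself is not attached to this report. For reference, a minimal sketch of what such a pm-utils hook might look like follows; the script name /etc/pm/sleep.d/resume_brightness comes from the report above, while the function name and the BL override are illustrative assumptions:

```shell
#!/bin/sh
# Hypothetical sketch of /etc/pm/sleep.d/resume_brightness (the actual
# script is not attached to this report). pm-utils invokes scripts in
# /etc/pm/sleep.d with the power event ("suspend", "resume", ...) as $1.
# BL may be overridden via the environment for testing; it defaults to
# the sysfs node used in the workaround above.
BL="${BL:-/sys/class/backlight/intel_backlight/brightness}"

restore_backlight() {
    case "$1" in
        resume|thaw)
            # Per the workaround above, writing 0 to this node restores
            # backlight control on this machine.
            [ -w "$BL" ] && echo 0 > "$BL"
            ;;
    esac
}

restore_backlight "$1"
```

The script must be executable (chmod +x) for pm-utils to run it on resume.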

ProblemType: Bug
DistroRelease: Ubuntu 13.04
Package: linux-signed-image-3.8.0-30-generic 3.8.0-30.44
ProcVersionSignature: Ubuntu 3.8.0-29.42-generic 3.8.13.5
Uname: Linux 3.8.0-29-generic x86_64
ApportVersion: 2.9.2-0ubuntu8.3
Architecture: amd64
Date: Sat Sep  7 18:09:08 2013
InstallationDate: Installed on 2013-03-28 (162 days ago)
InstallationMedia: Ubuntu 13.04 Raring Ringtail - Alpha amd64 (20130328)
MarkForUpload: True
SourcePackage: linux-signed
UpgradeStatus: No upgrade log present (probably fresh install)

** Affects: linux-signed (Ubuntu)
 Importance: Undecided
 Status: New


** Tags: amd64 apport-bug raring

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux-signed in Ubuntu.
https://bugs.launchpad.net/bugs/1222197


To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux-signed/+bug/1222197/+subscriptions
