Re: Tracker as a security risk

2016-12-05 Thread Michael Catanzaro
On Mon, 2016-12-05 at 21:31 +0100, Carlos Garnacho wrote:
> Thanks for the tip :), worth a look indeed, although I'm looking into
> using seccomp directly.

Strongly consider using libseccomp for this!
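For reference, here is roughly the plumbing that libseccomp hides behind its API: a raw seccomp-bpf sketch (Python with ctypes; x86-64 Linux assumed, no architecture check, and the choice of blocking socket(2) with EPERM is purely illustrative). libseccomp's seccomp_init()/seccomp_rule_add()/seccomp_load() generate an equivalent filter without the hand-written BPF:

```python
import ctypes
import errno
import os
import socket
import struct

# prctl / seccomp constants (from linux/prctl.h and linux/seccomp.h).
PR_SET_NO_NEW_PRIVS = 38
PR_SET_SECCOMP = 22
SECCOMP_MODE_FILTER = 2
SECCOMP_RET_ALLOW = 0x7FFF0000
SECCOMP_RET_ERRNO = 0x00050000
NR_SOCKET = 41  # x86-64 only; a real filter must check seccomp_data.arch

def insn(code, jt, jf, k):
    """One struct sock_filter: {u16 code; u8 jt; u8 jf; u32 k}."""
    return struct.pack("HBBI", code, jt, jf, k)

BPF_LD_W_ABS, BPF_JEQ_K, BPF_RET_K = 0x20, 0x15, 0x06
prog = b"".join([
    insn(BPF_LD_W_ABS, 0, 0, 0),               # A = seccomp_data.nr
    insn(BPF_JEQ_K, 0, 1, NR_SOCKET),          # nr == socket ?
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ERRNO | errno.EPERM),
    insn(BPF_RET_K, 0, 0, SECCOMP_RET_ALLOW),  # everything else: allow
])
filt = ctypes.create_string_buffer(prog)
# struct sock_fprog { u16 len; struct sock_filter *filter; }
fprog = ctypes.create_string_buffer(struct.pack("HL", 4, ctypes.addressof(filt)))

# Install the filter in a forked child so this process stays unconfined.
pid = os.fork()
if pid == 0:
    libc = ctypes.CDLL(None, use_errno=True)
    libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
    if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, fprog, 0, 0) != 0:
        os._exit(2)
    try:
        socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        os._exit(1)  # socket() unexpectedly allowed
    except OSError as e:
        os._exit(0 if e.errno == errno.EPERM else 1)
_, status = os.waitpid(pid, 0)
blocked = os.WEXITSTATUS(status) == 0
```

The point of libseccomp is exactly that you never write the BPF above by hand, and that the same rules work across architectures.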
___
desktop-devel-list mailing list
desktop-devel-list@gnome.org
https://mail.gnome.org/mailman/listinfo/desktop-devel-list


Re: Tracker as a security risk

2016-12-05 Thread Carlos Garnacho
Hey Philip :),

On Mon, Dec 5, 2016 at 6:28 PM, Philip Withnall  wrote:
> Hey,
>
> On Mon, 2016-12-05 at 16:42 +0100, Carlos Garnacho wrote:
>> On Mon, Dec 5, 2016 at 3:01 PM, Hanno Böck  wrote:
>> > On Mon, 5 Dec 2016 13:44:40 +
>> > Sam Thursfield  wrote:
>> >
>> > > The design of Tracker takes the risks into account. Metadata
>> > > extraction is isolated in its own process (tracker-extract) which
>> > > can
>> > > crash without (theoretically) causing any harm.
>> >
>> > I don't see how that helps against security vulnerabilities.
>> >
>> > Having an isolated process probably helps in a way that a crash
>> > won't
>> > cause the whole tracker service to malfunction. Thus parsing broken
>> > files won't cause a service disruption. But as long as this process
>> > runs with normal user rights this doesn't protect in a security
>> > sense.
>> >
>> > > > I think there needs to be a wider discussion about this and the
>> > > > fundamental design choices done here need to be questioned.
>> > >
>> > > What questions do you have in particular?
>> >
>> > Quite frankly, I don't claim to have all the answers here, that's
>> > why I
>> > formulated it in an open "needs discussion" way.
>> >
>> > I think sandboxing the tracker parser (which you already indicated
>> > in your mail) is probably the most reasonable way to go forward.
>> > This
>> > isn't exactly my area of expertise, so I can't comment on which
>> > technique here is most promising.
>>
>> It indeed sounds possible to lift extraction into a separate process
>> with limited access to the filesystem, we essentially need to pass an
>> fd to mmap() and an output one to receive sparql back. There's just
>> two things to consider:
>>
>> - The extraction process sometimes needs access to misc files (eg.
>> CUE
>> files, XMP sidecar files, ...), those might be passed along too, but
>> then we need detecting those cases beforehand.
>>
>> - Ideally we wouldn't spawn one process per file being extracted,
>> although if we go to defcon 1 level of paranoia, that's probably what
>> should happen.
>
> I would suggest a single sandboxed extraction process, which has read-
> only access to the whole of ~/, and write access to the Tracker
> database. No network access. That means that regardless of whether or

All changes to the database must be managed by a single thread, which
currently resides in tracker-store; update requests are received via
D-Bus. So we wouldn't need write access to the database itself, but we
would need a connection to the tracker-store process, direct or
indirect.
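A minimal sketch of that single-writer arrangement (Python, with an in-memory SQLite table standing in for the Tracker store and a queue standing in for the D-Bus request path; all names are hypothetical):

```python
import queue
import sqlite3
import threading

# One thread owns the store connection; everyone else enqueues updates.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE metadata (file TEXT, value TEXT)")
updates = queue.Queue()

def writer():
    # Sole writer: requests are applied strictly in arrival order.
    while True:
        item = updates.get()
        if item is None:  # shutdown sentinel
            break
        db.execute("INSERT INTO metadata VALUES (?, ?)", item)
        db.commit()

t = threading.Thread(target=writer)
t.start()

# "Clients" never touch the connection, only the queue.
for i in range(3):
    updates.put(("file%d" % i, "some metadata"))
updates.put(None)
t.join()

count = db.execute("SELECT COUNT(*) FROM metadata").fetchone()[0]
```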

> how the extraction process gets compromised, it cannot compromise the
> integrity of any of the files in your home directory (except the
> Tracker database, which I assume people aren’t too precious about), and
> it can’t compromise the confidentiality of any of your data (except by
> leaking it through the Tracker database — can we assume the database
> format is sufficiently prescribed to be able to prevent this?).

Well... it isn't. If you have read-only or read-write access to
~/.cache/tracker/meta.db, you have that access to all of it: an
attacker who can call sqlite3_open() directly will bypass any
artificial security restriction we might set (e.g.
https://www.sqlite.org/capi3ref.html#sqlite3_set_authorizer). Then
again, read-only access to the database seems moot if all of ~/ is
also read-only, and with write access all you would gain is the
ability to tamper with the database.
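The point that sqlite3_set_authorizer() is an in-process, per-connection restriction can be demonstrated with Python's sqlite3 bindings (a stand-in sketch; the table name and contents are made up):

```python
import os
import sqlite3
import tempfile

# Hypothetical metadata store standing in for ~/.cache/tracker/meta.db.
path = os.path.join(tempfile.mkdtemp(), "meta.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE metadata (file TEXT, value TEXT)")
db.execute("INSERT INTO metadata VALUES ('secret.odt', 'confidential')")
db.commit()

# An authorizer can deny reads -- but only on this connection.
def deny_reads(action, arg1, arg2, dbname, trigger):
    if action == sqlite3.SQLITE_READ:
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

db.set_authorizer(deny_reads)
try:
    db.execute("SELECT * FROM metadata")
    blocked = False
except sqlite3.Error:
    blocked = True

# Any process that can simply open the file gets a fresh, unrestricted
# connection: the "restriction" never lived in the database at all.
rows = sqlite3.connect(path).execute("SELECT * FROM metadata").fetchall()
```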

Read-only access to ~/ does concern me somewhat, although it might not
be usable for ransomware after all: if the targeted file can be
neither encrypted in place nor sent over the network, there seems to
be little to do with whatever knowledge is gained, as it cannot leave
the process except over the pipe feeding tracker-store.

>
> This should be easily accomplished by using an AppArmor (or
> equivalently, SELinux) profile for tracker-extract. seccomp-bpf could
> also be used to achieve much the same thing, regardless of whether an
> LSM is enabled.
>
> The Apertis project has such an AppArmor file already, and it would be
> great if that were pulled upstream:
>
> https://git.apertis.org/cgit/packaging/tracker.git/tree/debian/apparmor.d/usr.lib.tracker
>
> (I am not claiming this profile is perfect, but it’s a start.)

Thanks for the tip :), worth a look indeed, although I'm looking into
using seccomp directly.

>
>> Anyway, this goes IMHO too much on the technical side for this ML, we
>> already have https://bugzilla.gnome.org/show_bug.cgi?id=764786 filed
>> to Tracker, and it's already high in my list for fixing on 1.12, feel
>> free to join there.
>>
>> And I should add... Tracker is not alone here, if it's not Tracker
>> stumbling on infected content, with varying but still rather low
>> levels of interaction it may be a thumbnailer, a previewer like
>> sushi,
>> or the web browser itself streaming content which hit this. So
>> there's
>> more places in need of further isolation when dealing with
>> untrusted content.

Re: Tracker as a security risk

2016-12-05 Thread Philip Withnall
Hey,

On Mon, 2016-12-05 at 16:42 +0100, Carlos Garnacho wrote:
> On Mon, Dec 5, 2016 at 3:01 PM, Hanno Böck  wrote:
> > On Mon, 5 Dec 2016 13:44:40 +
> > Sam Thursfield  wrote:
> > 
> > > The design of Tracker takes the risks into account. Metadata
> > > extraction is isolated in its own process (tracker-extract) which
> > > can
> > > crash without (theoretically) causing any harm.
> > 
> > I don't see how that helps against security vulnerabilities.
> > 
> > Having an isolated process probably helps in a way that a crash
> > won't
> > cause the whole tracker service to malfunction. Thus parsing broken
> > files won't cause a service disruption. But as long as this process
> > runs with normal user rights this doesn't protect in a security
> > sense.
> > 
> > > > I think there needs to be a wider discussion about this and the
> > > > fundamental design choices done here need to be questioned.
> > > 
> > > What questions do you have in particular?
> > 
> > Quite frankly, I don't claim to have all the answers here, that's
> > why I
> > formulated it in an open "needs discussion" way.
> > 
> > I think sandboxing the tracker parser (which you already indicated
> > in your mail) is probably the most reasonable way to go forward.
> > This
> > isn't exactly my area of expertise, so I can't comment on which
> > technique here is most promising.
> 
> It indeed sounds possible to lift extraction into a separate process
> with limited access to the filesystem, we essentially need to pass an
> fd to mmap() and an output one to receive sparql back. There's just
> two things to consider:
> 
> - The extraction process sometimes needs access to misc files (eg.
> CUE
> files, XMP sidecar files, ...), those might be passed along too, but
> then we need detecting those cases beforehand.
> 
> - Ideally we wouldn't spawn one process per file being extracted,
> although if we go to defcon 1 level of paranoia, that's probably what
> should happen.

I would suggest a single sandboxed extraction process, which has read-
only access to the whole of ~/, and write access to the Tracker
database. No network access. That means that regardless of whether or
how the extraction process gets compromised, it cannot compromise the
integrity of any of the files in your home directory (except the
Tracker database, which I assume people aren’t too precious about), and
it can’t compromise the confidentiality of any of your data (except by
leaking it through the Tracker database — can we assume the database
format is sufficiently prescribed to be able to prevent this?).

This should be easily accomplished by using an AppArmor (or
equivalently, SELinux) profile for tracker-extract. seccomp-bpf could
also be used to achieve much the same thing, regardless of whether an
LSM is enabled.

The Apertis project has such an AppArmor file already, and it would be
great if that were pulled upstream:

https://git.apertis.org/cgit/packaging/tracker.git/tree/debian/apparmor.d/usr.lib.tracker

(I am not claiming this profile is perfect, but it’s a start.)

> Anyway, this goes IMHO too much on the technical side for this ML, we
> already have https://bugzilla.gnome.org/show_bug.cgi?id=764786 filed
> to Tracker, and it's already high in my list for fixing on 1.12, feel
> free to join there.
> 
> And I should add... Tracker is not alone here, if it's not Tracker
> stumbling on infected content, with varying but still rather low
> levels of interaction it may be a thumbnailer, a previewer like
> sushi,
> or the web browser itself streaming content which hit this. So
> there's
> more places in need of further isolation when dealing with untrusted
> content.
> 
> And still, the chain is only as strong as its weakest link, as soon
> as
> there is anything opening that file with wide enough permissions to
> cause any harm, you're essentially screwed. This might sound like an
> argument to running every app through flatpak, although I think the
> long term answer always is "fix the vulnerability!".

Agreed. Thumbnailers are another big target here.

> > The other issue I think is that the quality of huge parts of the
> > foss
> > ecosystem needs to be improved. The good news here is that we got
> > some
> > powerful tools in terms of fuzzing (afl, libfuzzer) and memory
> > safety
> > bug detection (asan) in the past years. Ideally all free software
> > devs
> > should be aware of those tools and use them in their development
> > process. I'm trying to help here where I can, see e.g. also my
> > recent
> > post on this list [1]. If our libraries would be better tested we
> > could
> > be more comfortable feeding it with untrusted inputs.
> 
> I agree some more active prevention would be positive, sounds like
> something to tackle in the libraries dealing with file formats
> though,
> Tracker is a strawman here, in the sense that filesystem extraction
> it's only exploitable through its tracker-extract's modules, and
> those
> are for the most part implemented using external libraries.

Re: Tracker as a security risk

2016-12-05 Thread Tobias Mueller
Hi.

On Mo, 2016-12-05 at 16:42 +0100, Carlos Garnacho wrote:
> And I should add... Tracker is not alone here, if it's not Tracker
> stumbling on infected content, with varying but still rather low
> levels of interaction it may be a thumbnailer, a previewer like sushi,
> or the web browser itself streaming content which hit this. So there's
> more places in need of further isolation when dealing with untrusted
> content.
> 
> And still, the chain is only as strong as its weakest link, as soon as
> there is anything opening that file with wide enough permissions to
> cause any harm, you're essentially screwed.
True. Which is why operating on untrusted input with regular privileges
is a bad idea™. The cases you've listed require some degree of user
intervention, though. The blog post described an attack requiring very
little user intervention, which makes it scarier than the attacks
you've just described.

>  This might sound like an
> argument to running every app through flatpak, although I think the
> long term answer always is "fix the vulnerability!".
Hah! That'd be great! Let's work hard on making that happen. However, I
think by now it's safe to assume that we cannot fix all the C code
there is. We've tried for the last decade or so.

I like the engagement regarding Rust. I hope it'll be successful.

Cheers,
  Tobi

Re: Tracker as a security risk

2016-12-05 Thread Carlos Garnacho
Hi,

On Mon, Dec 5, 2016 at 3:01 PM, Hanno Böck  wrote:
> On Mon, 5 Dec 2016 13:44:40 +
> Sam Thursfield  wrote:
>
>> The design of Tracker takes the risks into account. Metadata
>> extraction is isolated in its own process (tracker-extract) which can
>> crash without (theoretically) causing any harm.
>
> I don't see how that helps against security vulnerabilities.
>
> Having an isolated process probably helps in a way that a crash won't
> cause the whole tracker service to malfunction. Thus parsing broken
> files won't cause a service disruption. But as long as this process
> runs with normal user rights this doesn't protect in a security sense.
>
>> > I think there needs to be a wider discussion about this and the
>> > fundamental design choices done here need to be questioned.
>>
>> What questions do you have in particular?
>
> Quite frankly, I don't claim to have all the answers here, that's why I
> formulated it in an open "needs discussion" way.
>
> I think sandboxing the tracker parser (which you already indicated
> in your mail) is probably the most reasonable way to go forward. This
> isn't exactly my area of expertise, so I can't comment on which
> technique here is most promising.

It indeed sounds possible to lift extraction into a separate process
with limited access to the filesystem; we essentially need to pass in
an fd to mmap() and an output fd to receive SPARQL back. There are
just two things to consider:

- The extraction process sometimes needs access to miscellaneous files
(e.g. CUE files, XMP sidecar files, ...); those might be passed along
too, but then we need to detect those cases beforehand.

- Ideally we wouldn't spawn one process per file being extracted,
although if we go to DEFCON 1 levels of paranoia, that's probably what
should happen.
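The fd-passing idea can be sketched like this (Python, os.fork()-based; the SPARQL output and the nie:byteSize property are only illustrative, not Tracker's actual protocol):

```python
import mmap
import os
import tempfile

# A file standing in for something tracker-extract would index.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"dummy media content")
tmp.close()

def extract(fd, out_fd):
    # The worker gets only an open fd and a pipe: it never needs to
    # open anything from the filesystem itself.
    with mmap.mmap(fd, 0, prot=mmap.PROT_READ) as data:
        # Stand-in for a real metadata parser.
        os.write(out_fd, b"INSERT { <file> nie:byteSize %d }" % len(data))
    os._exit(0)

fd = os.open(tmp.name, os.O_RDONLY)
r, w = os.pipe()
pid = os.fork()
if pid == 0:        # child: the would-be sandboxed extractor
    os.close(r)
    extract(fd, w)
os.close(w)
os.close(fd)
sparql = os.read(r, 4096).decode()
os.waitpid(pid, 0)
```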

Anyway, this IMHO gets too far into the technical side for this ML; we
already have https://bugzilla.gnome.org/show_bug.cgi?id=764786 filed
against Tracker, and it's already high on my list for fixing in 1.12,
so feel free to join there.

And I should add... Tracker is not alone here: if it's not Tracker
stumbling on infected content, then with varying but still rather low
levels of interaction it may be a thumbnailer, a previewer like sushi,
or the web browser itself streaming content that hits this. So there
are more places in need of further isolation when dealing with
untrusted content.

And still, the chain is only as strong as its weakest link: as soon as
anything opens that file with wide enough permissions to cause any
harm, you're essentially screwed. This might sound like an argument
for running every app through Flatpak, although I think the long-term
answer is always "fix the vulnerability!".

>
>
> The other issue I think is that the quality of huge parts of the foss
> ecosystem needs to be improved. The good news here is that we got some
> powerful tools in terms of fuzzing (afl, libfuzzer) and memory safety
> bug detection (asan) in the past years. Ideally all free software devs
> should be aware of those tools and use them in their development
> process. I'm trying to help here where I can, see e.g. also my recent
> post on this list [1]. If our libraries would be better tested we could
> be more comfortable feeding it with untrusted inputs.

I agree some more active prevention would be positive; it sounds like
something to tackle in the libraries dealing with file formats,
though. Tracker is a strawman here, in the sense that filesystem
extraction is only exploitable through tracker-extract's modules, and
those are for the most part implemented using external libraries.

Cheers,
  Carlos

Re: Tracker as a security risk

2016-12-05 Thread Emilio Pozuelo Monfort
On 05/12/16 14:03, Hanno Böck wrote:
> Hi,
> 
> I wanted to point out a recent blogpost by IT security export Chris
> Evans:
> https://scarybeastsecurity.blogspot.dk/2016/11/0day-poc-risky-design-decisions-in.html
> 
> The short version: Chrome automatically downloads files without a file
> dialog, tracker (part of the GNOME desktop) subsequently automatically
> indexes these files with a wide variety of parsers (including
> gstreamer, but also others like imagemagick).
> 
> While the bugs that evans points out have been fixed (and the gstreamer
> team has fixed a whole bunch of other potential security issues I
> reported in the past days, thanks!), the whole design of Tracker seems
> incredibly risky. It is certainly worthwhile trying to make the
> underlying software more secure, but having tried to do that before
> I find it unlikely that projects like gstreamer or imagemagick will
> ever be in a state where we can feel comfortable feeding them with
> untrusted files.
> 
> The core problem here is that tracker automatically parses files of
> potentially unknown origin with parsers that haven't been built with
> security in mind. This happens without any sandboxing.
> 
> I think there needs to be a wider discussion about this and the
> fundamental design choices done here need to be questioned.

Thanks for starting this discussion.

I think these questions also apply to the thumbnailer service and to
gtk+/gdk-pixbuf APIs, e.g. the filechooser. See e.g.
http://www.openwall.com/lists/oss-security/2016/07/13/11 aka CVE-2016-6352,
and http://seclists.org/oss-sec/2016/q3/7 aka CVE-2016-6163.

Cheers,
Emilio

Re: Github's pull requests and GNOME

2016-12-05 Thread Andrea Veri
Hey,

I've added nemiver to the excludes list; your permissions on that
repository have been granted.

cheers,

2016-12-05 15:01 GMT+01:00 Hubert Figuière :
> On 29/11/16 10:00 AM, Andrea Veri wrote:
>
>> Finally the GNOME Infrastructure Team is going to introduce a daily
>> cronjob (first run is scheduled next week, enough time for collecting
>> excludes) that will close all the pull requests for each repository
>> hosted under the GNOME organization umbrella. The closure message will
>> look like this:
>
>
> How does one mark the PR as "don't close" ? I was working with a new
> contributor in a PR that I would have taken to merge the usual
> non-github way. But now the cronjob has closed it. Can't reopen it...
>
> This is the PR:
>https://github.com/GNOME/nemiver/pull/2
>
> Maybe the cronjob should check whether there was activity from more than
> one person and act on that.
>
>> If you don't want the script to actually run against any of your
>> maintained products, modules, components please drop me an e-mail and
>> I'll make sure proper excludes will be set.
>
> Given that I'm not the official maintainer at all for the module. Just
> MITMing contributions...
> Can we have it reopen? Can I be granted permissions to do so ? (username
> on github is @hfiguiere)
>
> Thanks,
>
> Hub
>



-- 
Cheers,

Andrea

Debian Developer,
Fedora / EPEL packager,
GNOME Infrastructure Team Coordinator,
GNOME Foundation Board of Directors Secretary,
GNOME Foundation Membership & Elections Committee Chairman

Homepage: http://www.gnome.org/~av

Re: Tracker as a security risk

2016-12-05 Thread Hanno Böck
On Mon, 5 Dec 2016 13:44:40 +
Sam Thursfield  wrote:

> The design of Tracker takes the risks into account. Metadata
> extraction is isolated in its own process (tracker-extract) which can
> crash without (theoretically) causing any harm.

I don't see how that helps against security vulnerabilities.

Having an isolated process probably helps in that a crash won't cause
the whole Tracker service to malfunction, so parsing broken files
won't cause a service disruption. But as long as this process runs
with normal user rights, this doesn't protect in a security sense.

> > I think there needs to be a wider discussion about this and the
> > fundamental design choices done here need to be questioned.  
> 
> What questions do you have in particular?

Quite frankly, I don't claim to have all the answers here; that's why
I formulated it in an open "needs discussion" way.

I think sandboxing the tracker parser (which you already indicated
in your mail) is probably the most reasonable way to go forward. This
isn't exactly my area of expertise, so I can't comment on which
technique here is most promising.


The other issue, I think, is that the quality of huge parts of the
FOSS ecosystem needs to be improved. The good news is that we have
gained some powerful tools for fuzzing (afl, libFuzzer) and
memory-safety bug detection (ASan) in the past years. Ideally all free
software devs should be aware of those tools and use them in their
development process. I'm trying to help where I can; see e.g. my
recent post on this list [1]. If our libraries were better tested, we
could be more comfortable feeding them untrusted inputs.


[1]
https://www.mail-archive.com/desktop-devel-list@gnome.org/msg28161.html

-- 
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42

Re: Github's pull requests and GNOME

2016-12-05 Thread Hubert Figuière
On 29/11/16 10:00 AM, Andrea Veri wrote:

> Finally the GNOME Infrastructure Team is going to introduce a daily
> cronjob (first run is scheduled next week, enough time for collecting
> excludes) that will close all the pull requests for each repository
> hosted under the GNOME organization umbrella. The closure message will
> look like this:


How does one mark a PR as "don't close"? I was working with a new
contributor on a PR that I would have taken and merged the usual
non-GitHub way. But now the cronjob has closed it, and I can't reopen
it...

This is the PR:
   https://github.com/GNOME/nemiver/pull/2

Maybe the cronjob should check whether there was activity from more than
one person and act on that.

> If you don't want the script to actually run against any of your
> maintained products, modules, components please drop me an e-mail and
> I'll make sure proper excludes will be set.

Given that I'm not the official maintainer of the module at all, just
MITMing contributions... Can we have it reopened? Can I be granted
permissions to do so? (My username on GitHub is @hfiguiere.)

Thanks,

Hub



Re: Tracker as a security risk

2016-12-05 Thread Sam Thursfield
[cc'ing tracker-l...@gnome.org]

Hi Hanno

On Mon, Dec 5, 2016 at 1:03 PM, Hanno Böck  wrote:
> I wanted to point out a recent blogpost by IT security export Chris
> Evans:
> https://scarybeastsecurity.blogspot.dk/2016/11/0day-poc-risky-design-decisions-in.html

Thanks for the link.

...
> While the bugs that evans points out have been fixed (and the gstreamer
> team has fixed a whole bunch of other potential security issues I
> reported in the past days, thanks!), the whole design of Tracker seems
> incredibly risky. It is certainly worthwhile trying to make the
> underlying software more secure, but having tried to do that before
> I find it unlikely that projects like gstreamer or imagemagick will
> ever be in a state where we can feel comfortable feeding them with
> untrusted files.
>
> The core problem here is that tracker automatically parses files of
> potentially unknown origin with parsers that haven't been built with
> security in mind. This happens without any sandboxing.

The design of Tracker takes the risks into account. Metadata
extraction is isolated in its own process (tracker-extract) which can
crash without (theoretically) causing any harm.

We could and should add more limits on that process to protect against
exploits in the libraries it uses for parsing. Here's a sketch from
memory of what the tracker-extract process needs access to:

* read access to the file it is parsing
* read/write access to the media-art cache (~/.local/cache/media-art)

It doesn't need network access, hardware access, or (I think) any
filesystem access other than what's listed above.

It does need read & write access to the Tracker database over D-Bus
(org.freedesktop.Tracker1); that's a bit of a risk, which the
tracker-extract process could perhaps mitigate by running the actual
parsing in another separate process. This second process could be fed
the input file on stdin and write the metadata it finds to stdout,
which would mean it needs no filesystem access other than the
media-art cache.
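That stdin/stdout split can be sketched as follows (Python; the "parser" and its byte-size "metadata" are placeholders for real extractor modules):

```python
import subprocess
import sys

# Hypothetical parser process: reads the raw file on stdin, emits the
# metadata it finds on stdout, and never opens anything itself.
PARSER = r"""
import sys
data = sys.stdin.buffer.read()
sys.stdout.write("byte-size=%d\n" % len(data))
"""

result = subprocess.run(
    [sys.executable, "-c", PARSER],
    input=b"pretend this is an mp3",  # would be the file's contents
    capture_output=True,
)
metadata = result.stdout.decode()
```

A process with this shape has no reason to hold any filesystem or network capability, which is what makes it a good candidate for a tight sandbox profile.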

Are there any volunteers with time to look at this? I'm not sure of
the best way to implement it; SELinux, AppArmor or Capsicum would all
work, but none of them is available on all *nix platforms. Or the
`bubblewrap` tool could be used, although I think that's also
Linux-only at the moment.

> I think there needs to be a wider discussion about this and the
> fundamental design choices done here need to be questioned.

What questions do you have in particular?

Sam

Re: Tracker as a security risk

2016-12-05 Thread Tobias Mueller
Hi Hanno.

Thanks for bringing it up.

On Mo, 2016-12-05 at 14:03 +0100, Hanno Böck wrote:
> The core problem here is that tracker automatically parses files of
> potentially unknown origin with parsers that haven't been built with
> security in mind. This happens without any sandboxing.
Right.  But sandboxing the parsers properly would mitigate most of the
problems, right?

I know too little about Tracker's architecture to be able to estimate
how much of a problem it would be to have the parsers run in a sandbox.
I hope it's an easy change to make and it may be even planned already.
Let's hope someone from the Tracker team can comment.

Cheers,
  Tobi

Tracker as a security risk

2016-12-05 Thread Hanno Böck
Hi,

I wanted to point out a recent blog post by IT security expert Chris
Evans:
https://scarybeastsecurity.blogspot.dk/2016/11/0day-poc-risky-design-decisions-in.html

The short version: Chrome automatically downloads files without a file
dialog, and Tracker (part of the GNOME desktop) then automatically
indexes these files with a wide variety of parsers (including
GStreamer, but also others like ImageMagick).

While the bugs that Evans points out have been fixed (and the
GStreamer team has fixed a whole bunch of other potential security
issues I reported in the past days, thanks!), the whole design of
Tracker seems incredibly risky. It is certainly worthwhile trying to
make the underlying software more secure, but having tried to do that
before, I find it unlikely that projects like GStreamer or ImageMagick
will ever be in a state where we can feel comfortable feeding them
untrusted files.

The core problem here is that tracker automatically parses files of
potentially unknown origin with parsers that haven't been built with
security in mind. This happens without any sandboxing.

I think there needs to be a wider discussion about this and the
fundamental design choices done here need to be questioned.

-- 
Hanno Böck
https://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: FE73757FA60E4E21B937579FA5880072BBB51E42