Re: Potential DateTime DOS Attack
Zefram wrote:
> Michael G Schwern wrote:
>> A) Have DT::TZ be a wrapper around DT::TZ::Tzfile
>> B) Have DT::TZ ship with (or download) v2 Olson data
>
> I'd do this in a finer-grained way.  One module encapsulating the Olson
> tzfiles, with no logic for interpreting them; this can be used
> independently of DT:TZ:Tzfile, and is the only module that requires
> regular updating.  Another module tying that together with DT:TZ:Tzfile,
> to provide full Olson timezone service without any of the
> DT:TZ-specific exceptions.  Then DT:TZ wraps that, along with supplying
> its own special cases.

Gotcha.

>> C) Write special case .pm files for the special cases
>
> These already exist, if I understand you correctly.

Oh good.  My worry was that it would be necessary to keep a full,
parallel implementation of DT::TZ around just to deal with the special
cases.

>> How big is the compiled Olson data?
>
> Looking here at a Debian installation, excluding the "posix" and
> "right" subdirectories, here are some stats:
>
> * 23 directories
> * 581 filenames for regular files
> * 823 kB apparent total file size (multi-linked files counting multiply)
> * 513 distinct regular files
> * 729 kB real total file size (multi-linked files counting once)
> * 287 kB compressed tarball of unique files
>
> For kicks, compare against the DateTime/TimeZone directory, excluding
> DT:TZ infrastructure:
>
> * 14 directories
> * 388 filenames for regular files
> * 2127 kB apparent total file size
> * 388 distinct regular files
> * 2127 kB real total file size
> * 315 kB compressed tarball of unique files

Good, no big expansion of install size.

--
THIS I COMMAND!
Re: Potential DateTime DOS Attack
Michael G Schwern wrote:
> A) Have DT::TZ be a wrapper around DT::TZ::Tzfile
> B) Have DT::TZ ship with (or download) v2 Olson data

I'd do this in a finer-grained way.  One module encapsulating the Olson
tzfiles, with no logic for interpreting them; this can be used
independently of DT:TZ:Tzfile, and is the only module that requires
regular updating.  Another module tying that together with DT:TZ:Tzfile,
to provide full Olson timezone service without any of the DT:TZ-specific
exceptions.  Then DT:TZ wraps that, along with supplying its own special
cases.

> C) Write special case .pm files for the special cases

These already exist, if I understand you correctly.

> How big is the compiled Olson data?

Looking here at a Debian installation, excluding the "posix" and "right"
subdirectories, here are some stats:

* 23 directories
* 581 filenames for regular files
* 823 kB apparent total file size (multi-linked files counting multiply)
* 513 distinct regular files
* 729 kB real total file size (multi-linked files counting once)
* 287 kB compressed tarball of unique files

For kicks, compare against the DateTime/TimeZone directory, excluding
DT:TZ infrastructure:

* 14 directories
* 388 filenames for regular files
* 2127 kB apparent total file size
* 388 distinct regular files
* 2127 kB real total file size
* 315 kB compressed tarball of unique files

-zefram
Re: Potential DateTime DOS Attack
Zefram wrote:
> These issues affect about ten zones in the current Olson database
> (depending on how you count them).  In principle you'd get better
> results by using the DT:TZ approach and just implementing it in a less
> dippy way.  But the difference in results is not very much, and it's
> probably easier to get a working implementation from DT:TZ:Tzfile.
> It'd also avoid the need for those enormous DateTime/TimeZone/$a/$b.pm
> files.

It sounds like what might be simplest is to:

A) Have DT::TZ be a wrapper around DT::TZ::Tzfile
B) Have DT::TZ ship with (or download) v2 Olson data
C) Write special case .pm files for the special cases

How big is the compiled Olson data?

Do all those individual time zone .pm files have to be preserved for
backwards compat?  ie. Does "use DateTime::TimeZone::Africa::Addis_Ababa"
have to work?  It would be nice if they could be totally removed.  Their
existence doesn't appear to be documented in DT::TZ.

> Dave Rolsky has previously objected to the idea of having the
> default-installed DT:TZ depend on DT:TZ:Tzfile or DT:TZ:SystemV.
> He said in :
>
>     No one (that I recall) has asked for Posix or binary file support,
>     so making them dependencies seems like overkill.
>
> I'm happy to implement more timezone logic on top of DT:TZ:Tzfile, up
> to a complete drop-in DT:TZ replacement, but you'll have to argue with
> Dave about actually replacing the existing DT:TZ.

That sounds like the objection was when it would be in addition to
DT::TZ and just to get support to read the Olson files.  If this is
replacing the guts of DT::TZ, that seems a different matter.  Dave?

--
emacs -- THAT'S NO EDITOR... IT'S AN OPERATING SYSTEM!
Re: Potential DateTime DOS Attack
Dave Rolsky wrote:
> I don't think you need auto-downloads.  People download a new DT::TZ
> when I release one, or they don't.  No one's complained about that.

It'll only rarely bite people.  That doesn't make it OK.  But that's an
orthogonal discussion that I don't want to stray into here.

> The real question for me is whether the generated binary data will be
> cross-platform compatible.  Are there any 32/64 bit issues?  Endianness?

The tzfile format is fully defined: the same binaries apply on any
platform.  See the DT:TZ:Tzfile code.  (You'll see that it uses both
32-bit and 64-bit integer fields, all big-endian.)  There's potential
difficulty in using extremely far future dates on a 32-bit Perl, but
that's inherent to the way DateTime uses RD.  No Y2038 problem.

> As long as they are cross-platform and the files can be distributed via
> CPAN, I think switching to this approach would be fine.

OK, I'll work on it.

-zefram
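[Editor's note: the fixed, big-endian layout Zefram describes is what makes
compiled tzfiles portable.  As a rough illustration of the layout defined by
tzfile(5) -- this is a sketch, not DT:TZ:Tzfile's actual code -- the 44-byte
header can be unpacked like so:]

```perl
use strict;
use warnings;

# Parse the fixed 44-byte tzfile header: the magic "TZif", a version
# byte, 15 reserved bytes, then six 32-bit big-endian unsigned counts.
# Field names follow the tzfile(5) specification.
sub parse_tzfile_header {
    my ($bytes) = @_;
    my ($magic, $version, @counts) = unpack 'a4 a1 x15 N6', $bytes;
    die "not a tzfile\n" unless $magic eq 'TZif';
    my %h = (version => $version);
    @h{qw(isutcnt isstdcnt leapcnt timecnt typecnt charcnt)} = @counts;
    return \%h;
}
```

Because every field is a fixed-width big-endian integer, the same bytes parse
identically on 32-bit and 64-bit machines of either endianness.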
Re: Potential DateTime DOS Attack
On Thu, 17 Dec 2009, Zefram wrote:
> Almost.  In principle, the DT:TZ way of working handles far future
> dates slightly better.

I'd actually say this doesn't matter.  Look at how often time zone
definitions change.  Looking at America/Chicago, I see changes in 1974,
1975, 1976, 1987, and 2007.  That tells me that I should not count on
_anything_ being the same 30+ years from now.  And the US is stable
compared to some places.

> To get a full DT:TZ-like service from DT:TZ:Tzfile, you need a layer or
> two over it.  I don't want to bundle the Olson files with DT:TZ:Tzfile
> itself: the module is for the single job of interpreting existing
> tzfiles.  But it'd be easy to produce a module that encapsulates a full
> set of compiled tzfiles from Olson.  Slightly harder to produce a
> cleverer module that automatically downloads the latest tzdata.
> A module layered over that could then use the tzfiles together with
> DT:TZ:Tzfile to provide the full Olson timezone service.  And DT:TZ can
> then be reimplemented to use *that* as the basis for the geographic
> timezones.

I don't think you need auto-downloads.  People download a new DT::TZ
when I release one, or they don't.  No one's complained about that.

The real question for me is whether the generated binary data will be
cross-platform compatible.  Are there any 32/64 bit issues?  Endianness?

As long as they are cross-platform and the files can be distributed via
CPAN, I think switching to this approach would be fine.

> No one (that I recall) has asked for Posix or binary file support, so
> making them dependencies seems like overkill.

Regarding asking for binary file support, I meant no one seemed to care
about using their existing binary files.  If DT::TZ depended on
DT::TZ::OlsonData, that would be fine.

-dave

/*  http://VegGuide.org                http://blog.urth.org
    Your guide to all that's veg      House Absolute(ly Pointless)  */
Re: Potential DateTime DOS Attack
Michael G Schwern wrote:
> That looks like a great place to start.  If it matches the DT::TZ
> interface, and shipped a default time zone database, could DT::TZ
> simply be replaced with it?

Almost.  In principle, the DT:TZ way of working handles far future dates
slightly better.

DT:TZ:Tzfile can only work from what's in the compiled tzfile.  When the
explicitly-listed observances run out (typically at 2038), the tzfile
can only express future rules by means of a SystemV-style $TZ string.
For most zones this is perfectly adequate: for example, my
Debian-supplied copy of Europe/London says "GMT0BST,M3.5.0/1,M10.5.0",
which accurately expresses the rule laid down by the Summer Time Order
2002.  DT:TZ:Tzfile passes this string on to DT:TZ:SystemV, which does
the work, and it's really DT:TZ:SystemV which is responsible for there
being no silly delay for linear calculation.

A handful of zones' rules cannot be adequately described in the System V
style.  For example, America/Santiago starts DST on whichever Sunday
falls between October 9 and October 15, and has a similar rule for
ending DST.  This is properly described in the Olson tzdata source file,
but because it can't be expressed in System V style it can't be
described by a compiled tzfile.  DT:TZ isn't limited by the tzfile
format, because it works from the tzdata source.  In the case of
America/Santiago, it can therefore give correct answers for future years
where the tzfile says nothing and so DT:TZ:Tzfile is forced to give up.
(My copy of the compiled America/Santiago has explicitly-listed
observances out to 2409, instead of the usual 2038, presumably to
compensate for this limitation.)

There are also some rules that DT:TZ won't handle correctly.  For
example, Asia/Jerusalem uses a DST-ending rule that is based on the
Jewish calendar.  Not only can this not be expressed in System V style,
it can't be expressed in tzdata source either.  Whoever wrote the tzdata
source has included an explicit list of ending dates, up to 2037, and
both DT:TZ and DT:TZ:Tzfile are limited to that list.  (However, while
DT:TZ:Tzfile recognises that its knowledge has run out, DT:TZ
incorrectly extends the last indicated observance into the infinite
future.)

These issues affect about ten zones in the current Olson database
(depending on how you count them).  In principle you'd get better
results by using the DT:TZ approach and just implementing it in a less
dippy way.  But the difference in results is not very much, and it's
probably easier to get a working implementation from DT:TZ:Tzfile.  It'd
also avoid the need for those enormous DateTime/TimeZone/$a/$b.pm files.

To get a full DT:TZ-like service from DT:TZ:Tzfile, you need a layer or
two over it.  I don't want to bundle the Olson files with DT:TZ:Tzfile
itself: the module is for the single job of interpreting existing
tzfiles.  But it'd be easy to produce a module that encapsulates a full
set of compiled tzfiles from Olson.  Slightly harder to produce a
cleverer module that automatically downloads the latest tzdata.  A
module layered over that could then use the tzfiles together with
DT:TZ:Tzfile to provide the full Olson timezone service.  And DT:TZ can
then be reimplemented to use *that* as the basis for the geographic
timezones.

Dave Rolsky has previously objected to the idea of having the
default-installed DT:TZ depend on DT:TZ:Tzfile or DT:TZ:SystemV.
He said in :

    No one (that I recall) has asked for Posix or binary file support,
    so making them dependencies seems like overkill.

I'm happy to implement more timezone logic on top of DT:TZ:Tzfile, up to
a complete drop-in DT:TZ replacement, but you'll have to argue with Dave
about actually replacing the existing DT:TZ.

-zefram
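[Editor's note: for concreteness, the "M3.5.0" in the Europe/London $TZ string
above is POSIX notation for "month 3, week 5 (meaning last), day 0 (Sunday)".
The following sketch shows how such a rule resolves to a calendar date; it is
illustrative only, not DT:TZ:SystemV's actual code:]

```perl
use strict;
use warnings;
use Time::Local qw(timegm);

# Resolve a POSIX Mm.w.d rule to a day of the month for a given year.
# Week 5 means "the last such weekday of the month".
sub mwd_to_day {
    my ($year, $month, $week, $dow) = @_;
    my @mdays = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31);
    $mdays[1] = 29 if ($year % 4 == 0 && $year % 100 != 0) || $year % 400 == 0;
    # Day of week of the 1st of the month (0 = Sunday).
    my $first_dow = (gmtime(timegm(0, 0, 12, 1, $month - 1, $year)))[6];
    my $day = 1 + ($dow - $first_dow) % 7;      # first $dow on/after the 1st
    $day += 7 * ($week - 1);
    $day -= 7 while $day > $mdays[$month - 1];  # clamp "week 5" to the last
    return $day;
}
```

Because this is closed-form, a $TZ-string zone costs the same to evaluate for
year 3000 as for next year -- no enumeration of intervening transitions.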
Re: Potential DateTime DOS Attack
Zefram wrote:
> J. Shirley wrote:
>> Do not try to use named time zones (like "America/Chicago") with
>> dates very far in the future (thousands of years).  The current
>> implementation of DateTime::TimeZone will use a huge amount of memory
>> calculating all the DST changes from now until the future date.
>
> You could instead use DateTime::TimeZone::Tzfile, which does not suffer
> from this problem.  Just requires that you have the compiled Olson
> files (which are freely available if you don't already have them).
>
>     $zone = DateTime::TimeZone::Tzfile->new(
>         filename => "/usr/share/zoneinfo/America/Chicago",
>     );
>     $dt = DateTime->new(
>         year => 3, month => 1, day => 1,
>         time_zone => $zone,
>     );

That looks like a great place to start.  If it matches the DT::TZ
interface, and shipped a default time zone database, could DT::TZ simply
be replaced with it?  DT::TZ appears to be duplicating a lot of the work
that's already in a tzfile.

--
Reality is that which, when you stop believing in it, doesn't go away.
    -- Philip K. Dick
Re: Potential DateTime DOS Attack
On Wed, 16 Dec 2009, Michael G Schwern wrote:
> Give me something to work with here.  Some insight into what and why
> DateTime is doing what it's doing.  Is there a reason that DST info has
> to be generated linearly?  Would it be difficult to hold off on
> generating time zone info until it's needed?  Are there instructions
> somewhere for dealing with the Olson database?  SOME sort of discussion
> about how to solve the problem rather than the ways to paper over it.

I've thought about this a bit, and one solution I came up with was
something like this ...

When generating the time zone data, we know that after a certain point,
there is either one rule in effect (zone without DST or with permanent
DST), or two rules in fixed alternation.  Either way, we generate a
subroutine that determines the current time zone data (which is referred
to as a "span" internally, as in a span of time during which a specific
rule is in effect).

For the one-rule zones, this is trivial.  Just return the data for the
last rule.  For the two-rule zones, this should be calculable
mathematically.  In both cases, we can simply not store the generated
data, ever.

The generated files already include pre-calculated data for the next 10
years.  We could increase that to 20, and simply leave it at that.

-dave

/*  http://VegGuide.org                http://blog.urth.org
    Your guide to all that's veg      House Absolute(ly Pointless)  */
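[Editor's note: a toy sketch of the two-rule idea above.  The closure computes
the span for any requested date in O(1) instead of materialising every span up
to that date.  The name `make_span_calculator` and the [month, day]
simplification of the real transition rules are ours, not
DateTime::TimeZone internals:]

```perl
use strict;
use warnings;

# Build a closure for a zone that, past its last explicit span,
# alternates between a standard rule and a DST rule on fixed dates
# each year.  Offsets are in seconds east of UTC.
sub make_span_calculator {
    my ($std_offset, $dst_offset, $dst_start, $dst_end) = @_;
    return sub {
        my ($year, $month, $day) = @_;
        # String comparison on YYYYMMDD keys avoids any date arithmetic.
        my $ymd  = sprintf '%04d%02d%02d', $year, $month, $day;
        my $from = sprintf '%04d%02d%02d', $year, @$dst_start;
        my $to   = sprintf '%04d%02d%02d', $year, @$dst_end;
        my $is_dst = ($ymd ge $from && $ymd lt $to) ? 1 : 0;
        return {
            offset => $is_dst ? $dst_offset : $std_offset,
            is_dst => $is_dst,
        };
    };
}
```

With rules roughly resembling US Central time,
`make_span_calculator(-21600, -18000, [3, 8], [11, 1])` answers a query for
year 3058 as fast as one for next year, and nothing is cached.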
Re: Potential DateTime DOS Attack
On Wed, Dec 16, 2009 at 4:16 PM, Michael G Schwern wrote:
> I am, quite frankly, appalled at the response I've gotten to this
> report.
>
> No, this is not something the user should be guarding against.  No,
> documenting it does not make it go away.  No, you shouldn't put an
> arbitrary upper bound in.  No, I should not have been using UTC.  That
> is all accepting low-quality.  We don't do that in Perl.
>
> This isn't just an annoyance, it's a widespread security hole triggered
> by totally innocent use.  It's like finding out my doorbell will cause
> my house to explode if somebody buzzes too long.  I do not want to
> program in an environment where the assumption is that everything is
> dangerous.  This isn't even a problem one should imagine encountering.
>
> The reaction I was expecting was more along the lines of "oh fuck,
> that's really bad, let's figure out how to fix this".  Yes, it's ok to
> report a problem without a ready solution.  Instead, I'm told it's
> documented and to cough up a patch.
>
> This is not a simple problem to solve.  I'm not even sure what the
> efficient DST algorithm is, though I'm looking for it.  I've looked
> into the DateTime::TimeZone code before and this is not going to be a
> simple patch.  I'm willing to try and fix it, but not if the DateTime
> folks, the folks who know the code, are reacting with something between
> lethargy and hostility.  It's going to be hard enough without dragging
> everyone along.
>
> Give me something to work with here.  Some insight into what and why
> DateTime is doing what it's doing.  Is there a reason that DST info has
> to be generated linearly?  Would it be difficult to hold off on
> generating time zone info until it's needed?  Are there instructions
> somewhere for dealing with the Olson database?  SOME sort of discussion
> about how to solve the problem rather than the ways to paper over it.
I don't want to be insensitive, but I think this response is a touch
melodramatic.

First, Perl has plenty of dangerous behaviors.  CGI's param(...)
returning a list can cause a lot of problems and exploits if a user
isn't careful and doesn't sanitize their input properly.  To claim that
"Perl doesn't do that" doesn't make it true.  Programming is hard.

Second, I think you grossly misconstrued the responses to this.  I think
everybody suggesting things in the thread did it with the mentality of
an intermediate fix rather than a long-term one.  Very few people are
comfortable enough with the internals of DateTime to offer up proper
solutions.

Third, you aren't going to get an "oh fuck" reaction out of a problem
that is documented, any more than you would expect Congress recalling
all vehicles because if you hold the accelerator down you could crash
into a brick wall.  Analogies suck, let's not use them.  The point is
that raising a known issue and expecting something different than "Yes,
we know about it, what are other ways to handle it" seems a little on
the unreasonable side.

On a constructive note, DateTime::TimeZone::Tzfile has Olson timezone
information.  I had no idea that it didn't suffer the same problem (like
I said, I don't have a clue about the internals, and I just extrapolated
that to "Very few people"... busted).  I think that at the very least an
*intermediate* fix would be to update the pod warning to be more
prominent, with a mention that Olson tz files do not suffer the same
consequences.

Thanks,
-J

PS., Schwern, you're the man.  If you can't do it, who can!?  I still
have your tea, btw (probably long stale now).  If you can get this
fixed, I'll throw in a pitcher of beer with it.
Re: Potential DateTime DOS Attack
Schwern++.

On Wed, Dec 16, 2009 at 4:16 PM, Michael G Schwern wrote:
> I am, quite frankly, appalled at the response I've gotten to this
> report.
>
> No, this is not something the user should be guarding against.  No,
> documenting it does not make it go away.  No, you shouldn't put an
> arbitrary upper bound in.  No, I should not have been using UTC.  That
> is all accepting low-quality.  We don't do that in Perl.
>
> This isn't just an annoyance, it's a widespread security hole triggered
> by totally innocent use.  It's like finding out my doorbell will cause
> my house to explode if somebody buzzes too long.  I do not want to
> program in an environment where the assumption is that everything is
> dangerous.  This isn't even a problem one should imagine encountering.
>
> The reaction I was expecting was more along the lines of "oh fuck,
> that's really bad, let's figure out how to fix this".  Yes, it's ok to
> report a problem without a ready solution.  Instead, I'm told it's
> documented and to cough up a patch.
>
> This is not a simple problem to solve.  I'm not even sure what the
> efficient DST algorithm is, though I'm looking for it.  I've looked
> into the DateTime::TimeZone code before and this is not going to be a
> simple patch.  I'm willing to try and fix it, but not if the DateTime
> folks, the folks who know the code, are reacting with something between
> lethargy and hostility.  It's going to be hard enough without dragging
> everyone along.
>
> Give me something to work with here.  Some insight into what and why
> DateTime is doing what it's doing.  Is there a reason that DST info has
> to be generated linearly?  Would it be difficult to hold off on
> generating time zone info until it's needed?  Are there instructions
> somewhere for dealing with the Olson database?  SOME sort of discussion
> about how to solve the problem rather than the ways to paper over it.
Re: Potential DateTime DOS Attack
I am, quite frankly, appalled at the response I've gotten to this
report.

No, this is not something the user should be guarding against.  No,
documenting it does not make it go away.  No, you shouldn't put an
arbitrary upper bound in.  No, I should not have been using UTC.  That
is all accepting low-quality.  We don't do that in Perl.

This isn't just an annoyance, it's a widespread security hole triggered
by totally innocent use.  It's like finding out my doorbell will cause
my house to explode if somebody buzzes too long.  I do not want to
program in an environment where the assumption is that everything is
dangerous.  This isn't even a problem one should imagine encountering.

The reaction I was expecting was more along the lines of "oh fuck,
that's really bad, let's figure out how to fix this".  Yes, it's ok to
report a problem without a ready solution.  Instead, I'm told it's
documented and to cough up a patch.

This is not a simple problem to solve.  I'm not even sure what the
efficient DST algorithm is, though I'm looking for it.  I've looked into
the DateTime::TimeZone code before and this is not going to be a simple
patch.  I'm willing to try and fix it, but not if the DateTime folks,
the folks who know the code, are reacting with something between
lethargy and hostility.  It's going to be hard enough without dragging
everyone along.

Give me something to work with here.  Some insight into what and why
DateTime is doing what it's doing.  Is there a reason that DST info has
to be generated linearly?  Would it be difficult to hold off on
generating time zone info until it's needed?  Are there instructions
somewhere for dealing with the Olson database?  SOME sort of discussion
about how to solve the problem rather than the ways to paper over it.
Re: Potential DateTime DOS Attack
J. Shirley wrote:
> Do not try to use named time zones (like "America/Chicago") with dates
> very far in the future (thousands of years).  The current
> implementation of DateTime::TimeZone will use a huge amount of memory
> calculating all the DST changes from now until the future date.

You could instead use DateTime::TimeZone::Tzfile, which does not suffer
from this problem.  It just requires that you have the compiled Olson
files (which are freely available if you don't already have them).

    $zone = DateTime::TimeZone::Tzfile->new(
        filename => "/usr/share/zoneinfo/America/Chicago",
    );
    $dt = DateTime->new(
        year => 3, month => 1, day => 1,
        time_zone => $zone,
    );

-zefram
Re: Potential DateTime DOS Attack
On Tue, Dec 15, 2009 at 9:21 PM, J. Shirley wrote:
> Perhaps I'm cynical, but in my mind the type of people who write bad
> applications not only wouldn't care about potential DateTime DoS
> attacks, but they would have many more egregious offenses.

That's true, but you are wrong that people with poorly written code
don't care about DoS or security issues.  The fact is that there are
poorly written apps that people use and depend on that don't have
central validation.  A decade or so of constant development by a dozen
or two different programmers doesn't always result in the most ideal
code.  DateTime might be the most central place to put in central
validation on year input.

> The best solution is that Schwern gets a patch in ;)

Yes, that would be a good solution, too.  ;)

--
Bill Moseley
mose...@hank.org
Re: Potential DateTime DOS Attack
> Applicable snippet:
>
>     Do not try to use named time zones (like "America/Chicago") with
>     dates very far in the future (thousands of years).  The current
>     implementation of DateTime::TimeZone will use a huge amount of
>     memory calculating all the DST changes from now until the future
>     date.  Use UTC or the floating time zone and you will be safe.

I knew this prior to you posting the snippet, but I think to be prudent
there is merit in explicitly highlighting the need to validate user
input.  I'm not going to lie -- I never thought of this as a sizable
vector for a DOS attack before Schwern brought it up.

--
Evan Carroll
System Lord of the Internets
http://www.evancarroll.com
Re: Potential DateTime DOS Attack
On Tue, Dec 15, 2009 at 8:54 PM, Bill Moseley wrote:
> On Tue, Dec 15, 2009 at 7:58 PM, J. Shirley wrote:
>> My vote goes for no changes, as it is in the POD as a warning and has
>> existed for a very long time.  The better fix is to write better
>> applications.
>
> Wise words.  It's about time all those existing organizations and
> people that earn their livelihood running massive legacy applications
> got off their butts and rewrote them.  Today.  ;)

Perhaps I'm cynical, but in my mind the type of people who write bad
applications not only wouldn't care about potential DateTime DoS
attacks, but they would have many more egregious offenses.  Bad
applications are bad, but sullying up good code to accommodate them is
an even worse idea in my book.

The best solution is that Schwern gets a patch in ;)

Schwern++  # gitpan is done, now you need something else!

-J
Re: Potential DateTime DOS Attack
On Tue, Dec 15, 2009 at 7:58 PM, J. Shirley wrote:
> My vote goes for no changes, as it is in the POD as a warning and has
> existed for a very long time.  The better fix is to write better
> applications.

Wise words.  It's about time all those existing organizations and people
that earn their livelihood running massive legacy applications got off
their butts and rewrote them.  Today.  ;)

--
Bill Moseley
mose...@hank.org
Re: Potential DateTime DOS Attack
On Tue, 15 Dec 2009, Michael G Schwern wrote:
> I know efficient 64 bit local time calculations are possible because
> the standard C library does it.  It's not because it's written in C,
> it's because it's using a non-O(n) algorithm.

Fantastic.  So I can expect a patch some time soon then?

-dave

/*  http://VegGuide.org                http://blog.urth.org
    Your guide to all that's veg      House Absolute(ly Pointless)  */
Re: Potential DateTime DOS Attack
On Tue, Dec 15, 2009 at 7:03 PM, Bill Moseley wrote:
> On Tue, Dec 15, 2009 at 6:12 PM, Lyle wrote:
>> Michael G Schwern wrote:
>>> Clever watchdogs can prevent this from bringing down a server, but I
>>> think we can all agree that a date library should not be the source
>>> of DOS attacks.
>>
>> Maybe a warning of this in the POD would be enough?  Or a more active
>> built-in restriction on future dates that users of DateTime must
>> manually override...
>
> Would a global be too ugly for a short-term fix?
>
>     $DateTime::MaxFutureYears = 20;  # no dates more than 20 years
>                                      # from current year

It's documented in the POD already.  If your application is sane, you
already verify user input, right?  Just an extra filter on the
validation.  Moose and Data::Verifier ftw:

    subtype ValidYear,
        as Int,
        where { $_ > 1900 && $_ < ((localtime)[5] + 1930) },
        message { "Valid years for this input must be after 1900 and within 30 years" };

My vote goes for no changes, as it is in the POD as a warning and has
existed for a very long time.  The better fix is to write better
applications.

-J
Re: Potential DateTime DOS Attack
On Tue, Dec 15, 2009 at 6:12 PM, Lyle wrote:
> Michael G Schwern wrote:
>> Clever watchdogs can prevent this from bringing down a server, but I
>> think we can all agree that a date library should not be the source of
>> DOS attacks.
>
> Maybe a warning of this in the POD would be enough?  Or a more active
> built-in restriction on future dates that users of DateTime must
> manually override...

Would a global be too ugly for a short-term fix?

    $DateTime::MaxFutureYears = 20;  # no dates more than 20 years from current year

--
Bill Moseley
mose...@hank.org
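[Editor's note: $DateTime::MaxFutureYears is only a proposal in this message;
nothing like it exists in DateTime.  Until something does, the guard has to
live in the application.  A minimal sketch, with names of our own choosing:]

```perl
use strict;
use warnings;

use constant MAX_FUTURE_YEARS => 20;

# Reject year inputs that would send DateTime::TimeZone into its long
# linear DST calculation.  Call this on user input before handing the
# year to DateTime->new.
sub validate_year {
    my ($year) = @_;
    my $this_year = (localtime)[5] + 1900;
    die "year out of range\n"
        if $year !~ /\A[0-9]{1,4}\z/
        || $year > $this_year + MAX_FUTURE_YEARS;
    return $year;
}
```

The digit check caps the year at four digits and the range check keeps it
within a window of the current year, so a request for year 30000 dies in the
validator instead of tying up the server.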
Re: Potential DateTime DOS Attack
Michael G Schwern wrote:
> Clever watchdogs can prevent this from bringing down a server, but I
> think we can all agree that a date library should not be the source of
> DOS attacks.

Maybe a warning of this in the POD would be enough?  Or a more active
built-in restriction on future dates that users of DateTime must
manually override...

Lyle
Re: Potential DateTime DOS Attack
On Tue, Dec 15, 2009 at 4:54 PM, Michael G Schwern wrote:
> I have discovered a potential DOS attack for all Perl applications
> which use DateTime and time zones.
>
> Last July I reported that getting a localized date in the future was
> very slow.
> http://rt.cpan.org/Public/Bug/Display.html?id=47671
>
>     $ time perl -wle 'use DateTime; print DateTime->new( year => 3058, time_zone => "local" );'
>     3058-01-01T00:00:00
>
>     real    0m7.820s
>     user    0m7.747s
>     sys     0m0.047s
>
> At the time this was just an oddity of my playing around with y2038 and
> far future dates.  Later I realized that because DateTime is chewing up
> CPU and memory during the calculation, this can be used as a DOS attack
> on a server running DateTime.  A lot of things use DateTime and a lot
> of things use time zones.
>
> I'll let somebody else get into the details why it happens, but if I
> can get a Perl web app to accept a year of, say, 3 I can get it to
> consume the CPU for a few hundred seconds (YMMV) and about 150 megs of
> memory.  If the process remains in memory, subsequent calls will be
> fast, but the memory will still be consumed.  This provides a very
> low-bandwidth way to swamp a server using Perl and DateTime.
>
> Clever watchdogs can prevent this from bringing down a server, but I
> think we can all agree that a date library should not be the source of
> DOS attacks.
>
> I know efficient 64 bit local time calculations are possible because
> the standard C library does it.  It's not because it's written in C,
> it's because it's using a non-O(n) algorithm.
>
> This upgrades the problem from "kooky date wonk problem" to "cross-site
> attack".
>
> --
> Whip me, beat me, make my code compatible with VMS!
Hi Schwern,

What you've mentioned is actually in the DateTime pod in the "Time Zone
Warnings" section:

http://search.cpan.org/~drolsky/DateTime-0.53/lib/DateTime.pm#Time_Zone_Warnings

Applicable snippet:

    Do not try to use named time zones (like "America/Chicago") with
    dates very far in the future (thousands of years).  The current
    implementation of DateTime::TimeZone will use a huge amount of
    memory calculating all the DST changes from now until the future
    date.  Use UTC or the floating time zone and you will be safe.

-J

--
J. Shirley :: jshir...@gmail.com :: Killing two stones with one bird.
http://our.coldhardcode.com/jshirley/
Potential DateTime DOS Attack
I have discovered a potential DOS attack for all Perl applications which
use DateTime and time zones.

Last July I reported that getting a localized date in the future was
very slow.
http://rt.cpan.org/Public/Bug/Display.html?id=47671

    $ time perl -wle 'use DateTime; print DateTime->new( year => 3058, time_zone => "local" );'
    3058-01-01T00:00:00

    real    0m7.820s
    user    0m7.747s
    sys     0m0.047s

At the time this was just an oddity of my playing around with y2038 and
far future dates.  Later I realized that because DateTime is chewing up
CPU and memory during the calculation, this can be used as a DOS attack
on a server running DateTime.  A lot of things use DateTime and a lot of
things use time zones.

I'll let somebody else get into the details why it happens, but if I can
get a Perl web app to accept a year of, say, 3 I can get it to consume
the CPU for a few hundred seconds (YMMV) and about 150 megs of memory.
If the process remains in memory, subsequent calls will be fast, but the
memory will still be consumed.  This provides a very low-bandwidth way
to swamp a server using Perl and DateTime.

Clever watchdogs can prevent this from bringing down a server, but I
think we can all agree that a date library should not be the source of
DOS attacks.

I know efficient 64 bit local time calculations are possible because the
standard C library does it.  It's not because it's written in C, it's
because it's using a non-O(n) algorithm.

This upgrades the problem from "kooky date wonk problem" to "cross-site
attack".

--
Whip me, beat me, make my code compatible with VMS!
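[Editor's note: to make the "non-O(n) algorithm" point concrete, tzfile-based
implementations keep a sorted list of transition times and answer offset
queries with a binary search, so a lookup costs O(log n) in the number of
known transitions rather than growing with how far away the requested date
is.  A minimal sketch with made-up transition data:]

```perl
use strict;
use warnings;

# Return the UTC offset in effect at $epoch, given a sorted list of
# [transition_time, utc_offset] pairs: binary-search for the last
# transition at or before $epoch.
sub offset_at {
    my ($transitions, $epoch) = @_;
    my ($lo, $hi) = (0, $#$transitions);
    my $best = 0;    # fall back to the earliest known observance
    while ($lo <= $hi) {
        my $mid = int(($lo + $hi) / 2);
        if ($transitions->[$mid][0] <= $epoch) {
            $best = $mid;
            $lo   = $mid + 1;
        } else {
            $hi = $mid - 1;
        }
    }
    return $transitions->[$best][1];
}
```

A query for a date thousands of years out is just another probe into the same
array; nothing is enumerated and no per-span data accumulates in memory.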