On Tuesday, 6 August 2013 at 12:32:13 UTC, jicman wrote:
On Tuesday, 6 August 2013 at 04:10:57 UTC, Andre Artus wrote:
On Monday, 5 August 2013 at 13:59:24 UTC, jicman wrote:
Greetings!
I have this code,
foreach (...)
{
    if (std.string.toLower(fext[0]) == "doc" ||
        std.string.toLower(fext[0]) == "docx" ||
        std.string.toLower(fext[0]) == "xls" ||
        std.string.toLower(fext[0]) == "xlsx" ||
        std.string.toLower(fext[0]) == "ppt" ||
        std.string.toLower(fext[0]) == "pptx")
        continue;
}
foreach (...)
{
    if (std.string.toLower(fext[0]) == "doc")
        continue;
    if (std.string.toLower(fext[0]) == "docx")
        continue;
    if (std.string.toLower(fext[0]) == "xls")
        continue;
    if (std.string.toLower(fext[0]) == "xlsx")
        continue;
    if (std.string.toLower(fext[0]) == "ppt")
        continue;
    if (std.string.toLower(fext[0]) == "pptx")
        continue;
    ...
    ...
}
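Either version repeats the lowercase conversion for every comparison. A single conversion followed by one membership test expresses the same filter more compactly; here is a sketch (the helper name isOfficeFile and the use of std.algorithm's canFind are our choices, not from the original post):

```d
import std.algorithm.searching : canFind;
import std.string : toLower;

// hypothetical helper: lowercase once, then a single membership test
bool isOfficeFile(string ext)
{
    static immutable officeExts = ["doc", "docx", "xls", "xlsx", "ppt", "pptx"];
    return officeExts.canFind(ext.toLower());
}

void main()
{
    assert(isOfficeFile("DOCX"));   // case-insensitive match
    assert(!isOfficeFile("txt"));   // non-Office extension falls through
}
```

Inside the loop, `if (isOfficeFile(fext[0])) continue;` then replaces either of the two versions above.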
thanks.
josé
What exactly are you trying to do with this? I get the
impression that there is an attempt at "local optimization"
when a broader approach could lead to better results.
For instance, using the OS's facilities to filter (six
requests, one each for "*.doc", "*.docx", and so on) could
actually end up being a lot faster.
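In D, one scan per pattern can be written with std.file.dirEntries, which accepts a glob pattern directly; note that the glob matching is done by the D runtime rather than necessarily by the remote server, so whether this wins over the network is exactly the kind of thing to measure. A sketch (the helper name listMatching is ours):

```d
import std.file : dirEntries, SpanMode;
import std.path : baseName;

// hypothetical helper: one shallow directory scan per glob pattern
string[] listMatching(string dir, string[] patterns)
{
    string[] names;
    foreach (pattern; patterns)
        foreach (entry; dirEntries(dir, pattern, SpanMode.shallow))
            names ~= baseName(entry.name);
    return names;
}

void main()
{
    // e.g. listMatching(".", ["*.doc", "*.docx", "*.xls",
    //                         "*.xlsx", "*.ppt", "*.pptx"]);
}
```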
If you could give more detail about what you are trying to
achieve, it may be possible to suggest a better approach.
The files are on a network drive, and doing a separate listing
for each of *.doc, *.docx, etc. will be more expensive than
getting the list of all the files at once and then processing
them accordingly.
Again, what are you trying to achieve?
Your statement is not necessarily true, for a myriad of reasons,
but it entirely depends on what you want to do.
I would reiterate Dennis Luehring's reply: why are you not
benchmarking? It seems like you are guessing at what the
problems are, which is hardly ever useful.
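std.datetime.stopwatch.benchmark makes that kind of measurement cheap. A sketch comparing the two styles from the original post on in-memory data (the sample data and function names are ours; only measurement against the real network share would settle the actual question):

```d
import std.datetime.stopwatch : benchmark;
import std.stdio : writefln;
import std.string : toLower;

// variant 1: the combined condition, converting on every comparison
size_t countCombined(string[] exts)
{
    size_t n;
    foreach (e; exts)
        if (e.toLower() == "doc" || e.toLower() == "docx")
            ++n;
    return n;
}

// variant 2: lowercase once per item, then compare
size_t countLowerOnce(string[] exts)
{
    size_t n;
    foreach (e; exts)
    {
        auto low = e.toLower();
        if (low == "doc" || low == "docx")
            ++n;
    }
    return n;
}

void main()
{
    auto exts = ["doc", "PPTX", "txt", "XLS", "jpeg", "DOCX"];
    auto results = benchmark!(
        () => countCombined(exts),
        () => countLowerOnce(exts)
    )(100_000);
    writefln("combined: %s, lower-once: %s", results[0], results[1]);
}
```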
One of the first rules of network optimization is to reduce the
amount of data transferred, which normally means filtering at
the server; the next is that coarse-grained is better than
fine-grained (BOCTAOE/L).