Re: [Vo]:Why Scientists Must Share Their Failures
Yes Jed, and generally, the more advanced the technology, the narrower the range of success becomes. This failure-sharing idea might work if we were designing plows or wagons, but even something as basic as the internal combustion engine is too complex, and has too narrow a range of success, for collecting data on failures to be a success :)

John Berry

On Tue, Apr 18, 2017 at 2:42 AM, Jed Rothwell wrote:

> John Berry wrote:
>
>> It might have limited application, but mostly, I don't see it; too often
>> success and failure are just an inch apart.
>
> Yes! That is an important point. Unfortunately, failure is a more likely
> outcome. There are countless ways to make an experiment fail, but only a
> narrow range of ways to make it work.
>
> A minor change to an experiment makes it go off the rails and no one
> notices. The example I often point to was Shockley's initial refusal to
> look at zone refining purification. If he had continued to refuse, I doubt
> Bell Labs could have made practical transistors when they did.
>
> Another recent example is the use of computer neural networks in
> artificial intelligence. Going back to the 1950s, people had an intuitive
> feeling this should work. It resembles actual biological brains, which we
> know are capable of intelligence. But little progress was made, and the
> approach was ignored or even denigrated during the "AI winter" eras.
> Finally, about 10 years ago, the method was revived and greatly improved by
> using multi-level networks, where one network feeds results into another.
> This, finally, produced outstanding results, unlike anything previously
> seen. This is the basis for the program that beat one of the world's best
> go players, and it is the basis for remarkable recent improvements in
> Google Translate. See:
>
> https://blog.google/products/translate/found-translation-more-accurate-fluent-sentences-google-translate/
>
> This progress also came about because computer hardware is so much faster
> and cheaper. There are many examples of experiments that failed because
> they were done before their time. They worked later on after better
> instruments were devised.
>
> - Jed
[Vo]:LENR INFO - SHORTISSIMO!
http://egooutpeters.blogspot.ro/2017/04/apr-17-2017-lenr-info-shortissimo.html peter -- Dr. Peter Gluck Cluj, Romania http://egooutpeters.blogspot.com
Re: [Vo]:Another article about sloppy research and academic corruption
Good eye, Nigel. ... almost calls for an Abbott and Costello shtick ;-)

Nigel Dyer wrote:

At this point I perhaps ought to point out my own article in Nature Genetics. If you have access to the full article, you will find it says that a Nature Genetics paper a year earlier is substantially flawed, because they had based their conclusions on what is in fact an artefact in the data.

http://www.nature.com/ng/journal/v48/n1/full/ng.3392.html

The original authors would have spotted the artefact if they had looked at the raw data. If you don't look at the data, the paper appears fine, which is why it got through peer review. You can't expect the unpaid peer reviewers to load and reprocess the raw data. I only checked it because the paper's conclusions conflicted with the results that we were getting.
Re: [Vo]:Why Scientists Must Share Their Failures
In fact, at my school (ESIEE), multilevel neural networks were in fashion (Yann LeCun, an alumnus of the school, was a reference). What was limiting was compute power (we were thinking about specialized hardware mimicking life)... Expert systems were more applicable, like natural language processing by deterministic methods (semantic graphs; a colleague of mine worked on that in the 90s, until the 2000 bubble)... With low computation power, testing was hard too, and small networks don't work, so it was hard for the approach to become popular. I eventually came to think that statistical methods, like those behind Google Translate, were the future.

The return of the AI fashion came as a surprise to me, and strangely I rediscovered Yann's name. Don't tell me it is new... it is renewed.

It reminds me of Jed's booklet on the future, which says LENR will make robotics evolve, because much of the engineering will be simplified, helping to focus on AI.

2017-04-17 16:42 GMT+02:00 Jed Rothwell:

> John Berry wrote:
>
>> It might have limited application, but mostly, I don't see it; too often
>> success and failure are just an inch apart.
>
> Yes! That is an important point. Unfortunately, failure is a more likely
> outcome. There are countless ways to make an experiment fail, but only a
> narrow range of ways to make it work.
>
> A minor change to an experiment makes it go off the rails and no one
> notices. The example I often point to was Shockley's initial refusal to
> look at zone refining purification. If he had continued to refuse, I doubt
> Bell Labs could have made practical transistors when they did.
>
> Another recent example is the use of computer neural networks in
> artificial intelligence. Going back to the 1950s, people had an intuitive
> feeling this should work. It resembles actual biological brains, which we
> know are capable of intelligence. But little progress was made, and the
> approach was ignored or even denigrated during the "AI winter" eras.
> Finally, about 10 years ago, the method was revived and greatly improved by
> using multi-level networks, where one network feeds results into another.
> This, finally, produced outstanding results, unlike anything previously
> seen. This is the basis for the program that beat one of the world's best
> go players, and it is the basis for remarkable recent improvements in
> Google Translate. See:
>
> https://blog.google/products/translate/found-translation-more-accurate-fluent-sentences-google-translate/
>
> This progress also came about because computer hardware is so much faster
> and cheaper. There are many examples of experiments that failed because
> they were done before their time. They worked later on after better
> instruments were devised.
>
> - Jed
Re: [Vo]:Another article about sloppy research and academic corruption
At this point I perhaps ought to point out my own article in Nature Genetics. If you have access to the full article, you will find it says that a Nature Genetics paper a year earlier is substantially flawed, because they had based their conclusions on what is in fact an artefact in the data.

http://www.nature.com/ng/journal/v48/n1/full/ng.3392.html

The original authors would have spotted the artefact if they had looked at the raw data. If you don't look at the data, the paper appears fine, which is why it got through peer review. You can't expect the unpaid peer reviewers to load and reprocess the raw data. I only checked it because the paper's conclusions conflicted with the results that we were getting.

Nigel

On 17/04/2017 16:04, Jed Rothwell wrote:

"The Impostor Cell Line That Set Back Breast Cancer Research

It’s but one example of a major problem in cancer science."

http://www.slate.com/articles/technology/future_tense/2017/04/the_impostor_cell_line_that_set_back_breast_cancer_research.html

A reader comment:

"If people knew what researchers were really like they would be stunned. Their personality type is very ruthless and dishonest work and conclusion is the norm. I work at a famous university medical center and we have a few of the 'stars' here. Most of the time it's the postdocs who do the work and the researcher is nowhere near it. Their name is on the paper but that's about it. The pressure external and from themselves to publish and succeed is insane."

- Jed
[Vo]:Another article about sloppy research and academic corruption
"The Impostor Cell Line That Set Back Breast Cancer Research

It’s but one example of a major problem in cancer science."

http://www.slate.com/articles/technology/future_tense/2017/04/the_impostor_cell_line_that_set_back_breast_cancer_research.html

A reader comment:

"If people knew what researchers were really like they would be stunned. Their personality type is very ruthless and dishonest work and conclusion is the norm. I work at a famous university medical center and we have a few of the 'stars' here. Most of the time it's the postdocs who do the work and the researcher is nowhere near it. Their name is on the paper but that's about it. The pressure external and from themselves to publish and succeed is insane."

- Jed
Re: [Vo]:Why Scientists Must Share Their Failures
John Berry wrote:

> It might have limited application, but mostly, I don't see it; too often
> success and failure are just an inch apart.

Yes! That is an important point. Unfortunately, failure is a more likely outcome. There are countless ways to make an experiment fail, but only a narrow range of ways to make it work.

A minor change to an experiment makes it go off the rails and no one notices. The example I often point to was Shockley's initial refusal to look at zone refining purification. If he had continued to refuse, I doubt Bell Labs could have made practical transistors when they did.

Another recent example is the use of computer neural networks in artificial intelligence. Going back to the 1950s, people had an intuitive feeling this should work. It resembles actual biological brains, which we know are capable of intelligence. But little progress was made, and the approach was ignored or even denigrated during the "AI winter" eras. Finally, about 10 years ago, the method was revived and greatly improved by using multi-level networks, where one network feeds results into another. This, finally, produced outstanding results, unlike anything previously seen. This is the basis for the program that beat one of the world's best go players, and it is the basis for remarkable recent improvements in Google Translate. See:

https://blog.google/products/translate/found-translation-more-accurate-fluent-sentences-google-translate/

This progress also came about because computer hardware is so much faster and cheaper. There are many examples of experiments that failed because they were done before their time. They worked later on after better instruments were devised.

- Jed
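[Editor's note: Jed's description of multi-level networks, where one network feeds its results into another, can be illustrated with a minimal sketch. This is not the architecture of AlphaGo or Google Translate; it is a toy two-layer feed-forward pass with made-up weights, chosen only to show how the output of one layer becomes the input of the next.]

```python
import math

def layer(inputs, weights, biases):
    # One fully connected layer: each output is a weighted sum of
    # all inputs plus a bias, passed through a sigmoid nonlinearity.
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))
    return outputs

# Hypothetical fixed weights, chosen only for illustration;
# a real network would learn these by training.
W1 = [[0.5, -0.6], [0.9, 0.2], [-0.3, 0.8]]  # 2 inputs -> 3 hidden units
b1 = [0.1, -0.2, 0.0]
W2 = [[1.0, -1.0, 0.5]]                      # 3 hidden units -> 1 output
b2 = [0.0]

x = [0.7, 0.3]
hidden = layer(x, W1, b1)       # first level computes its results...
output = layer(hidden, W2, b2)  # ...which feed directly into the next
```

Stacking more such levels is what "multi-level" (deep) means; the practical obstacle Jed and Alain both describe was that training many stacked levels was infeasible until hardware and methods improved.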