I turned on debug logging; however, it seems to only confirm the query and
files included in the job.

17/02/27 02:25:33 DEBUG mr.PcapJob: Executing query protocol_==_6 on
timerange February 27, 2017 12:00:00 AM UTC to February 27, 2017 2:25:31 AM
UTC
17/02/27 02:25:33 DEBUG mr.PcapJob: Including files
hdfs://node1:8020/apps/metron/pcap/pcap_pcap_1488154851638515000_0_pcap-9-1488153894,hdfs://node1:8020/apps/metron/pcap/pcap_pcap_1488154982405815000_0_pcap-9-1488153894,hdfs://node1:8020/apps/metron/pcap/pcap_pcap_1488155123884215000_0_pcap-9-1488153894,hdfs://node1:8020/apps/metron/pcap/pcap_pcap_1488155239282312000_0_pcap-9-1488153894,hdfs://node1:8020/apps/metron/pcap/pcap_pcap_1488155395170343000_0_pcap-9-1488153894
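For what it's worth, the "February 27, 2017 12:00:00 AM UTC" start bound in that debug line is just the `-st "20170227"` value parsed with the `-df "yyyyMMdd"` pattern. Here's a minimal sketch of that conversion using only the JDK — `StartTimeSketch` is a hypothetical name, not the actual pcap_query.sh implementation, and the UTC zone is inferred from the log output rather than from the CLI source:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.TimeZone;

public class StartTimeSketch {
    // Parse a -st value with the -df pattern into epoch milliseconds.
    // UTC matches the "February 27, 2017 12:00:00 AM UTC" bound printed in
    // the PcapJob debug output; whether the real CLI uses UTC or the local
    // zone is an assumption here.
    public static long parseStart(String st, String df) throws ParseException {
        SimpleDateFormat fmt = new SimpleDateFormat(df);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.parse(st).getTime();
    }

    public static void main(String[] args) throws ParseException {
        // "20170227" -> 2017-02-27T00:00:00Z -> 1488153600000 ms
        System.out.println(parseStart("20170227", "yyyyMMdd"));
    }
}
```

So the time range itself looks consistent with the files being included; the query simply matches nothing.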

It feels like I'm missing something stupid.


On Sun, Feb 26, 2017 at 8:53 PM, Kyle Richardson <[email protected]>
wrote:

> Ok... Seems there is a dark cloud hanging over me. Trying to test
> METRON-743 (PR#467) on quick-dev. Followed Casey's instructions to a tee
> but am getting no results. I did have to stop HBase to free up enough
> resources for the job to run but didn't think that would be an issue. Any
> ideas?
>
> [root@node1 ~]# hadoop fs -ls -R /apps/metron/pcap
> -rw-r--r--   1 storm hadoop     458718 2017-02-27 00:23
> /apps/metron/pcap/pcap_pcap_1488154851638515000_0_pcap-9-1488153894
> -rw-r--r--   1 storm hadoop     472857 2017-02-27 00:25
> /apps/metron/pcap/pcap_pcap_1488154982405815000_0_pcap-9-1488153894
> -rw-r--r--   1 storm hadoop     451711 2017-02-27 00:28
> /apps/metron/pcap/pcap_pcap_1488155123884215000_0_pcap-9-1488153894
> -rw-r--r--   1 storm hadoop     447685 2017-02-27 00:30
> /apps/metron/pcap/pcap_pcap_1488155239282312000_0_pcap-9-1488153894
> -rw-r--r--   1 storm hadoop     432695 2017-02-27 00:49
> /apps/metron/pcap/pcap_pcap_1488155395170343000_0_pcap-9-1488153894
> [root@node1 ~]# /usr/metron/0.3.1/bin/pcap_inspector.sh -i
> /apps/metron/pcap/pcap_pcap_1488155123884215000_0_pcap-9-1488153894 -n 10
> TS: February 27, 2017 12:25:23 AM UTC,ip_src_addr:
> 192.168.1.128,ip_src_port: 54126,ip_dst_addr: 192.168.1.11,ip_dst_port:
> 8080,protocol: 6
> TS: February 27, 2017 12:25:24 AM UTC,ip_src_addr:
> 192.168.1.128,ip_src_port: 53212,ip_dst_addr: 192.168.1.11,ip_dst_port:
> 8080,protocol: 6
> TS: February 27, 2017 12:25:24 AM UTC,ip_src_addr:
> 192.168.1.11,ip_src_port: 8080,ip_dst_addr: 192.168.1.128,ip_dst_port:
> 53212,protocol: 6
> TS: February 27, 2017 12:25:24 AM UTC,ip_src_addr:
> 192.168.1.11,ip_src_port: 8080,ip_dst_addr: 192.168.1.128,ip_dst_port:
> 53212,protocol: 6
> TS: February 27, 2017 12:25:24 AM UTC,ip_src_addr:
> 192.168.1.128,ip_src_port: 53212,ip_dst_addr: 192.168.1.11,ip_dst_port:
> 8080,protocol: 6
> TS: February 27, 2017 12:25:25 AM UTC,ip_src_addr:
> 192.168.1.128,ip_src_port: 53212,ip_dst_addr: 192.168.1.11,ip_dst_port:
> 8080,protocol: 6
> TS: February 27, 2017 12:25:25 AM UTC,ip_src_addr:
> 192.168.1.11,ip_src_port: 8080,ip_dst_addr: 192.168.1.128,ip_dst_port:
> 53212,protocol: 6
> TS: February 27, 2017 12:25:25 AM UTC,ip_src_addr:
> 192.168.1.11,ip_src_port: 8080,ip_dst_addr: 192.168.1.128,ip_dst_port:
> 53212,protocol: 6
> TS: February 27, 2017 12:25:25 AM UTC,ip_src_addr:
> 192.168.1.128,ip_src_port: 53212,ip_dst_addr: 192.168.1.11,ip_dst_port:
> 8080,protocol: 6
> TS: February 27, 2017 12:25:25 AM UTC,ip_src_addr:
> 192.168.1.128,ip_src_port: 53212,ip_dst_addr: 192.168.1.11,ip_dst_port:
> 8080,protocol: 6
> [root@node1 ~]# /usr/metron/0.3.1/bin/pcap_query.sh query -st "20170227"
> -df "yyyyMMdd" --query "protocol == '6'" -rpf 500
> 17/02/27 01:42:11 INFO impl.TimelineClientImpl: Timeline service address:
> http://node1:8188/ws/v1/timeline/
> 17/02/27 01:42:11 INFO client.RMProxy: Connecting to ResourceManager at
> node1/127.0.0.1:8050
> 17/02/27 01:42:12 INFO client.AHSProxy: Connecting to Application History
> server at node1/127.0.0.1:10200
> 17/02/27 01:42:14 INFO input.FileInputFormat: Total input paths to process
> : 5
> 17/02/27 01:42:15 INFO mapreduce.JobSubmitter: number of splits:5
> 17/02/27 01:42:16 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1488159654301_0001
> 17/02/27 01:42:17 INFO impl.YarnClientImpl: Submitted application
> application_1488159654301_0001
> 17/02/27 01:42:17 INFO mapreduce.Job: The url to track the job:
> http://node1:8088/proxy/application_1488159654301_0001/
> 17/02/27 01:42:17 INFO mapreduce.Job: Running job: job_1488159654301_0001
> 17/02/27 01:42:26 INFO mapreduce.Job: Job job_1488159654301_0001 running
> in uber mode : false
> 17/02/27 01:42:26 INFO mapreduce.Job:  map 0% reduce 0%
> 17/02/27 01:42:56 INFO mapreduce.Job:  map 40% reduce 0%
> 17/02/27 01:43:10 INFO mapreduce.Job:  map 60% reduce 0%
> 17/02/27 01:43:11 INFO mapreduce.Job:  map 80% reduce 0%
> 17/02/27 01:43:18 INFO mapreduce.Job:  map 100% reduce 0%
> 17/02/27 01:43:19 INFO mapreduce.Job:  map 100% reduce 10%
> 17/02/27 01:43:24 INFO mapreduce.Job:  map 100% reduce 30%
> 17/02/27 01:43:29 INFO mapreduce.Job:  map 100% reduce 40%
> 17/02/27 01:43:30 INFO mapreduce.Job:  map 100% reduce 50%
> 17/02/27 01:43:37 INFO mapreduce.Job:  map 100% reduce 60%
> 17/02/27 01:43:38 INFO mapreduce.Job:  map 100% reduce 70%
> 17/02/27 01:43:45 INFO mapreduce.Job:  map 100% reduce 90%
> 17/02/27 01:43:51 INFO mapreduce.Job:  map 100% reduce 100%
> 17/02/27 01:43:53 INFO mapreduce.Job: Job job_1488159654301_0001 completed
> successfully
> 17/02/27 01:43:54 INFO mapreduce.Job: Counters: 49
> File System Counters
> FILE: Number of bytes read=60
> FILE: Number of bytes written=2123630
> FILE: Number of read operations=0
> FILE: Number of large read operations=0
> FILE: Number of write operations=0
> HDFS: Number of bytes read=2264411
> HDFS: Number of bytes written=950
> HDFS: Number of read operations=50
> HDFS: Number of large read operations=0
> HDFS: Number of write operations=20
> Job Counters
> Launched map tasks=5
> Launched reduce tasks=10
> Data-local map tasks=5
> Total time spent by all maps in occupied slots (ms)=262119
> Total time spent by all reduces in occupied slots (ms)=185817
> Total time spent by all map tasks (ms)=87373
> Total time spent by all reduce tasks (ms)=61939
> Total vcore-milliseconds taken by all map tasks=87373
> Total vcore-milliseconds taken by all reduce tasks=61939
> Total megabyte-milliseconds taken by all map tasks=107381417
> Total megabyte-milliseconds taken by all reduce tasks=76123031
> Map-Reduce Framework
> Map input records=4855
> Map output records=0
> Map output bytes=0
> Map output materialized bytes=300
> Input split bytes=745
> Combine input records=0
> Combine output records=0
> Reduce input groups=0
> Reduce shuffle bytes=300
> Reduce input records=0
> Reduce output records=0
> Spilled Records=0
> Shuffled Maps =50
> Failed Shuffles=0
> Merged Map outputs=50
> GC time elapsed (ms)=6188
> CPU time spent (ms)=72320
> Physical memory (bytes) snapshot=5344145408
> Virtual memory (bytes) snapshot=45443485696
> Total committed heap usage (bytes)=4214226944
> Shuffle Errors
> BAD_ID=0
> CONNECTION=0
> IO_ERROR=0
> WRONG_LENGTH=0
> WRONG_MAP=0
> WRONG_REDUCE=0
> File Input Format Counters
> Bytes Read=2263666
> File Output Format Counters
> Bytes Written=950
> No results returned.
>
>
> On Sun, Feb 26, 2017 at 9:54 AM, Kyle Richardson <
> [email protected]> wrote:
>
>> Thanks! You guys are awesome! It's nice to know I'm not completely
>> crazy ;).
>>
>> I definitely needed another set of eyes on this. Special thanks to Otto
>> for identifying the root cause so quickly.
>>
>> I can confirm all of the integration tests pass for me with this PR.
>> Working on running through the rest of Casey's test plan now.
>>
>> -Kyle
>>
>> On Feb 25, 2017, at 11:57 PM, Casey Stella <[email protected]> wrote:
>>
>> METRON-743 (https://github.com/apache/incubator-metron/pull/467) for
>> reference.
>>
>> On Sat, Feb 25, 2017 at 11:51 PM, Casey Stella <[email protected]>
>> wrote:
>>
>>> Hmm, that's a very good catch if it's the issue.  I was able to verify
>>> that if you botch the sort order of the files, it fails.
>>>
>>> Would you mind sorting the files on PcapJob line 199 by filename?
>>> Something like Collections.sort(files, (o1,o2) ->
>>> o1.getName().compareTo(o2.getName()));
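Casey's one-liner can be sanity-checked with plain JDK strings. This is a sketch, not the actual PcapJob code (the real code sorts Hadoop Path objects; `PcapFileSort` is a hypothetical name): because the embedded nanosecond timestamps in Metron's pcap file names are fixed width, a lexicographic sort by name is also chronological for these files.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PcapFileSort {
    // Sort pcap file names lexicographically, mirroring Casey's suggested
    // Collections.sort on Path.getName(). Because the embedded timestamps
    // (e.g. 1488154851638515000) are fixed-width nanosecond values, a plain
    // string sort also puts the files in chronological order.
    public static List<String> sortByName(List<String> fileNames) {
        Collections.sort(fileNames);
        return fileNames;
    }

    public static void main(String[] args) {
        // File names taken from the HDFS listing earlier in the thread,
        // deliberately shuffled to simulate an unordered listing.
        List<String> files = new java.util.ArrayList<>(Arrays.asList(
                "pcap_pcap_1488155123884215000_0_pcap-9-1488153894",
                "pcap_pcap_1488154851638515000_0_pcap-9-1488153894",
                "pcap_pcap_1488155395170343000_0_pcap-9-1488153894",
                "pcap_pcap_1488154982405815000_0_pcap-9-1488153894",
                "pcap_pcap_1488155239282312000_0_pcap-9-1488153894"));
        for (String f : sortByName(files)) {
            System.out.println(f);
        }
    }
}
```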
>>>
>>> I'm going to submit a PR regardless because we should own the
>>> assumptions here, but I suspect that for the HDFS filesystem this works as
>>> expected.  That being said, it's better to be safe than sorry.
>>>
>>> Casey
>>>
>>> On Sat, Feb 25, 2017 at 11:35 PM, Otto Fowler <[email protected]>
>>> wrote:
>>>
>>>> /**
>>>>  * List the statuses and block locations of the files in the given path.
>>>>  * Does not guarantee to return the iterator that traverses statuses
>>>>  * of the files in a sorted order.
>>>>  * <pre>
>>>>  * If the path is a directory,
>>>>  *   if recursive is false, returns files in the directory;
>>>>  *   if recursive is true, return files in the subtree rooted at the
>>>> path.
>>>>  * If the path is a file, return the file's status and block locations.
>>>>  * </pre>
>>>>  * @param f is the path
>>>>  * @param recursive if the subdirectories need to be traversed
>>>> recursively
>>>>  *
>>>>  * @return an iterator that traverses statuses of the files
>>>>  *
>>>>  * @throws FileNotFoundException when the path does not exist;
>>>>  * @throws IOException see specific implementation
>>>>  */
>>>> public RemoteIterator<LocatedFileStatus> listFiles(
>>>>
>>>>
>>>> So if we depend on this returning something sorted, it is only working
>>>> accidentally?
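The javadoc above is explicit that no sorted order is guaranteed, and the same caveat applies to plain java.io directory listings, so a consumer that needs determinism has to sort explicitly. A self-contained sketch of the pattern with the local filesystem — Hadoop's API isn't used here, so treat it as an illustration, not PcapJob's actual code:

```java
import java.io.File;
import java.io.IOException;
import java.util.Arrays;

public class SortedListing {
    // java.io.File.listFiles(), like Hadoop's FileSystem.listFiles(), makes
    // no guarantee about ordering, so callers that need a deterministic
    // order must sort explicitly before processing.
    public static File[] listSorted(File dir) {
        File[] files = dir.listFiles();
        if (files == null) {
            return new File[0];
        }
        Arrays.sort(files, (a, b) -> a.getName().compareTo(b.getName()));
        return files;
    }

    public static void main(String[] args) throws IOException {
        File dir = java.nio.file.Files.createTempDirectory("pcap-sort").toFile();
        for (String name : new String[] {"c.pcap", "a.pcap", "b.pcap"}) {
            new File(dir, name).createNewFile();
        }
        // Prints a.pcap, b.pcap, c.pcap regardless of listing order.
        for (File f : listSorted(dir)) {
            System.out.println(f.getName());
        }
    }
}
```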
>>>>
>>>>
>>>> On February 25, 2017 at 23:10:59, Otto Fowler ([email protected])
>>>> wrote:
>>>>
>>>> https://issues.apache.org/jira/browse/HADOOP-12009  makes it seem like
>>>> there is no order
>>>>
>>>>
>>>> On February 25, 2017 at 23:06:37, Otto Fowler ([email protected])
>>>> wrote:
>>>>
>>>> Maybe the Hadoop local FileSystem returns different things from
>>>> listFiles() on different platforms? That would be something to check?
>>>>
>>>> Sorry that is all I got right now
>>>>
>>>>
>>>>
>>>> On February 25, 2017 at 22:57:49, Otto Fowler ([email protected])
>>>> wrote:
>>>>
>>>> There are also some if (Log.isDebugEnabled()) outputs, so maybe try
>>>> changing the logging level and running just this test?
>>>>
>>>>
>>>>
>>>> On February 25, 2017 at 22:39:02, Otto Fowler ([email protected])
>>>> wrote:
>>>>
>>>> There are multiple “tests” within the test, with different parameters.
>>>> If you look at where this is breaking, it is at
>>>>
>>>> {
>>>>   //make sure I get them all.
>>>>   Iterable<byte[]> results =
>>>>           job.query(new Path(outDir.getAbsolutePath())
>>>>                   , new Path(queryDir.getAbsolutePath())
>>>>                   , getTimestamp(0, pcapEntries)
>>>>                   , getTimestamp(pcapEntries.size()-1, pcapEntries) + 1
>>>>                   , 10
>>>>                   , new EnumMap<>(Constants.Fields.class)
>>>>                   , new Configuration()
>>>>                   , FileSystem.get(new Configuration())
>>>>                   , new FixedPcapFilter.Configurator()
>>>>           );
>>>>   assertInOrder(results);
>>>>   Assert.assertEquals(Iterables.size(results), pcapEntries.size());
>>>>
>>>>
>>>>
>>>> Which is the 7th test job run against the data.  I am not familiar with
>>>> this test or code, but that has to be significant.
>>>>
>>>> Maybe you should enable debug logging and print out the results, and we
>>>> can see a pattern there?
>>>>
>>>> On February 25, 2017 at 22:19:00, Kyle Richardson (
>>>> [email protected])
>>>> wrote:
>>>>
>>>> mvn integration-test
>>>>
>>>> Although I have also tried...
>>>> mvn clean install && mvn integration-test
>>>> mvn clean package && mvn integration-test
>>>> mvn install && mvn surefire-test@unit-tests && mvn
>>>> surefire-test@integration-tests
>>>>
>>>> -Kyle
>>>>
>>>> On Feb 25, 2017, at 8:34 PM, Otto Fowler <[email protected]>
>>>> wrote:
>>>>
>>>> What command are you using to build?
>>>>
>>>>
>>>>
>>>> On February 25, 2017 at 17:40:20, Kyle Richardson (
>>>> [email protected])
>>>> wrote:
>>>>
>>>> Tried with Oracle JDK and got the same result. I went as far as trying to
>>>> run it through the debugger but am not that familiar with this part of the
>>>> code. The timestamps of the packets are definitely not coming back in the
>>>> expected order, but I'm not sure why. Could it be related to something
>>>> filesystem specific?
>>>>
>>>> Apologies if I'm just being dense but I'd really like to understand why
>>>> this consistently fails on some platforms and not others.
>>>>
>>>> -Kyle
>>>>
>>>> > On Feb 25, 2017, at 9:07 AM, Kyle Richardson <
>>>> [email protected]>
>>>> wrote:
>>>> >
>>>> > Ok, I've tried this so many times I may be going crazy, so thought I'd
>>>> ask the community for a sanity check.
>>>> >
>>>> > I'm trying to verify RC5 and I keep running into the same integration
>>>> test failures but only on my Fedora (24 and 25) and CentOS 7 systems. It
>>>> passes fine on my Macbook.
>>>> >
>>>> > It always fails on the PcapTopologyIntegrationTest (test results pasted
>>>> > below). Anyone have any ideas? I'm using the exact same version of
>>>> > maven in all cases (v3.3.9). The only difference I can think of is the
>>>> > Fedora/CentOS systems are using OpenJDK whereas the Macbook is running
>>>> > Sun/Oracle JDK.
>>>> >
>>>> > -------------------------------------------------------
>>>> > T E S T S
>>>> > -------------------------------------------------------
>>>> > Running org.apache.metron.pcap.integration.PcapTopologyIntegrationTest
>>>> > Formatting using clusterid: testClusterID
>>>> > Formatting using clusterid: testClusterID
>>>> > Sent pcap data: 20
>>>> > Wrote 20 to kafka
>>>> > Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 42.011 sec <<< FAILURE! - in org.apache.metron.pcap.integration.PcapTopologyIntegrationTest
>>>> >
>>>> testTimestampInPacket(org.apache.metron.pcap.integration.PcapTopologyIntegrationTest)  Time elapsed: 26.968 sec <<< FAILURE!
>>>> > java.lang.AssertionError
>>>> > at org.junit.Assert.fail(Assert.java:86)
>>>> > at org.junit.Assert.assertTrue(Assert.java:41)
>>>> > at org.junit.Assert.assertTrue(Assert.java:52)
>>>> > at org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.assertInOrder(PcapTopologyIntegrationTest.java:537)
>>>> > at org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.testTopology(PcapTopologyIntegrationTest.java:383)
>>>> > at org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.testTimestampInPacket(PcapTopologyIntegrationTest.java:135)
>>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> > at java.lang.reflect.Method.invoke(Method.java:498)
>>>> > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>>>> > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>>>> > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>>>> > at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>>>> > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>>>> > at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>>>> > at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>>>> > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>>>> > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>>>> > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>>>> > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>>>> > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>>>> > at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>>>> > at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>>>> > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>>>> > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>>>> >
>>>> >
>>>> testTimestampInKey(org.apache.metron.pcap.integration.PcapTopologyIntegrationTest)  Time elapsed: 15.038 sec <<< FAILURE!
>>>> > java.lang.AssertionError
>>>> > at org.junit.Assert.fail(Assert.java:86)
>>>> > at org.junit.Assert.assertTrue(Assert.java:41)
>>>> > at org.junit.Assert.assertTrue(Assert.java:52)
>>>> > at org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.assertInOrder(PcapTopologyIntegrationTest.java:537)
>>>> > at org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.testTopology(PcapTopologyIntegrationTest.java:383)
>>>> > at org.apache.metron.pcap.integration.PcapTopologyIntegrationTest.testTimestampInKey(PcapTopologyIntegrationTest.java:152)
>>>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>> > at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>>> > at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>> > at java.lang.reflect.Method.invoke(Method.java:498)
>>>> > at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>>>> > at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>>>> > at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>>>> > at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>>>> > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>>>> > at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>>>> > at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>>>> > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>>>> > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>>>> > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>>>> > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>>>> > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>>>> > at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:283)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:173)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>>>> > at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:128)
>>>> > at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:203)
>>>> > at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:155)
>>>> > at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
>>>> >
>>>> >
>>>> > Results :
>>>> >
>>>> > Failed tests:
>>>> >
>>>> PcapTopologyIntegrationTest.testTimestampInKey:152->testTopology:383->assertInOrder:537 null
>>>> >
>>>> PcapTopologyIntegrationTest.testTimestampInPacket:135->testTopology:383->assertInOrder:537 null
>>>> >
>>>> >
>>>> >
>>>> > Tests run: 2, Failures: 2, Errors: 0, Skipped: 0
>>>> >
>>>> > [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.18:test (integration-tests) on project metron-pcap-backend: There are test failures.
>>>> > [ERROR]
>>>> > [ERROR] Please refer to /home/kyle/projects/metron-fork/metron-platform/metron-pcap-backend/target/surefire-reports for the individual test results.
>>>> > [ERROR] -> [Help 1]
>>>> > [ERROR]
>>>> > [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
>>>> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>>> > [ERROR]
>>>> > [ERROR] For more information about the errors and possible solutions, please read the following articles:
>>>> > [ERROR] [Help 1]
>>>> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>>>> > [ERROR]
>>>> > [ERROR] After correcting the problems, you can resume the build with the command
>>>> > [ERROR] mvn <goals> -rf :metron-pcap-backend
>>>> >
>>>> >
>>>>
>>>
>>>
>>
>
