Hi All,

I am working on uploading RC1, which removes the metastore_db/temp folder
and the .iml files from the source structure.
Please consider RC0 cancelled; I will update this thread soon with RC1.
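A quick pre-flight check can confirm the stray .iml files and Derby metastore leftovers are gone from the unpacked source tree before the RC1 tarball goes up (a minimal sketch; the directory name is an example):

```shell
# list_leftovers DIR: print IDE project files and Derby metastore leftovers
# under DIR that should not ship in a source release.
list_leftovers() {
  find "$1" \( -name '*.iml' -o -name 'metastore_db' -o -name 'derby.log' \) -print
}

# Example (directory name from this thread; adjust to the unpacked tree):
# list_leftovers apache-flume-1.6.0-src
```

An empty result means the tree is clean.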

Thanks,
Rufus

On Sun, May 3, 2015 at 6:38 AM, Johny Rufus <[email protected]> wrote:

> Deleting the metastore_db directory gets me past the hive sink error
> rm -rf
> /Users/jrufus/dev/temp/apache-flume-1.6.0-src/flume-ng-sinks/flume-hive-sink/metastore_db
>
> metastore_db/temp is present in the source structure, which I presume
> should not be there.
>
> I will work with Hari on uploading a new jar that does not have the .iml
> files or the metastore_db/temp directory.
>
>
> Thanks,
> Rufus
>
> On Sun, May 3, 2015 at 12:17 AM, Ashish <[email protected]> wrote:
>
>> I have not been able to debug the ZK issue. I ran the build from within
>> the Kafka sink module and it worked, so it must be something else.
>>
>> So far I have not been able to get a clean build. I deleted the local .m2
>> repository to rule it out, and am now running into a Hive sink test
>> failure. Any suggestions on the Hive sink test error below?
>>
>> I was expecting a few votes by now. Folks, please do vote on the release.
>> The source also contains IntelliJ .iml files, which are not needed IMHO.
>>
>> Error trace below
>>
>>
>> Running org.apache.flume.sink.hive.TestHiveSink
>>
>> Tests run: 5, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 2.277
>> sec <<< FAILURE!
>>
>>
>> testSingleWriterSimplePartitionedTable(org.apache.flume.sink.hive.TestHiveSink)
>>  Time elapsed: 4 sec  <<< ERROR!
>>
>> java.sql.SQLException: Failed to create database 'metastore_db', see
>> the next exception for details.
>>
>> at
>> org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
>> Source)
>>
>> at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source)
>>
>> at org.apache.derby.impl.jdbc.Util.seeNextException(Unknown Source)
>>
>> at org.apache.derby.impl.jdbc.EmbedConnection.createDatabase(Unknown
>> Source)
>>
>> at org.apache.derby.impl.jdbc.EmbedConnection.<init>(Unknown Source)
>>
>> at org.apache.derby.impl.jdbc.EmbedConnection30.<init>(Unknown Source)
>>
>> at org.apache.derby.impl.jdbc.EmbedConnection40.<init>(Unknown Source)
>>
>> at org.apache.derby.jdbc.Driver40.getNewEmbedConnection(Unknown Source)
>>
>> at org.apache.derby.jdbc.InternalDriver.connect(Unknown Source)
>>
>> at org.apache.derby.jdbc.EmbeddedDriver.connect(Unknown Source)
>>
>> at
>> org.apache.hadoop.hive.metastore.txn.TxnDbUtil.getConnection(TxnDbUtil.java:216)
>>
>> at
>> org.apache.hadoop.hive.metastore.txn.TxnDbUtil.cleanDb(TxnDbUtil.java:129)
>>
>> at org.apache.flume.sink.hive.TestHiveSink.<init>(TestHiveSink.java:105)
>>
>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>>
>> at
>> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>>
>> at
>> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>>
>> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>>
>> at
>> org.junit.runners.BlockJUnit4ClassRunner.createTest(BlockJUnit4ClassRunner.java:187)
>>
>> at
>> org.junit.runners.BlockJUnit4ClassRunner$1.runReflectiveCall(BlockJUnit4ClassRunner.java:236)
>>
>> at
>> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>>
>> at
>> org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:233)
>>
>> at
>> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
>>
>> at
>> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
>>
>> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
>>
>> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
>>
>> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
>>
>> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
>>
>> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
>>
>> at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>>
>> at
>> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>>
>> at
>> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>>
>> at
>> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>>
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>
>> at java.lang.reflect.Method.invoke(Method.java:597)
>>
>> at
>> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>>
>> at
>> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>>
>> at
>> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>>
>> at
>> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>>
>> at
>> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
>>
>> Caused by: java.sql.SQLException: Failed to create database
>> 'metastore_db', see the next exception for details.
>>
>> at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
>> Source)
>>
>> ... 41 more
>>
>> Caused by: java.sql.SQLException: The database directory
>>
>> '/Users/ashishpaliwal/work/trash/apache-flume-1.6.0-src/flume-ng-sinks/flume-hive-sink/metastore_db'
>> exists. However, it does not contain the expected 'service.properties'
>> file. Perhaps Derby was brought down in the middle of creating this
>> database. You may want to delete this directory and try creating the
>> database again.
>>
>> at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown
>> Source)
>>
>> at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
>>
>> at
>> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown
>> Source)
>>
>> at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown
>> Source)
>>
>> ... 38 more
>>
>> Caused by: ERROR XBM0A: The database directory
>>
>> '/Users/ashishpaliwal/work/trash/apache-flume-1.6.0-src/flume-ng-sinks/flume-hive-sink/metastore_db'
>> exists. However, it does not contain the expected 'service.properties'
>> file. Perhaps Derby was brought down in the middle of creating this
>> database. You may want to delete this directory and try creating the
>> database again.
>>
>> at org.apache.derby.iapi.error.StandardException.newException(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.services.monitor.StorageFactoryService.vetService(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.services.monitor.StorageFactoryService.access$600(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.services.monitor.StorageFactoryService$9.run(Unknown
>> Source)
>>
>> at java.security.AccessController.doPrivileged(Native Method)
>>
>> at
>> org.apache.derby.impl.services.monitor.StorageFactoryService.createServiceRoot(Unknown
>> Source)
>>
>> at org.apache.derby.impl.services.monitor.BaseMonitor.bootService(Unknown
>> Source)
>>
>> at
>> org.apache.derby.impl.services.monitor.BaseMonitor.createPersistentService(Unknown
>> Source)
>>
>> at
>> org.apache.derby.iapi.services.monitor.Monitor.createPersistentService(Unknown
>> Source)
>>
>> ... 38 more
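The Derby message above points at the fix that worked upthread: delete the half-created metastore_db (and the derby.log Derby writes next to it) and let the tests recreate the database. A minimal sketch, with the module path taken from this thread:

```shell
# clean_derby_leftovers DIR: remove a half-created Derby metastore_db and the
# derby.log next to it, so the Hive sink tests can recreate the database
# from scratch on the next run.
clean_derby_leftovers() {
  rm -rf "$1/metastore_db" "$1/derby.log"
}

# Example (module path from the thread):
# clean_derby_leftovers flume-ng-sinks/flume-hive-sink
```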
>>
>> On Sat, May 2, 2015 at 8:26 AM, Ashish <[email protected]> wrote:
>> > No luck with Java 7 either.
>> >
>> > With Java 6, I am running into this error (on a MacBook Air with 4 GB
>> RAM):
>> >
>> > org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to
>> > zookeeper server within timeout: 1000
>> >
>> > at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:880)
>> >
>> > at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:98)
>> >
>> > at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:84)
>> >
>> > at
>> kafka.consumer.ZookeeperConsumerConnector.connectZk(ZookeeperConsumerConnector.scala:156)
>> >
>> > at
>> kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:114)
>> >
>> > at
>> kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:65)
>> >
>> > at
>> kafka.javaapi.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:67)
>> >
>> > at
>> kafka.consumer.Consumer$.createJavaConsumerConnector(ConsumerConnector.scala:100)
>> >
>> > at
>> kafka.consumer.Consumer.createJavaConsumerConnector(ConsumerConnector.scala)
>> >
>> > at
>> org.apache.flume.sink.kafka.util.KafkaConsumer.<init>(KafkaConsumer.java:51)
>> >
>> > at
>> org.apache.flume.sink.kafka.util.TestUtil.getKafkaConsumer(TestUtil.java:120)
>> >
>> > at org.apache.flume.sink.kafka.util.TestUtil.prepare(TestUtil.java:145)
>> >
>> > at
>> org.apache.flume.sink.kafka.TestKafkaSink.setup(TestKafkaSink.java:51)
>> >
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >
>> > at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >
>> > at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >
>> > at java.lang.reflect.Method.invoke(Method.java:597)
>> >
>> > at
>> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
>> >
>> > at
>> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
>> >
>> > at
>> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
>> >
>> > at
>> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
>> >
>> > at
>> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
>> >
>> > at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
>> >
>> > at
>> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:252)
>> >
>> > at
>> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:141)
>> >
>> > at
>> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:112)
>> >
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >
>> > at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>> >
>> > at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> >
>> > at java.lang.reflect.Method.invoke(Method.java:597)
>> >
>> > at
>> org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:189)
>> >
>> > at
>> org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:165)
>> >
>> > at
>> org.apache.maven.surefire.booter.ProviderFactory.invokeProvider(ProviderFactory.java:85)
>> >
>> > at
>> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:115)
>> >
>> > at
>> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:75)
>> >
>> >
>> >
>> > Results :
>> >
>> >
>> > Tests in error:
>> >
>> >   org.apache.flume.sink.kafka.TestKafkaSink: Unable to connect to
>> > zookeeper server within timeout: 1000
>> >
>> > On Thu, Apr 30, 2015 at 10:11 PM, Hari Shreedharan
>> > <[email protected]> wrote:
>> >> We don't really support or test against Java 8. I think there are
>> several
>> >> JIRAs open to fix Flume to work properly against Java 8, though I don't
>> >> think work has started on all of them yet. Can you test against Java 7?
>> >>
>> >> On Thu, Apr 30, 2015 at 5:11 AM, Ashish <[email protected]>
>> wrote:
>> >>
>> >>> I am having trouble with the build. I don't think it's specific to
>> >>> this release; I have seen it in the past.
>> >>>
>> >>> Building with jdk1.8.0_25.
>> >>> I am encountering the following error (tried 4-5 times) and so far
>> >>> have not been able to get a working build:
>> >>>
>> >>> log4j:WARN No appenders could be found for logger
>> >>> (org.apache.flume.channel.file.FileChannel).
>> >>> log4j:WARN Please initialize the log4j system properly.
>> >>> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig
>> >>> for more info.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> Attempting to shutdown background worker.
>> >>> src : [ 1106 ms ].
>> >>> sink : [ 2232 ms ].
>> >>> main : [ 2234 ms ].
>> >>> Max Queue size 428169
>> >>> Attempting to shutdown background worker.
>> >>> src : [ 1 ms ].
>> >>> sink : [ 6 ms ].
>> >>> src : [ 168 ms ].
>> >>> sink : [ 10 ms ].
>> >>> sink1 : [ 24 ms ].        {  items = 5000, attempts = 6 }
>> >>> src1 : [ 43 ms ].
>> >>> src2 : [ 43 ms ].
>> >>> sink3 : [ 105 ms ].        {  items = 2500, attempts = 3 }
>> >>> sink2 : [ 105 ms ].        {  items = 2500, attempts = 5 }
>> >>> Attempting to shutdown background worker.
>> >>> Exception in thread "src5" Exception in thread "src3"
>> >>> org.apache.flume.ChannelFullException: The channel has reached it's
>> >>> capacity. This might be the result of a sink on the channel having too
>> >>> low of batch size, a downstream system running slower than normal, or
>> >>> that the channel capacity is just too low.
>> >>> [channel=spillChannel-907f6e39-ffe1-4953-abc1-265fa339934a]
>> >>> at
>> >>>
>> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:465)
>> >>> at
>> >>>
>> org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.commitPutsToOverflow(SpillableMemoryChannel.java:490)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.putCommit(SpillableMemoryChannel.java:480)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.doCommit(SpillableMemoryChannel.java:401)
>> >>> at
>> >>>
>> org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel.transactionalPutN(TestSpillableMemoryChannel.java:157)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel.access$000(TestSpillableMemoryChannel.java:43)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel$1.run(TestSpillableMemoryChannel.java:230)
>> >>> Exception in thread "src2" org.apache.flume.ChannelFullException: The
>> >>> channel has reached it's capacity. This might be the result of a sink
>> >>> on the channel having too low of batch size, a downstream system
>> >>> running slower than normal, or that the channel capacity is just too
>> >>> low. [channel=spillChannel-907f6e39-ffe1-4953-abc1-265fa339934a]
>> >>> at
>> >>>
>> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:465)
>> >>> at
>> >>>
>> org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.commitPutsToOverflow(SpillableMemoryChannel.java:490)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.putCommit(SpillableMemoryChannel.java:480)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.doCommit(SpillableMemoryChannel.java:401)
>> >>> at
>> >>>
>> org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel.transactionalPutN(TestSpillableMemoryChannel.java:157)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel.access$000(TestSpillableMemoryChannel.java:43)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel$1.run(TestSpillableMemoryChannel.java:230)
>> >>> org.apache.flume.ChannelFullException: The channel has reached it's
>> >>> capacity. This might be the result of a sink on the channel having too
>> >>> low of batch size, a downstream system running slower than normal, or
>> >>> that the channel capacity is just too low.
>> >>> [channel=spillChannel-907f6e39-ffe1-4953-abc1-265fa339934a]
>> >>> at
>> >>>
>> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:465)
>> >>> at
>> >>>
>> org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.commitPutsToOverflow(SpillableMemoryChannel.java:490)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.putCommit(SpillableMemoryChannel.java:480)
>> >>> at
>> >>>
>> org.apache.flume.channel.SpillableMemoryChannel$SpillableMemoryTransaction.doCommit(SpillableMemoryChannel.java:401)
>> >>> at
>> >>>
>> org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel.transactionalPutN(TestSpillableMemoryChannel.java:157)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel.access$000(TestSpillableMemoryChannel.java:43)
>> >>> at
>> >>>
>> org.apache.flume.channel.TestSpillableMemoryChannel$1.run(TestSpillableMemoryChannel.java:230)
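For reference, outside of this unit test a ChannelFullException like the one above is usually addressed by giving the channel more headroom. A hedged sketch of an agent configuration (the agent and channel names are examples; property names per the Flume user guide's spillable memory channel section):

```properties
# Example only: raise the in-memory and overflow capacities so bursts spill
# to disk instead of failing the put.
agent.channels.spill.type = SPILLABLEMEMORY
agent.channels.spill.memoryCapacity = 100000
agent.channels.spill.overflowCapacity = 1000000
agent.channels.spill.checkpointDir = /var/lib/flume/checkpoint
agent.channels.spill.dataDirs = /var/lib/flume/data
```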
>> >>>
>> >>> On Thu, Apr 30, 2015 at 1:56 AM, Johny Rufus <[email protected]>
>> wrote:
>> >>> > Hi All,
>> >>> >
>> >>> > This is the ninth release for Apache Flume as a top-level project,
>> >>> > version 1.6.0. We are voting on release candidate RC0.
>> >>> >
>> >>> > It fixes the following issues:
>> >>> >
>> >>> >
>> >>>
>> https://git-wip-us.apache.org/repos/asf?p=flume.git;a=blob;f=CHANGELOG;h=774aced731de1e49043c179a722e55feb69f1b29;hb=493976e20dfe14b0b611c92f3e160d4336d10af2
>> >>> >
>> >>> > *** Please cast your vote within the next 72 hours ***
>> >>> >
>> >>> > The tarball (*.tar.gz), signature (*.asc), and checksums (*.md5,
>> *.sha1)
>> >>> > for the source and binary artifacts can be found here:
>> >>> >
>> >>> > http://people.apache.org/~hshreedharan/apache-flume-1.6.0-rc0/
>> >>> >
>> >>> > Maven staging repo:
>> >>> >
>> >>> > https://repository.apache.org/content/repositories/orgapacheflume-1013/
>> >>> >
>> >>> > The tag to be voted on:
>> >>> >
>> >>> >
>> >>>
>> https://git-wip-us.apache.org/repos/asf?p=flume.git;a=commit;h=30af6e90603d476aa058d2736c2ae154acff4af9
>> >>> >
>> >>> >
>> >>> > Flume's KEYS file containing PGP keys we use to sign the release:
>> >>> >   http://www.apache.org/dist/flume/KEYS
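For anyone verifying before voting, a minimal checksum check along these lines may help (the file name is an example from the RC listing; the signature would additionally be checked with gpg --verify after importing the KEYS file):

```shell
# verify_md5 FILE: compare the digest stored in FILE.md5 against a locally
# computed one; handles both a bare hex digest and "digest  filename" formats.
verify_md5() {
  want=$(awk '{print $1}' "$1.md5")
  got=$(md5sum "$1" | awk '{print $1}')
  [ "$want" = "$got" ] && echo "$1: OK"
}

# Example (file name from the RC listing):
# verify_md5 apache-flume-1.6.0-src.tar.gz
```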
>> >>> >
>> >>> >
>> >>> > Thanks,
>> >>> > Rufus
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> thanks
>> >>> ashish
>> >>>
>> >>> Blog: http://www.ashishpaliwal.com/blog
>> >>> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>> >>>
>> >
>> >
>> >
>>
>>
>>
>>
>
>
