[ansible-project] Updating Windows 10

2020-07-25 Thread Kiran Patil
Disable Windows Update on the OS side if you plan to drive updates from Ansible.

If you check on the OS side, the output most likely reflects the last scan.

So if you scan on the OS side after the Ansible deployment, it will report that no
updates are needed.

The result also depends on which update categories you selected on the Ansible side.

A reboot is typically needed after deployment.

So test those things.
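
For reference, a minimal sketch of the Ansible side (win_updates options per the
module docs; the category selection itself is an assumption, adjust to your policy):

- name: Install selected update categories, rebooting if required
  win_updates:
    category_names:
      - SecurityUpdates
      - CriticalUpdates
    state: installed
    reboot: yes    # covers the typically-needed post-deployment reboot
  register: update_result

- debug:
    var: update_result.found_update_count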




Re: Request permission to use memcached logo

2020-07-14 Thread Kiran Patil
Thanks a lot.

-- Kiran P.

On Tuesday, July 14, 2020 at 2:24:54 PM UTC-7, Dormando wrote:
>
> Thanks, 
>
> I'll allow this for a one-time use. 
>
> Thanks, 
> -Dormando 
>
> On Tue, 14 Jul 2020, Kiran Patil wrote: 
>
> > 
> > 
> > Hi Dormando, 
> > 
> > 
> > Sorry, I was not able to spend time on that patch due to an internal 
> priority change; my focus shifted in the meantime. 
> > 
> > 
> > Now we have assigned a resource (my co-worker, Sridhar Samudrala) to work 
> on this patch and address the outstanding comments. 
> > 
> > 
> > Hence, can we use the logo in the meantime as a one-time exception? 
> > 
> > 
> > Thanks, 
> > 
> > -- Kiran P. 
> > 
> > 
> > 
> > On Thursday, July 9, 2020 at 2:20:22 PM UTC-7, Dormando wrote: 
> >   Hey, 
> > 
> >   I think we never got that patch merged? I'd love to allow this but 
> I'm 
> >   worried people might get confused since it's not something you can 
> do with 
> >   the released version of memcached? 
> > 
> >   Thanks, 
> >   -Dormando 
> > 
> >   On Thu, 9 Jul 2020, Kiran Patil wrote: 
> > 
> >   > 
> >   > Hello Dormando, 
> >   > 
> >   >   
> >   > 
> >   > I’ll be discussing support for memcached with Application 
> Device Queues (ADQ) in a presentation at the Netdev virtual conference in 
> August. 
> >   > 
> >   > I would like to request permission to use the memcached logo for 
> the Netdev presentation, on the intel.com website, and in other collateral 
> >   and presentations. 
> >   > 
> >   > If you are OK giving us permission to use the memcached logo for 
> the Netdev presentation, on intel.com, and in other related collateral and 
> >   presentations, can you please reply to this request with your permission? 
> >   > 
> >   > 
> >   > Thank you! 
> >   > 
> >   >   
> >   > 
> >   > Kiran Patil 
> >   > 
> >   > Recommended Email: kiran...@intel.com 
> >   > 
> >   > Intel Corp. 
> >   > 


Re: Request permission to use memcached logo

2020-07-14 Thread Kiran Patil



Hi Dormando,


Sorry, I was not able to spend time on that patch due to an internal priority 
change; my focus shifted in the meantime.


Now we have assigned a resource (my co-worker, Sridhar Samudrala) to work on 
this patch and address the outstanding comments.


Hence, can we use the logo in the meantime as a one-time exception?


Thanks,

-- Kiran P.



On Thursday, July 9, 2020 at 2:20:22 PM UTC-7, Dormando wrote:
>
> Hey, 
>
> I think we never got that patch merged? I'd love to allow this but I'm 
> worried people might get confused since it's not something you can do with 
> the released version of memcached? 
>
> Thanks, 
> -Dormando 
>
> On Thu, 9 Jul 2020, Kiran Patil wrote: 
>
> > 
> > Hello Dormando, 
> > 
> >   
> > 
> > I’ll be discussing support for memcached with Application Device Queues 
> (ADQ) in a presentation at the Netdev virtual conference in August. 
> > 
> > I would like to request permission to use the memcached logo for the 
> Netdev presentation, on the intel.com website, and in other collateral and 
> presentations. 
> > 
> > If you are OK giving us permission to use the memcached logo for the 
> Netdev presentation, on intel.com, and in other related collateral and 
> presentations, can you please reply to this request with your permission? 
> > 
> > 
> > Thank you! 
> > 
> >   
> > 
> > Kiran Patil 
> > 
> > Recommended Email: kiran...@intel.com  
> > 
> > Intel Corp. 
> > 


Request permission to use memcached logo

2020-07-09 Thread Kiran Patil


Hello Dormando,

 

I’ll be discussing support for memcached with Application Device Queues 
(ADQ) in a presentation at the Netdev virtual conference in August.

I would like to request permission to use the memcached logo for the Netdev 
presentation, on the intel.com website, and in other collateral and presentations.

If you are OK giving us permission to use the logo for the Netdev presentation, 
on intel.com, and in other related collateral and presentations, can you please 
reply to this request with your permission?


Thank you!

 

Kiran Patil

Recommended Email: kiran.pa...@intel.com

Intel Corp.



Mifos X automated installation not working

2019-06-13 Thread Kiran Patil
Hi,

I tried to install Mifos X on Ubuntu 18.04 using the link below.

https://mifosforge.jira.com/wiki/spaces/docs/pages/85622932/Mifos+X+Automated+Installation+on+Debian+Ubuntu

I am getting the error below.

$ sudo apt-key adv --recv-keys --keyserver pgp.mit.edu B6069EA209539BFF
Executing: /tmp/apt-key-gpghome.qXH2HB3GgA/gpg.1.sh --recv-keys --keyserver
pgp.mit.edu B6069EA209539BFF

gpg: keyserver receive failed: No data

$ echo deb http://packages.mifosx.in stable main | sudo tee
/etc/apt/sources.list.d/mifosx.list
deb http://packages.mifosx.in stable main

$ sudo apt-get update
Hit:1 http://ppa.launchpad.net/cloud-images/docker-k8s1.9/ubuntu bionic
InRelease
Hit:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease

Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu bionic-updates
InRelease
Hit:4 http://ppa.launchpad.net/cloud-images/eks-01.10.0/ubuntu bionic
InRelease
Hit:5 http://security.ubuntu.com/ubuntu bionic-security InRelease

Hit:6 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu bionic-backports
InRelease
Err:7 http://packages.mifosx.in stable InRelease

  Could not resolve 'packages.mifosx.in'
Reading package lists... Done
W: Failed to fetch http://packages.mifosx.in/dists/stable/InRelease  Could
not resolve 'packages.mifosx.in'
W: Some index files failed to download. They have been ignored, or old ones
used instead.
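
Two different failures seem to be mixed here. The keyserver fetch can be retried
against another server; a sketch, assuming the key is also mirrored on
keyserver.ubuntu.com:

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
    --recv-keys B6069EA209539BFF

The "Could not resolve 'packages.mifosx.in'" part, however, is a DNS failure for
the repository host itself, so there is no client-side fix for that.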

Thanks,
Kiran.


Fineract integration with billing software and NFC card payment

2019-06-05 Thread Kiran Patil
Hello,

I would like to know whether the following integrations with Fineract are
available, or whether someone is working on them as open-source code.

1) Integration with any billing software, such as an open-source subscription
billing & payment platform

2) Is there support for NFC card payments?

Let us know; we would like to work with you and contribute to that project.

We would like to build a solid open-source solution.

Also, if none is available, please guide me on how to bootstrap it, i.e.,
where to start building such a solution.

Any help is appreciated.

Regards,
Kiran.


Re: fineract with moov ach

2019-06-05 Thread Kiran Patil
Dear James,

Our CEO asked me to show a demo, and I showcased one without ACH but with
Mojaloop.

But our requirement is to handle bulk transactions using ACH, with Fineract
as the core banking solution.

If anybody in the community is interested in helping bootstrap the ACH
integration, please let us know.

We would like to collaborate and contribute to that project in true
free-software manner.

Thanks and regards,
Kiran.

On Sat, May 4, 2019 at 8:30 PM James Dailey  wrote:

> Kiran -
>
> There are at least two efforts to connect to payment gateways.  Please see
> the Fineract wiki and search the listserv for discussion of payments,
> gateways, & mojaloop.
>
> The current active project with mojaloop hub involves creating the
> appropriate pieces on the Fineract side (both Fineract 1.x and fineract-CN)
> to handle payments of the real-time ACH variety using APIs rather than old
> school ISO8583 protocol.
>
> Please also say more about your concept and interest in this demo.
>
> I'm familiar w moov but not up to date on their latest.  Could you spell
> out some specific use cases?
>
> It is important that integrations follow good security practices to avoid
> risks. I'd like to see the current work progress to full release.
>
> Thanks
>
> James
>
>
>
> On Sat, May 4, 2019, 12:47 AM Kiran Patil  wrote:
>
>> Hello,
>>
>> I would like to know if a demo setup can be done with any of the
>> ACHs with Fineract, to see transactions happening between them?
>>
>> Ex: Fineract(Bank1) <=> ACH <=> Fineract (Bank2)
>>
>> Please send me the required links/guides/videos to setup a demo.
>>
>> Thanks,
>> Kiran.
>>
>> On Sat, May 4, 2019 at 12:14 PM Kiran Patil 
>> wrote:
>>
>>> Hello,
>>>
>>> I would like to know if anybody has done integration with moov ach
>>> <https://github.com/moov-io/ach>.
>>>
>>> Any demo setup done by the community ?
>>>
>>> Also there is an ongoing feature request about Fineract integration with
>>> mojaloop (which acts as an ACH). Is it true that mojaloop provides an ACH
>>> solution?
>>>
>>> Pardon me for naive questions, since I am new to banking sector.
>>>
>>> Thanks,
>>> Kiran.
>>>
>>


Re: fineract with moov ach

2019-05-05 Thread Kiran Patil
Dear James,

I have been asked to do the demo setup as soon as possible; hence I have no
details yet regarding specific use cases.

After the demo, I may get details regarding specific use cases.

Hence, I need help from community members who can share demo setup details.

Thanks
Kiran


On Sat, May 4, 2019 at 8:30 PM James Dailey  wrote:

> Kiran -
>
> There are at least two efforts to connect to payment gateways.  Please see
> the Fineract wiki and search the listserv for discussion of payments,
> gateways, & mojaloop.
>
> The current active project with mojaloop hub involves creating the
> appropriate pieces on the Fineract side (both Fineract 1.x and fineract-CN)
> to handle payments of the real-time ACH variety using APIs rather than old
> school ISO8583 protocol.
>
> Please also say more about your concept and interest in this demo.
>
> I'm familiar w moov but not up to date on their latest.  Could you spell
> out some specific use cases?
>
> It is important that integrations follow good security practices to avoid
> risks. I'd like to see the current work progress to full release.
>
> Thanks
>
> James
>
>
>
> On Sat, May 4, 2019, 12:47 AM Kiran Patil  wrote:
>
>> Hello,
>>
>> I would like to know if a demo setup can be done with any of the
>> ACHs with Fineract, to see transactions happening between them?
>>
>> Ex: Fineract(Bank1) <=> ACH <=> Fineract (Bank2)
>>
>> Please send me the required links/guides/videos to setup a demo.
>>
>> Thanks,
>> Kiran.
>>
>> On Sat, May 4, 2019 at 12:14 PM Kiran Patil 
>> wrote:
>>
>>> Hello,
>>>
>>> I would like to know if anybody has done integration with moov ach
>>> <https://github.com/moov-io/ach>.
>>>
>>> Any demo setup done by the community ?
>>>
>>> Also there is an ongoing feature request about Fineract integration with
>>> mojaloop (which acts as an ACH). Is it true that mojaloop provides an ACH
>>> solution?
>>>
>>> Pardon me for naive questions, since I am new to banking sector.
>>>
>>> Thanks,
>>> Kiran.
>>>
>>


Re: fineract with moov ach

2019-05-04 Thread Kiran Patil
Hello,

I would like to know if a demo setup can be done with any of the ACHs with
Fineract, to see transactions happening between them?

Ex: Fineract(Bank1) <=> ACH <=> Fineract (Bank2)

Please send me the required links/guides/videos to set up a demo.

Thanks,
Kiran.

On Sat, May 4, 2019 at 12:14 PM Kiran Patil  wrote:

> Hello,
>
> I would like to know if anybody has done integration with moov ach
> <https://github.com/moov-io/ach>.
>
> Any demo setup done by the community ?
>
> Also there is an ongoing feature request about Fineract integration with
> mojaloop (which acts as an ACH). Is it true that mojaloop provides an ACH
> solution?
>
> Pardon me for naive questions, since I am new to the banking sector. 
>
> Thanks,
> Kiran.
>


fineract with moov ach

2019-05-04 Thread Kiran Patil
Hello,

I would like to know if anybody has done an integration with moov ach
(https://github.com/moov-io/ach).

Has any demo setup been done by the community?

Also there is an ongoing feature request about Fineract integration with
mojaloop (which acts as an ACH). Is it true that mojaloop provides an ACH
solution?

Pardon me for naive questions, since I am new to the banking sector.

Thanks,
Kiran.


[Rails] [Rails 6 alpha] Sprockets::FileNotFound

2019-01-02 Thread Kiran Patil
Steps to reproduce 
   
   1. Create Rails 6 app
   2. yarn add onsenui
   3. Add the lines below to "app/assets/stylesheets/application.css"

 *= require onsenui/css/onsenui
 *= require onsenui/css/onsen-css-components


   4. Create a sample view as below (the Onsen UI / ERB markup was stripped
   by the list archive; all that survives is a "Sign in" button label)

   5. Access that page

Expected behavior 

Page should render without error.
Actual behavior 

$ rails s
=> Booting Puma
=> Rails 6.0.0.alpha application starting in development
=> Run rails server --help for more startup options
Puma starting in single mode...

   - Version 3.12.0 (ruby 2.5.3-p105), codename: Llamas in Pajamas
   - Min threads: 5, max threads: 5
   - Environment: development
   - Listening on tcp://0.0.0.0:3000
   Use Ctrl-C to stop
   Started GET "/pages/auth" for 127.0.0.1 at 2019-01-02 12:17:55 +0530
   (5.7ms) SELECT sqlite_version(*)
   Processing by PagesController#auth as HTML
   Rendering pages/auth.html.erb within layouts/application
   Rendered pages/auth.html.erb within layouts/application (Duration: 2.9ms 
   | Allocations: 350)
   Completed 500 Internal Server Error in 377ms (ActiveRecord: 0.0ms | 
   Allocations: 287951)

ActionView::Template::Error (couldn't find file 'onsenui/css/onsenui' with 
type 'text/css'
Checked in these paths:
/home/smitha/Documents/kiran/fooapp/app/assets/config
/home/smitha/Documents/kiran/fooapp/app/assets/images
/home/smitha/Documents/kiran/fooapp/app/assets/stylesheets
/home/smitha/Documents/kiran/rails/actioncable/app/assets/javascripts
/home/smitha/Documents/kiran/rails/activestorage/app/assets/javascripts
/home/smitha/Documents/kiran/rails/actionview/app/assets/javascripts
/home/smitha/.rvm/gems/ruby-2.5.3/gems/turbolinks-source-5.2.0/lib/assets/javascripts):
5: <%= csrf_meta_tags %>
6: <%= csp_meta_tag %>
7:
8: <%= stylesheet_link_tag 'application', media: 'all', 
'data-turbolinks-track': 'reload' %>
9: <%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' 
%>
10: 
11:

app/assets/stylesheets/application.css:14
app/views/layouts/application.html.erb:8
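
The checked paths above do not include the node_modules directory that yarn
populated, which would explain the lookup failure. A possible fix sketch (an
assumption on my part, not a confirmed Rails 6 default):

# config/initializers/assets.rb
# Let Sprockets resolve requires such as "onsenui/css/onsenui"
# out of the yarn-managed node_modules directory.
Rails.application.config.assets.paths << Rails.root.join('node_modules')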



[tesseract-ocr] tesseract-ocr vs SwiftOCR

2017-08-16 Thread Kiran Patil
Hi,

Has anybody tried to compare SwiftOCR with tesseract-ocr (LSTM engine)?

https://github.com/garnele007/SwiftOCR

Please post your findings here.

Regards,
Kiran.



[tesseract-ocr] Re: Extracting content from specific areas such as Account Number or Cheque Number from a Cheque

2017-08-14 Thread Kiran Patil
Dear Karthick,

Did you resolve your issue extracting the account number, check number, and
so on?

May I know the steps you took?

If you used any other solution, please let me know.

Regards,
Kiran.

On Thursday, 6 March 2014 14:06:26 UTC+5:30, Karthick S wrote:
>
> Hi, I am looking at building an app on Android which can take a picture of 
> a bank check and then pull out account number as well as the check number 
> from it. I do not want all text in the scanned image to be OCRed (which 
> seems to be happening with many apps doing this). The same goes with a 
> Driving License - I would like to pull the DL Number and the name of the 
> person holding the license from the scanned image of a DL. Can anyone help 
> on this?
>
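
For what it is worth, one common approach is to crop the fixed regions first and
OCR only those. A rough sketch (the crop geometry and file names are placeholders
for a specific cheque layout):

# Cut out the account-number box, then OCR it as one line of digits.
convert cheque.png -crop 400x60+120+500 +repage acct.png
tesseract acct.png acct --psm 7 digits   # psm 7 = treat image as a single text line
cat acct.txt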



Re: [ansible-project] ERR : msg: src (or content) and dest are required

2016-08-05 Thread Kiran Patil
Thanks Scott, I figured it out too.


I wanted to update the thread, but the post was under moderator review, so I could not.

On Friday, August 5, 2016 at 7:23:40 AM UTC-7, Scott Sturdivant wrote:
>
> dest:/etc/...  --> dest=/etc/...
>
> On Fri, Aug 5, 2016 at 7:12 AM Kiran Patil <kirandp...@gmail.com 
> > wrote:
>
>>
>> Hello,
>>
>> Can someone help on this ?
>>
>>
>> I am getting err
>>
>>
>> failed: [xxx01] => {"failed": true}
>> msg: src (or content) and dest are required
>>
>> FATAL: all hosts have already failed -- aborting
>>
>>
>>
>>
>>
>> - name:  Set puppet.conf
>>   copy: src=/etc/ansible/playbooks/files/puppet-Prod.conf 
>> dest:/etc/puppetlabs/puppet/puppet.conf mode=0644 force
>> - name: Run puppet agent in test mode
>>   command: puppet agent -t
>>
>>
>> I am using ansible to install puppet client here. 
>>
>> It fails while trying to overwrite the file puppet.conf.
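
For the archive, the working version of that task looks like this (a sketch
assuming force=yes was the intent of the trailing "force"):

- name:  Set puppet.conf
  copy: src=/etc/ansible/playbooks/files/puppet-Prod.conf dest=/etc/puppetlabs/puppet/puppet.conf mode=0644 force=yes
- name: Run puppet agent in test mode
  command: puppet agent -t

The error came from "dest:" using a colon where the key=value syntax needs
"dest=", so the copy module never received a dest argument.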



[ansible-project] ERR : msg: src (or content) and dest are required

2016-08-05 Thread Kiran Patil

Hello,

Can someone help on this ?


I am getting err


failed: [xxx01] => {"failed": true}
msg: src (or content) and dest are required

FATAL: all hosts have already failed -- aborting





- name:  Set puppet.conf
  copy: src=/etc/ansible/playbooks/files/puppet-Prod.conf 
dest:/etc/puppetlabs/puppet/puppet.conf mode=0644 force
- name: Run puppet agent in test mode
  command: puppet agent -t


I am using ansible to install puppet client here. 

It fails while trying to overwrite the file puppet.conf.



Re: sample data in ITSM

2015-09-12 Thread Kiran Patil
You can ignore the sample data.

On Sat 12 Sep, 2015 17:51 Sandeep Pandey  wrote:

> Dear List,
>
> I am going to install a fresh ITSM version 9.0 in a production environment. My
> query is about the sample data which comes with the installer. Do we require
> the sample data when doing a fresh installation? It will unnecessarily
> consume approx. 15 user licenses.
>
> What are the advantages/disadvantages of the sample data once ITSM is
> operational?
>
> Thanks.
>
> BR,
> Sandy


Re: Remedy 9 implementation/Upgrade

2015-06-25 Thread Kiran Patil
Hi,

It's about the complete platform, including ITSM.

Regards

On Thu 25 Jun, 2015 12:18 Misi Mladoniczky m...@rrr.se wrote:

 Hi,

 Are you talking about the AR System or ITSM?

 Best Regards - Misi, RRR AB, http://rrr.se

  Hi All,
 
  Anyone is implementing/Upgrading Remedy 9 for their customer.
 
  1. What is major driving factor for Remedy 9 implementation/upgrade in
  competitive toolset?
 
  2. How is the customer experience on Remedy 9 interface and reporting??
 
  Thanks in advance.
 
  Regards
  Kiran Patil
  Remedy Consultant
 
 


Remedy 9 implementation/Upgrade

2015-06-25 Thread Kiran Patil
Hi All,

Is anyone implementing/upgrading Remedy 9 for their customers?

1. What is the major driving factor for a Remedy 9 implementation/upgrade in
a competitive toolset?

2. How is the customer experience with the Remedy 9 interface and reporting?

Thanks in advance.

Regards
Kiran Patil
Remedy Consultant



[Gluster-devel] Gluster Benchmark Kit

2015-04-27 Thread Kiran Patil
Hi,

I came across the Gluster Benchmark Kit while reading the [Gluster-users]
"Disastrous performance with rsync to mounted Gluster volume" thread.

http://54.82.237.211/gluster-benchmark/gluster-bench-README

http://54.82.237.211/gluster-benchmark

The Kit includes tools such as iozone, smallfile and fio.

The Kit is not documented, and we need to baseline this tool for Gluster
benchmark testing.

The community would benefit from adopting and extending it per their needs,
and the kit should be hosted on GitHub.

The init.sh script in the Kit handles only the XFS filesystem; it could be
extended to BTRFS and ZFS.
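
Something along these lines, as a sketch (the device, mount point, and init.sh's
variable names here are assumptions):

case "$FSTYPE" in
  xfs)   mkfs.xfs -f /dev/sdb && mount /dev/sdb /bricks/b1 ;;
  btrfs) mkfs.btrfs -f /dev/sdb && mount /dev/sdb /bricks/b1 ;;
  zfs)   zpool create -f tank /dev/sdb &&
         zfs create -o mountpoint=/bricks/b1 tank/b1 ;;
esac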

Thanks to Ben Turner for sharing it.

Kiran.



Re: [Gluster-devel] Got a slogan idea?

2015-04-01 Thread Kiran Patil
GlusterFS: Simple Scale-out Storage

GlusterFS: Simplest Scale-out Storage

GlusterFS: Storage Made For Scale

GlusterFS: Storage For Scale

On Wed, Apr 1, 2015 at 6:38 PM, Kaleb S. KEITHLEY kkeit...@redhat.com
wrote:

 On 04/01/2015 09:01 AM, Jeff Darcy wrote:

 What I am saying is that if you have a slogan idea for Gluster, I want
 to hear it. You can reply on list or send it to me directly. I will
 collect all the proposals (yours and the ones that Red Hat comes up
 with) and circle back around for community discussion in about a month
 or so.


 Personally I don't like any of these all that much, but maybe they'll
 get someone else thinking.

 GlusterFS: your data, your way

 GlusterFS: any data, any servers, any protocol

 GlusterFS: scale-out storage for everyone

 GlusterFS: software defined storage for everyone

 GlusterFS: the Swiss Army Knife of storage



 GlusterFS: Storage Made Simple

 or

 GlusterFS: Scale-out Storage Made Simple

 --

 Kaleb



[Gluster-users] Gluster monitoring using PCP

2015-03-17 Thread Kiran Patil
Hi,

I installed PCP (http://www.pcp.io/man/man1/pmdagluster.1.html) on the gluster
nodes and enabled the metrics by running ./Install from
/var/lib/pcp/pmdas/gluster/.

I don't see any data for the bricks using the pminfo command, as shown below.

# pminfo -f gluster.brick



gluster.brick.latency.fgetxattr.count
No value(s) available!

gluster.brick.latency.fgetxattr.avg
No value(s) available!

gluster.brick.latency.fgetxattr.max
No value(s) available!

gluster.brick.latency.fgetxattr.min
No value(s) available!

gluster.brick.latency.fentrylk.count
No value(s) available!

gluster.brick.latency.fentrylk.avg
No value(s) available!

gluster.brick.latency.fentrylk.max
No value(s) available!

gluster.brick.latency.fentrylk.min
No value(s) available!

gluster.brick.latency.fallocate.count
No value(s) available!

gluster.brick.latency.fallocate.avg
No value(s) available!

gluster.brick.latency.fallocate.max
No value(s) available!

gluster.brick.latency.fallocate.min
No value(s) available!

gluster.brick.latency.entrylk.count
No value(s) available!
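
One thing I still need to rule out: as far as I understand, pmdagluster scrapes
gluster's own profiling output, so the brick metrics stay empty until profiling
is enabled on the volume. A quick check, with "myvol" as a placeholder volume name:

# Brick latency metrics require volume profiling to be running.
gluster volume profile myvol start
gluster volume profile myvol info   # should list per-FOP latencies
pminfo -f gluster.brick.latency.fgetxattr.avg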

Thanks,
Kiran.


...



[Gluster-devel] [Gluster-users] [Gluster-infra] Revamping GlusterFS website for expanded participation

2015-03-02 Thread Kiran Patil
Hi,

The more testing information and tools are available to the community, the
easier it is for anyone to run the respective test suite and participate
in QA.

The goal is to uncover issues across different workloads, scenarios, platform
architectures, and so on, and to build a rock-solid Gluster QA
community/team.

Test case/suite contributions should share the same priority as code
contributions.

There is no QA page linked from the index page; please add one and make it
visible in the navigation section of the index/main page.

A sample Gluster QA page might look like the one below.


Testing Gluster (Gluster QA)
=

Regression Testing : Run Gluster Regression Testsuite (
http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
)

Functional Testing: Run the upcoming distaf (
https://github.com/msvbhat/distaf)

Performance Testing:
http://www.gluster.org/community/documentation/index.php/Performance_Testing

Load Testing: TBD

Stress Testing: TBD

Tools used by community: TBD

Thanks,
Kiran.

On Mon, Mar 2, 2015 at 10:51 PM, Tuomas Kuosmanen tig...@redhat.com wrote:

 TL;DR: This started in gluster-infra list as an effort to improve
 the Gluster website and to connect it better with the Gluster
 community. If you are interested in this, read below, and pitch
 in with your insights. We should continue the discussion on
 gluster-infra list to avoid fragmentation, but I wanted to cc -users
 and -devel to invite more people into the brainstorming.

 The goal: To better connect the website with the Gluster community

 I think we should list what generally goes on in our community,
 and also what things we would like to improve. Then we can try to
 figure out how the website could support that effort better.

 There might be many approaches to this, and everyone of you can help
 by pitching in ideas and things that you think could help Gluster
 community. Creating a smoother path for new people to install
 Gluster and get familiar with the community would be good.

 Here's what I've been thinking about, from the point of view of
 someone new to Gluster. Feel free to improve the list:

 I've been thinking of four main areas here: Gluster (the software),
 the Community, News, and Events.

 The Software (installation, learning about it)
 ==

 ## Discover Gluster

   * Explanation of what Gluster is and what are the strengths
 * Some good showcases that clearly show the strong points
   where Gluster makes sense
 * Some introduction videos / screencasts to explain the
   principles and concepts
 * Maybe a link to a QA forum's newbie section where
   we can help with questions we did not know about ourselves?

 ## Installing Gluster

   * Instructions to get initial installation going easily,
 this should be really straightforward and sensible
 default settings should work.

 ## Learning more about Gluster

   * Detailed admin guides / howto-documents that explain
 what kind of setups make sense for different uses etc..
   * Community QA and mailing lists and irc for more questions /
 and also to get involved
   * Consultancies and companies offering support

 ## Improving Gluster

   * Reporting Bugs
   * Sending Patches
   * Companies offering Gluster development services


 The Community
 =

 ## Communication and Hangouts

   * Explanation and the idea about open community development
   * IRC channels, Mailing Lists, etc?
   * How to get involved


 Events
 ==

   * I don't know much about what's happening in this area,
 please fill in details :-)

 News
 

   * Security alerts
   * New releases
   * Development blog / newsletter?

 Do you have other ideas? Are there other things that the
 community does that I am not aware of? Let me know. Or if
 you disagree with something, explain why I am wrong in my
 thinking :-)

 Tom, you also likely have some ideas you want to drive forward?

 //Tuomas



Re: [Gluster-devel] Feature Freeze for 3.7

2015-02-27 Thread Kiran Patil
Namaste Vijay,

There is no mention of including ZFS snapshot feature support in GlusterFS
v3.7.

Any updates on it?

Thanks,
Kiran.

On Fri, Feb 27, 2015 at 4:34 PM, Vijay Bellur vbel...@redhat.com wrote:

 On 02/23/2015 05:52 PM, Vijay Bellur wrote:

 Hi All,

 It is going to be a busy week for us considering the upcoming 3.7
 feature freeze over the weekend. To facilitate timely reviews, I have
 setup an etherpad [1] that will contain the list of patches related to
 features which you would like to be reviewed before then. Can owners of
 respective features update the etherpad so that reviewers know the
 patches that need to be picked up on a priority basis? After doing this,
 please be available in #gluster-dev on freenode so that any realtime
 queries regarding your patch can be answered.


  Also it would be great if more of us can review any/all patches
 mentioned in the etherpad this week. Even if you are not familiar with
 the entire patchset, providing an update on subset of files that you
 have been able to review would vastly help in reducing the review burden
 on core reviewers. If you have not been active on gerrit for code
 reviews but intend being so, there is no better time than this week to
 get started!


 Thanks to all who helped with reviews this week. I propose that we push
 out the feature freeze date by slightly more than a week. Some rationale
 for that:

 1. Accommodate more reviews - several patchsets are still being refreshed.

 2. Every major feature merged requires a refresh from other features as
 there are common areas of code in glusterd etc. being modified.

 3. Some feature owners do seem to require a bit more time to make the
 feature more complete.

 4. Regression infrastructure was flaky last week but thanks to Justin we
 seem to be doing good now.

 I propose Monday, March 9th as the new date for feature freeze. If there
 are no objections, I will update the 3.7 planning page [2] and  I don't
 expect other milestones for 3.7 to be affected because of this variation in
 schedule.

 I have also added a reviewer tag for all features listed in the review
 etherpad at [1]. If you are reviewing a feature or interested in
 contributing to reviews, please add yourself as a reviewer for the feature
 in the etherpad so that we can collaborate better.


 Thanks,
 Vijay

  [1] https://public.pad.fsfe.org/p/review-for-glusterfs-3.7

  [2] http://www.gluster.org/community/documentation/index.php/Planning37





[Gluster-users] Functional Test Suite of Glusterfs

2015-02-26 Thread Kiran Patil
Hi,

Currently, I am aware of the Gluster regression test suite.

I would like to know if there is a test suite that covers the
functionality of GlusterFS.

If not, what options do we have to come up with a functional
test suite?

The only option we have right now is to use the Gluster regression
framework to build a functional test suite.

Let me know your thoughts.

Thanks,
Kiran.




Re: [Gluster-users] Functional Test Suite of Glusterfs

2015-02-26 Thread Kiran Patil
I thought of first pulling some functional tests out of the regression suite.
Do you agree with this approach, or should we write functional tests
without borrowing from the regression tests at all?

If you agree, then here are some of the test cases I have noted down to
include as part of functional testing; please feel free to add any I
missed. (A minimal test skeleton follows the list below.)

volume-status.t = Create a volume and mount using FUSE and NFS and
verify different volume status options
volume.t = Create a volume with 8 bricks and add bricks and check the
brick count and remove bricks and check the brick count
normal.t = Create a volume, add brick, rebalance volume, replace
brick, remove brick
bug-1004744.t = Test case: After a rebalance fix-layout, check if the
rebalance status command
#displays the appropriate message at the CLI.
bug-1022055.t = verify volume log rotate command
bug-1176062.t = volume replace brick commit force
bug-770655.t = Set stripe block size on distribute-replicate,
replicate, distribute, stripe, distributed stripe, distributed stripe
replicate volume
bug-839595.t = verify cluster.server-quorum-ratio

bug-1030208.t = Hardlink test
bug-454.t = Symlink test
bug-893338.t = Symbolic test

bug-1161092-nfs-acls.t = nfs acls test
bug-822830.t = nfs rpc test
bug-867253.t = nfs version 3 mount test

bug-982174.t = Check if incorrect log-level keywords does not crash the CLI

mount-options.disabled = test all the options available to see if the
mount succeeds with those options
#or not
mount.t = mount using FUSE and NFS with different options


quota-anon-fd-nfs.t = Quota test
bug-1038598.t = quota hardlimit and softlimit test
quota.t = Quota test extensive
afr-quota-xattr-mdata-heal.t = afr quota test
bug-1023974.t = quota limit test
bug-1040423.t = quota: filter glusterfs quota xattrs
bug-1049323.t = quota: unmount quota aux mount for volume stop
bug-1087198.t = tests the logging of the quota in the bricks after
reaching soft
## limit of the configured limit.
bug-1104692.t = quota another limit test

file-snapshot.t = gluster block snapshot on qcow2 format
volume-snapshot.t = gluster vol snapshots
uss.t = snapshots with uss enabled
bug-1049834.t = snap-max-hard-limit test
bug-1087203.t = snapshot autodelete test
bug-1109889.t = snapshot should be deactivated when created test
bug-1112559.t = snapshot test for spurious error
bug-1112613.t = snapshot delete all test
bug-1113975.t = snapshot restore test
bug-1157991.t = snapshot activate-on-create test
bug-1162498.t = snapshot activate-on-create with feature.uss enabled
bug-1170548-dont-display-deactivated-snapshots.t =
dont-display-deactivated-snapshots
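
And the skeleton mentioned above: a minimal sketch of a new functional test in
the regression framework's .t format (TEST, EXPECT, $CLI, $V0 and friends come
from the framework's include.rc; the replica-2 layout is just an example):

#!/bin/bash
. $(dirname $0)/../include.rc
cleanup;

TEST glusterd
TEST pidof glusterd

# Functional check: a 2-brick replica volume can be created and started.
TEST $CLI volume create $V0 replica 2 $H0:$B0/${V0}{1,2}
TEST $CLI volume start $V0
EXPECT 'Started' volinfo_field $V0 'Status'

cleanup;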

Thanks,
Kiran.


On Thu, Feb 26, 2015 at 3:25 PM, Kiran Patil ki...@fractalio.com wrote:
 Hi,

 Currently I am aware of Gluster Regression Test suite.

 I would like to know if there is a Test suite which covers the
 Functionality of Glusterfs.

 If not then what are the options do we have to come up with Functional
 test suite.

 The only option we have right now is to use Gluster Regression
 framework to come up with Functional test suite.

 Let me know your thoughts.

 Thanks,
 Kiran.


Re: [Gluster-users] [TSR] Failed tests on glusterfs-3.6.3beta1, ZFS, CentOS 6.6

2015-02-20 Thread Kiran Patil
I reran the above failed tests on ext4, and below are the ones that failed
(a rerun sketch follows the list).

tests/basic/quota-anon-fd-nfs.t
tests/basic/volume-snapshot.t
tests/bugs/bug-1045333.t
tests/bugs/bug-1087198.t
tests/bugs/bug-1113975.t
tests/bugs/bug-1117851.t
tests/bugs/bug-1161886/bug-1161886.t
tests/bugs/bug-1162498.t
tests/bugs/bug-765380.t
tests/bugs/bug-824753.t
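
In case anyone wants to reproduce these, individual tests can be rerun from a
glusterfs source tree roughly like this (run-tests.sh drives the full suite;
prove handles a single .t file):

cd glusterfs
./run-tests.sh                      # full regression run
prove -vf tests/bugs/bug-824753.t   # rerun one failing test, verbose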

Thanks,
Kiran.


On Fri, Feb 20, 2015 at 4:40 PM, Kiran Patil ki...@fractalio.com wrote:
 Please find the below gluster regression test summary report.

 Test Summary Report
 ---
 ./tests/basic/ec/quota.t
 (Wstat: 0 Tests: 22 Failed: 2)
   Failed tests:  16, 20
 ./tests/basic/quota-anon-fd-nfs.t
 (Wstat: 0 Tests: 21 Failed: 1)
   Failed test:  18
 ./tests/basic/quota.t
 (Wstat: 0 Tests: 73 Failed: 4)
   Failed tests:  24, 28, 32, 65
 ./tests/basic/uss.t
 (Wstat: 0 Tests: 158 Failed: 8)
   Failed tests:  37-38, 69-70, 99-100, 127-128
 ./tests/basic/volume-snapshot.t
 (Wstat: 0 Tests: 29 Failed: 2)
   Failed tests:  28-29
 ./tests/bugs/bug-1023974.t
 (Wstat: 0 Tests: 15 Failed: 1)
   Failed test:  12
 ./tests/bugs/bug-1038598.t
 (Wstat: 0 Tests: 28 Failed: 6)
   Failed tests:  17, 21-22, 26-28
 ./tests/bugs/bug-1045333.t
 (Wstat: 0 Tests: 16 Failed: 1)
   Failed test:  15
 ./tests/bugs/bug-1087198.t
 (Wstat: 0 Tests: 26 Failed: 2)
   Failed tests:  18, 23
 ./tests/bugs/bug-1113975.t
 (Wstat: 0 Tests: 13 Failed: 3)
   Failed tests:  11-13
 ./tests/bugs/bug-1117851.t
 (Wstat: 0 Tests: 24 Failed: 1)
   Failed test:  15
 ./tests/bugs/bug-1161886/bug-1161886.t
 (Wstat: 0 Tests: 16 Failed: 4)
   Failed tests:  13-16
 ./tests/bugs/bug-1162498.t
 (Wstat: 0 Tests: 30 Failed: 13)
   Failed tests:  10, 19-30
 ./tests/bugs/bug-765380.t
 (Wstat: 0 Tests: 9 Failed: 1)
   Failed test:  6
 ./tests/bugs/bug-824753.t
 (Wstat: 0 Tests: 16 Failed: 1)
   Failed test:  11
 ./tests/bugs/bug-948729/bug-948729-mode-script.t
 (Wstat: 0 Tests: 23 Failed: 2)
   Failed tests:  19, 23
 ./tests/bugs/bug-948729/bug-948729.t
 (Wstat: 0 Tests: 23 Failed: 2)
   Failed tests:  19, 23
 Files=296, Tests=8411, 8656 wallclock secs ( 3.62 usr  1.97 sys +
 527.17 cusr 683.51 csys = 1216.27 CPU)
 Result: FAIL

 Thanks,
 Kiran.




[Gluster-devel] [TSR] Failed tests on glusterfs-3.6.3beta1, ZFS, CentOS 6.6

2015-02-20 Thread Kiran Patil
Please find below the gluster regression test summary report.

Test Summary Report
---
./tests/basic/ec/quota.t
(Wstat: 0 Tests: 22 Failed: 2)
  Failed tests:  16, 20
./tests/basic/quota-anon-fd-nfs.t
(Wstat: 0 Tests: 21 Failed: 1)
  Failed test:  18
./tests/basic/quota.t
(Wstat: 0 Tests: 73 Failed: 4)
  Failed tests:  24, 28, 32, 65
./tests/basic/uss.t
(Wstat: 0 Tests: 158 Failed: 8)
  Failed tests:  37-38, 69-70, 99-100, 127-128
./tests/basic/volume-snapshot.t
(Wstat: 0 Tests: 29 Failed: 2)
  Failed tests:  28-29
./tests/bugs/bug-1023974.t
(Wstat: 0 Tests: 15 Failed: 1)
  Failed test:  12
./tests/bugs/bug-1038598.t
(Wstat: 0 Tests: 28 Failed: 6)
  Failed tests:  17, 21-22, 26-28
./tests/bugs/bug-1045333.t
(Wstat: 0 Tests: 16 Failed: 1)
  Failed test:  15
./tests/bugs/bug-1087198.t
(Wstat: 0 Tests: 26 Failed: 2)
  Failed tests:  18, 23
./tests/bugs/bug-1113975.t
(Wstat: 0 Tests: 13 Failed: 3)
  Failed tests:  11-13
./tests/bugs/bug-1117851.t
(Wstat: 0 Tests: 24 Failed: 1)
  Failed test:  15
./tests/bugs/bug-1161886/bug-1161886.t
(Wstat: 0 Tests: 16 Failed: 4)
  Failed tests:  13-16
./tests/bugs/bug-1162498.t
(Wstat: 0 Tests: 30 Failed: 13)
  Failed tests:  10, 19-30
./tests/bugs/bug-765380.t
(Wstat: 0 Tests: 9 Failed: 1)
  Failed test:  6
./tests/bugs/bug-824753.t
(Wstat: 0 Tests: 16 Failed: 1)
  Failed test:  11
./tests/bugs/bug-948729/bug-948729-mode-script.t
(Wstat: 0 Tests: 23 Failed: 2)
  Failed tests:  19, 23
./tests/bugs/bug-948729/bug-948729.t
(Wstat: 0 Tests: 23 Failed: 2)
  Failed tests:  19, 23
Files=296, Tests=8411, 8656 wallclock secs ( 3.62 usr  1.97 sys +
527.17 cusr 683.51 csys = 1216.27 CPU)
Result: FAIL

Thanks,
Kiran.




Re: [Gluster-users] GlusterFS with FUSE slow vs ZFS volume

2015-02-19 Thread Kiran Patil
Hi,

We are using fio (https://github.com/axboe/fio) for load/stress testing.

We have not done a performance check on a single node.

I will try to verify it.

Thanks,
Kiran.

On Thu, Feb 5, 2015 at 4:47 PM, Pranith Kumar Karampuri
pkara...@redhat.com wrote:
 +Kiran Patil may know about this.

 Pranith
 On 02/03/2015 12:56 AM, ML mail wrote:

 Hello,

 I am testing GlusterFS for the first time and have installed the latest
 GlusterFS 3.5 stable version on Debian 7 on brand new SuperMicro hardware
 with ZFS instead of hardware RAID. My ZFS pool is a RAIDZ-2 with 6 SATA
 disks of 2 TB each.

 After setting up a first and single test brick on my currently single test
 node I wanted first to see how much slower will GlusterFS be compared to
 writting directly to the ZFS volume. For that purpose I have mounted my
 GlusterFS volume locally on the same server using FUSE.

 For my tests I have used bonnie++ with the command bonnie++ -n16 -b and
 I must say I am quite shocked to see that with this current setup GlusterFS
 slows down the whole file system with a factor of approximately 6 to 8. For
 example:

 ZFS volume

 Sequential output by block (read): 936 MB/sec
 Sequential input by block (write): 1520 MB/sec


 GlusterFS on top of same ZFS volume mounted with FUSE
 Sequential output by block (read): 114 MB/sec
 Sequential input by block (write): 312 MB/sec


 Now I was wondering if such a performance drop on a single GlusterFS node
 is expected? If not is it maybe ZFS which is messing up things?

 bonnie++ took 3 minutes to rune on the ZFS volume and 18 minutes on the
 GlusterFS mount. I have copied the bonnie++ results below just in case in
 CVS format:


 1.96,1.96,ZFS,1,1422907597,31960M,,170,99,936956,94,484417,74,463,99,1520120,98,815.4,41,16,3376,26,+,+++,3109,22,3261,21,+,+++,3305,20,66881us,15214us,84887us,23648us,53641us,93322us,39607us,363us,298ms,136ms,18us,176ms

 1.96,1.96,GFS,1,1422897979,31960M,,16,17,114223,20,92610,20,+,+++,312557,14,444.5,6,16,385,3,5724,5,916,4,357,3,2044,4,750,4,550ms,9715us,23094us,3334us,125ms,90070us,154ms,8609us,17570us,67180us,4116us,7879us

 Maybe there are a few performance tuning tricks that I am not aware of?

 Let me know if I should provide any more information. In advance thanks
 for your comments.

 Best regards
 ML


fio does not exit even if bw is below than ratemin

2015-02-06 Thread Kiran Patil
Hello,

I am using fio to run storage benchmark.

I set ratemin=50m and ran the fio profile below. It recorded a bandwidth of
14 MB/s for all the jobs, yet it did not exit even though the bandwidth was
lower than ratemin.

Why didn't fio exit? I have no idea how fio handles this internally.

I observed another behavior: if I set the file size to 10g, then it exits
for some jobs for not meeting the ratemin bandwidth.

What makes the difference between small and large files?

Please help me to understand this behavior.

Runtime environment:
---
Fio Machine - 16 GB RAM - 4 1G ports - nic bonded

Server Machine -
2 Gluster nodes with 16 GB RAM - each 4 1G ports and 4 drives - nic bonded

Fio is running on a cifs share of gluster distributed vol.

Here is my fio file
--
[global]
directory=/mnt1
ioengine=sync
iodepth=1
numjobs=8
rate=55m,55m
ratemin=50m,50m
size=80m
runtime=600
time_based

[FS_16k_streaming_writes]
rw=write
bs=16k

[FS_64k_streaming_writes]
rw=write
bs=64k

[FS_128k_streaming_writes]
rw=write
bs=128k

[FS_256k_streaming_writes]
rw=write
bs=256k

[FS_512k_streaming_writes]
rw=write
bs=512k

[FS_1m_streaming_writes]
rw=write
bs=1m

The output log files are attached as a zip file (fio_log.tgz).
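
One knob I have not tried yet, in case it is relevant: fio averages rate and
ratemin over the ratecycle window (1000 ms by default), so the averaging period
affects when fio decides the minimum was missed. The extra global option would be:

[global]
ratecycle=1000   # ms over which rate and ratemin are averaged (fio's default)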




analyzing, visualizing, understanding and rating fio data

2015-01-13 Thread Kiran Patil
Hello,

I am going back to a Wed, 8 Aug 2012 post:
http://www.spinics.net/lists/fio/msg01363.html.

The discussion was about integrating
https://github.com/khailey/fio_scripts into fio.

Any updates on it ?

As fio users, we badly need it to plot graphs, including for multi-threaded jobs.

I think the community should come together and pick one solution; that
would also make it easier to maintain.

Currently, we have two types of graph generation scripts in fio.

Thanks,
Kiran.


Re: [Gluster-devel] Open source SPC-1 Workload IO Pattern

2014-12-22 Thread Kiran Patil
Hello,

Any updates on when SPC-1 is going to be available as part of fio?

Thanks,
Kiran.

On Wed, Nov 26, 2014 at 7:23 PM, Michael O'Sullivan
michael.osulli...@auckland.ac.nz wrote:
 Hi Luis,

 We worked with Jens Axboe for a little bit to try and merge things but then 
 just got busy testing distributed file systems as opposed to raw storage.

 We had an email in 2012 from

I encountered a couple of segfaults when modifying the sample configuration 
file.

I've thought to revamp it and make it more fio like, possibly turning SPC 
into a profile so that someone can just run fio --profile=spc

 But the person that emailed did not follow up.

 I think having an fio --profile=spc-1 would be great and I'd be happy to help 
 get this working, but fio-type testing is not my core research area/area of 
 expertise. We used fio+spc-1 to test disks in order to get inputs for optimal 
 infrastructure design research (which is one of my core research areas). 
 That said I did a lot of the original development, so I can probably help 
 people understand what the code is trying to do.

 I hope this helps. Please let me know if you'd like to revamp fio+spc-1 and 
 if you need my help.

 Thanks, Mike

 -Original Message-
 From: Luis Pabón [mailto:lpa...@redhat.com]
 Sent: Friday, 21 November 2014 3:24 a.m.
 To: Michael O'Sullivan; Justin Clift
 Cc: gluster-devel@gluster.org
 Subject: Re: [Gluster-devel] Open source SPC-1 Workload IO Pattern

 Hi Michael,
  I noticed the code on the fio branch (that is where I grabbed the 
 spc1.[hc] files :-) ).  Do you know why that branch has not been merged to 
 master?

 - Luis

 On 11/18/2014 11:56 PM, Michael O'Sullivan wrote:
 Hi Justin & Luis,

 We did a branch of fio that implemented this SPC-1 trace a few years ago. I 
 can dig up the code and paper we wrote if it is useful?

 Cheers, Mike

 On 19/11/2014, at 4:21 pm, Justin Clift jus...@gluster.org wrote:

 Nifty. :)

 (Yeah, catching up on old unread email, as the wifi in this hotel is
 so bad I can barely do anything else.  8-10 second ping times to
 www.gluster.org. :/)

 As a thought, would there be useful analysis/visualisation
 capabilities if you stored the data into a time series database (eg
 InfluxDB) then used Grafana (http://grafana.org) on it?

 + Justin


 On Fri, 07 Nov 2014 12:01:56 +0100
 Luis Pabón lpa...@redhat.com wrote:

 Hi guys,
 I created a simple test program to visualize the I/O pattern of
 NetApp's open source spc-1 workload generator. SPC-1 is an
 enterprise OLTP type workload created by the Storage Performance
 Council (http://www.storageperformance.org/results).  Some of the
 results are published and available here:
 http://www.storageperformance.org/results/benchmark_results_spc1_active .

 NetApp created an open source version of this workload and described
 it in their publication A portable, open-source implementation of
 the SPC-1 workload (
 http://www3.lrgl.uqam.ca/csdl/proceedings/iiswc/2005/9461/00/0152601
 4.pdf
 )

 The code is available on Github: https://github.com/lpabon/spc1 .
 All it does at the moment is capture the pattern, no real IO is
 generated. I will be working on a command line program to enable
 usage on real block storage systems.  I may either extend fio or
 create a tool specifically tailored to the requirements needed to
 run this workload.

 On github, I have an example IO pattern for a simulation running 50
 mil IOs using HRRW_V2. The simulation ran with an ASU1 (Data Store)
 size of 45GB, ASU2 (User Store) size of 45GB, and ASU3 (Log) size of
 10GB.

 - Luis

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel


 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several petabytes,
 and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift
 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://www.gluster.org/mailman/listinfo/gluster-devel


[Feature Request] AUTOMATICALLY DETERMINING LOAD TEST DURATION USING CONFIDENCE INTERVALS

2014-12-14 Thread Kiran Patil
Hi,

CMG India has presented a paper on load test duration using confidence
intervals.

I am a newbie to performance testing and find it very difficult to judge
how long to run load tests to get proper results.

I thought it would be best to integrate this with fio, as none of the load
testing tools present in the market support it.

Please find the below link to download the paper.
https://drive.google.com/file/d/0B0Q2XWsPQTzsNG9lT2RXZHh3OG8/view?usp=sharing
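
The core idea is to keep sampling until the confidence interval around the
running mean is tight enough. A rough sketch of such a stopping rule, over
hypothetical per-second bandwidth samples (one KB/s value per line in
bw_samples.log):

awk '{ n++; sum += $1; sumsq += $1 * $1 }
END {
    mean = sum / n
    sd   = sqrt((sumsq - sum * sum / n) / (n - 1))  # sample std deviation
    half = 1.96 * sd / sqrt(n)                      # 95% CI half-width
    printf("mean=%.1f KB/s, 95%% CI half-width=%.1f (%.2f%% of mean)\n",
           mean, half, 100 * half / mean)
    # non-zero exit: CI still too wide relative to the mean, keep testing
    exit (half / mean > 0.05)
}' bw_samples.log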

Thanks,
Kiran.
--
To unsubscribe from this list: send the line unsubscribe fio in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


[Gluster-devel] Fio (ioengine=gfapi) FS_cached_4k_random_reads fails on gluster v3.6.1

2014-11-30 Thread Kiran Patil
I am running fio (https://github.com/axboe/fio), latest from the master branch,
with the configuration below on gluster v3.6.1 / CentOS 6.6, and it is failing.

I have raised an issue at Bugzilla, id=1169236.

# fio $args --output=4k_caranred_gz.log --section=FS_cached_4k_random_reads
--ioengine=gfapi --volume=vol1 --brick=192.168.1.246 fsmb.fio

fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta
1158050440d:23h:16m:01s]

# fio $args --output=4k_caranredmt_gz.log
--section=FS_multi-threaded_cached_4k_random_reads --ioengine=gfapi
--volume=vol1 --brick=192.168.1.246 fsmb.fio

fio: failed to lseek pre-read file[0KB/0KB/0KB /s] [0/0/0 iops] [eta
1158050440d:23h:12m:37s]
fio: failed to lseek pre-read fileone] [0KB/0KB/0KB /s] [0/0/0 iops] [eta
1158050440d:23h:12m:33s]
fio: failed to lseek pre-read fileone] [4KB/0KB/0KB /s] [1/0/0 iops] [eta
1158050440d:23h:12m:32s]
fio: failed to lseek pre-read file

The fsmb.fio configuration file is below:

[global]

[FS_128k_streaming_writes]
name=seqwrite
rw=write
bs=128k
size=5g
#end_fsync=1
loops=1

[FS_cached_4k_random_reads]
name=randread
rw=randread
pre_read=1
norandommap
bs=4k
size=256m
runtime=30
loops=1

[FS_multi-threaded_cached_4k_random_reads]
name=randread
numjobs=4
rw=randread
pre_read=1
norandommap
bs=4k
size=256m/4
runtime=30
loops=1

Thanks,

Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-users] ZFS and Snapshots

2014-11-26 Thread Kiran Patil
Hi Pranith,

Now we have added zfs support for gluster snapshots internally and all
snapshot testcases are passing.

Thanks,
Kiran.

On Wed, Nov 26, 2014 at 7:32 AM, Pranith Kumar Karampuri 
pkara...@redhat.com wrote:

  +Kiran to check if he knows anything about this.

 Pranith
 On 11/26/2014 02:17 AM, Kiebzak, Jason M. wrote:

  I’m running ZFS. It appears that Gluster Snapshots require LVM. I’ve
 spent the last hour googling this, and it doesn’t seem like the two can be
 mixed – that is, Gluster Snapshots and ZFS.



 Has anyone attempted to write a wrapper for ZFS to mimic LVM, and thus
 fool gluster into thinking that LVM is installed?



 Thanks


 ___
 Gluster-users mailing 
 listGluster-users@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-users



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

[Gluster-devel] setfacl: testfile: Remote I/O error (zfsonlinux, gluster 3.6, CentOS 6.6)

2014-11-24 Thread Kiran Patil
Testcase bug-847622.t is failing with Remote I/O error.

Steps to reproduce:
-
[root@fractal-c92e glusterfs]# glusterd

[root@fractal-c92e glusterfs]# gluster --mode=script --wignore volume
create patchy fractal-c92e.fractal.lan:/d/backends/brick0
volume create: patchy: success: please start the volume to access data

[root@fractal-c92e glusterfs]# gluster --mode=script --wignore volume start
patchy
volume start: patchy: success

[root@fractal-c92e glusterfs]# mount -t nfs -o soft,intr,vers=3,nolock
fractal-c92e.fractal.lan:/patchy /mnt/nfs/0

[root@fractal-c92e glusterfs]# ls /mnt/nfs/  == here mnt is zfs dataset
0  1

[root@fractal-c92e 0]# zfs mount
d   /d
mnt /mnt
d/test1 /d/test1
d/test2 /d/test2
d/test3 /d/test3

[root@fractal-c92e glusterfs]# cd /mnt/nfs/0

[root@fractal-c92e 0]# touch testfile

[root@fractal-c92e 0]# setfacl -m u:14:r testfile
setfacl: testfile: Remote I/O error

[root@fractal-c92e 0]# getfacl testfile
# file: testfile
# owner: root
# group: root
user::rw-
group::r--
other::r--
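
For anyone hitting the same error: this looks like the backing zfs dataset
rejecting the system.posix_acl_access xattr. On zfsonlinux, POSIX ACL
support is off by default; a possible fix, assuming the bricks live under
the d pool shown in the zfs mount output above, would be:

# enable POSIX ACL handling on the dataset holding the bricks
zfs set acltype=posixacl d
# store xattrs in inodes; commonly recommended alongside posixacl
zfs set xattr=sa d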

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] setfacl: testfile: Remote I/O error (zfsonlinux, gluster 3.6, CentOS 6.6)

2014-11-24 Thread Kiran Patil
:server_setvolume]
0-patchy-server: accepted client from
fractal-c92e.fractal.lan-2793-2014/11/25-05:11:33:79268-patchy-client-0-0-0
(version: 3.6.1)
[2014-11-25 05:12:49.700853] E [posix-helpers.c:939:posix_handle_pair]
0-patchy-posix:
/d/backends/brick0/.glusterfs/b4/9b/b49bdf80-6af2-4750-a8ad-fdb56920657a:
key:system.posix_acl_access flags: 0 length:44 error:Operation not supported

Thanks,
Kiran

On Tue, Nov 25, 2014 at 12:10 AM, Vijay Bellur vbel...@redhat.com wrote:

 On 11/24/2014 05:55 PM, Kiran Patil wrote:

 getrlimit(RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=4*1024}) = 0
 lstat(testfile, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
 getxattr(testfile, system.posix_acl_access, 0x7fff9ce10d00, 132) =
 -1 ENODATA (No data available)
 stat(testfile, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
 setxattr(testfile, system.posix_acl_access,
 \x02\x00\x00\x00\x01\x00\x06\x00\xff\xff\xff\xff\x02\x00\
 x04\x00\x0e\x00\x00\x00\x04\x00\x04\x00\xff\xff\xff\xff\
 x10\x00\x04\x00\xff\xff\xff\xff
 \x00\x04\x00\xff\xff\xff\xff, 44, 0) = -1 EREMOTEIO (Remote I/O error)


 Do you happen to know from the logs which translator sends back this
 error? Logs from the brick that contain testfile and the nfs server to
 which the nfs client is connected would be a good place to begin with.

 Thanks,
 Vijay

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: Ticket Data Migration from 7.5 to 8.1

2014-11-19 Thread Kiran Patil
Thank you so much to all for providing valuable input on data migration.

We have tested and verified that the integrator Spoon will be best suited for
our requirement. As per our customer's requirement, they are also looking
forward to retaining the foundation data IDs from the earlier version in 8.1,
so that data access control integrity with tickets will remain the same.

Please provide your valuable input.

Regards
Kiran Patil
On 18-Nov-2014 6:27 pm, Curtis Gallant cgall...@gmail.com wrote:

 **
 Something I haven't seen mentioned but is very important as well is one
 needs to be careful when crossing over multiple major versions.  The data
 models from ITSM 7.5 to 8.1 are not fully equivalent so if you just
 straight try and move all the data from a 7.5 system into an 8.1 system (A
 - A), expect some breakage somewhere, typical examples could be changes
 with multiple approvals that are pending (in 8.0/8.1 there was some under
 the hood changes as well as consolidation of change approval processes).
 This is one of the reasons for a tool like DDM that does the version by
 version conversion in steps (albeit painfully in the setup and execution
 sometimes with workarounds needed but it's getting better and better
 documented with every release it seems).

 Straight shot tools from point A to point B are great for keeping say a QA
 environment in sync with PROD since they will be at the same version (and
 other similar requirements)  but unless you are sure of your data model
 (e.g you are running custom apps), a straight shot movement of the data
 from an older BMCs ITSM suite has some risks in an upgrade scenario that
 need to be fully vetted as breakage can and do happen very subtly sometimes.

 On a different but related topic, that CMT tool Sean mentioned a few
 emails up looks pretty neat, kinda how DDM 'should' be if what it says is
 all true :)

 On Tue, Nov 18, 2014 at 7:23 AM, Jarl Grøneng jarl.gron...@gmail.com
 wrote:

 **
 Hi

 You do not need to freeze anything. The requirement from the initial
 poster was to set up a new server.

 With a new server you can move all your data. When the first load is
 done, you start it over again. The next run will take just a few hours. And
 when you're ready to switch production to your new server, you run
 rrrChive again.

 Using this approach you can have a cut-over in just a few minutes.

 --
 J

 2014-11-18 12:07 GMT+01:00 Sean Harries sean.harr...@gmail.com:

 **

 Hi Kiran, Jarl, Listers,


 While RRRchive has some great improvements in terms of handling the
 deletion of data and configurability, the main issue you're likely to face
 is performance. If you are able to agree the data freeze and data catch up
 management around a 20 day delta process then that is OK. On many of our
 projects, we found that was difficult to agree with the business and stake
 holders so we developed the Customer Move Tool.


 The Remedy API is great at a number of things, but bulk data migration
 is not among them. Using RRRchive, it previously took us over thirty days
 to accomplish a full data migration from a full copy of a Production
 system. After that migration, we then had to perform multiple delta
 migration runs leading up to go-live. The inherent limitation of the Remedy
 API has been recognised by BMC, and for the DDM product, some Forms like
 Audit and Worklogs are now migrated at the database level.


 The CMT Tool has a number of advantages over other tools currently
 available;

  1. Moves data at the database level - we are typically able to move an
 entire ITSM application within a single day, rather than several weeks. The
 final delta migration for the Production cutover is less than an hour.

  2. Automated discovery and analysis - CMT will discover a Remedy
 application including customizations and map the data. Any discrepancies
 like mismatched field lengths, missing enums or missing fields are
 identified and presented in the CMT Workbench web UI. This is a distinct
 advantage over other tools, which require you to mess about with XML files
 and will not automatically identify differences or pick up customisations.
 For a lightly customised system we would typically be ready to move data
 within a couple of days - which believe me compares very favourably to the
 effort expended in previous upgrade projects I've been involved in!

 3. Relationship Aware – while other tools migrate on a simple
 form-by-form basis, CMT builds a data model of your Remedy application
 which it uses to migrate data. This opens up a number of capabilities such
 as being able to migrate individual ITSM companies between Remedy systems,
 consolidating multiple Remedy systems into a single multi-tenancy system,
 performing archiving of data during data migration, etc.

 4. Flexible and Powerful Mapping and Transformation– using the CMT web
 user interface you have full control over the way data is migrated and can
 transform and map data to handle a range of scenarios, including populating
 new fields

Ticket Data Migration from 7.5 to 8.1

2014-11-14 Thread Kiran Patil
Hi All,

We are upgrading a customer from Remedy 7.5 to 8.1.
Here is the background -
1. The customer has 750GB-800GB of transactional data for Incident, Change,
and Problem.
2. We are not doing an in-place or staged in-place upgrade. We will be
implementing a new 8.1 system and migrating data from Remedy 7.5 to
Remedy 8.1.
3. Core requirements:
  1 - The customer wants all historical data to be migrated into
Remedy 8.1, along with work logs (with attachments), related other tickets,
tasks, SLAs, approvals (for Change), and the audit log.
  2 - The customer wants to retain the old ticket numbers in the system and
does not want new ticket IDs to be generated during the migration process.

Our Solution:
1. Using UDM we can import transactional data, disable new ID creation, and
retain the old ticket numbers; however, we cannot control C1 to retain it
from the old system.
The UDM approach has a 64,000-record limitation per batch, so migrating
750GB of data will take months or maybe years.

Has anyone migrated ticket data using this approach? I would like to hear
about the issues/challenges that occur during the activity. As per BMC, they
somehow don't recommend it.

Any suggestion or idea will be welcomed.

Thanks
Kiran Patil







-- 
Regards
Kiran Patil
Mobile: +91 9890377125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: [Gluster-devel] Regression testing report: Gluster v3.6.1 on CentOS 6.6

2014-11-11 Thread Kiran Patil
I have installed gluster v3.6.1 from
http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/

The /tests/bugs/bug-1112559.t testcase passed in all 3 runs, and the other
two tests, quota-anon-fd-nfs.t and /tests/bugs/886998/strict-readdir.t,
failed in all 3 runs.
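
For reference, an individual test can be repeated along these lines (a
sketch using the test framework's prove runner):

for i in 1 2 3; do
    prove -v ./tests/basic/quota-anon-fd-nfs.t || echo "run $i failed"
done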

The on-disk filesystem is ext4.

Thanks for the quick feedback.

On Tue, Nov 11, 2014 at 3:33 PM, Pranith Kumar Karampuri 
pkara...@redhat.com wrote:


 On 11/11/2014 03:13 PM, Kiran Patil wrote:

  Test Summary Report
 --
 ./tests/basic/quota-anon-fd-nfs.t  (Wstat: 0 Tests: 16
 Failed: 1)
   Failed test:  16

 This is a spurious failure at least on master. Could you run it 2-3 times
 to see if it is a consistent failure on cent-os.

  ./tests/bugs/886998/strict-readdir.t   (Wstat: 0 Tests: 30
 Failed: 2)
   Failed tests:  10, 24

 What is the underlying backend filesystem?

  ./tests/bugs/bug-1112559.t (Wstat: 0 Tests: 11
 Failed: 2)
   Failed tests:  9, 11

 CC Joseph fernandez

  Files=277, Tests=7908, 8046 wallclock secs ( 4.54 usr  0.98 sys + 902.74
 cusr 644.05 csys = 1552.31 CPU)
 Result: FAIL



 ___
 Gluster-devel mailing 
 listGluster-devel@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] gluster 3.6.1 rpm for CentOS 6.6

2014-11-10 Thread Kiran Patil
Hi,

Please let me know where can we find the gluster v3.6.1 rpm for CentOS 6.6.

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] gluster 3.6.1 rpm for CentOS 6.6

2014-11-10 Thread Kiran Patil
The name confused me, as I thought epel-6 meant epel-6.0, since there is no
epel-6.6.

Thanks,
Kiran.

On Tue, Nov 11, 2014 at 11:32 AM, Lalatendu Mohanty lmoha...@redhat.com
wrote:

  On 11/11/2014 10:46 AM, Kiran Patil wrote:

 Hi,

  Please let me know where can we find the gluster v3.6.1 rpm for CentOS
 6.6.

  Thanks,
 Kiran.


 ___
 Gluster-devel mailing 
 listGluster-devel@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-devel

  hey Kiran,

 Here it is :
 http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-6/

 Thanks,
 Lala

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [glusterfs-3.6.0beta3-0.11.gitd01b00a] gluster volume status is running even though the Disk is detached

2014-10-31 Thread Kiran Patil
I set the zfs pool failmode to continue, which should fail only writes and
not reads, as explained below:

failmode=wait | continue | panic

    Controls the system behavior in the event of catastrophic pool
    failure. This condition is typically a result of a loss of
    connectivity to the underlying storage device(s) or a failure of
    all devices within the pool. The behavior of such an event is
    determined as follows:

    wait        Blocks all I/O access until the device connectivity
                is recovered and the errors are cleared. This is the
                default behavior.

    continue    Returns EIO to any new write I/O requests but allows
                reads to any of the remaining healthy devices. Any
                write requests that have yet to be committed to disk
                would be blocked.

    panic       Prints out a message to the console and generates a
                system crash dump.
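
The switch itself is a one-liner, using the pool name from this thread:

zpool set failmode=continue zp2
# verify the property took effect
zpool get failmode zp2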


Now, I rebuilt glusterfs master and tried to see whether a failed drive
results in a failed brick, which in turn should kill the brick process; but
the brick is not going offline.

# gluster volume status
Status of volume: repvol
Gluster process Port Online Pid
--
Brick 192.168.1.246:/zp1/brick1 49152 Y 2400
Brick 192.168.1.246:/zp2/brick2 49153 Y 2407
NFS Server on localhost 2049 Y 30488
Self-heal Daemon on localhost N/A Y 30495

Task Status of Volume repvol
--
There are no active volume tasks


The /var/log/gluster/mnt.log output:

[2014-10-31 09:18:15.934700] W [rpc-clnt-ping.c:154:rpc_clnt_ping_cbk]
0-repvol-client-1: socket disconnected
[2014-10-31 09:18:15.934725] I [client.c:2215:client_rpc_notify]
0-repvol-client-1: disconnected from repvol-client-1. Client process will
keep trying to connect to glusterd until brick's port is available
[2014-10-31 09:18:15.935238] I [rpc-clnt.c:1765:rpc_clnt_reconfig]
0-repvol-client-1: changing port to 49153 (from 0)

Now if I copy a file to /mnt it copied without any hang and brick still
shows online.

Thanks,
Kiran.

On Tue, Oct 28, 2014 at 3:44 PM, Niels de Vos nde...@redhat.com wrote:

 On Tue, Oct 28, 2014 at 02:08:32PM +0530, Kiran Patil wrote:
  The content of file zp2-brick2.log is at http://ur1.ca/iku0l (
  http://fpaste.org/145714/44849041/ )
 
  I can't open the file /zp2/brick2/.glusterfs/health_check since it hangs
  due to no disk present.
 
  Let me know the filename pattern, so that I can find it.

 Hmm, if there is a hang while reading from the disk, it will not get
 detected in the current solution. We implemented failure detection on
 top of the detection that is done by the filesystem. Suspending a
 filesystem with fsfreeze or similar should probably not be seen as a
 failure.

 In your case, it seems that the filesystem suspends itself when the disk
 went away. I have no idea if it is possible to configure ZFS to not
 suspend, but return an error to the reading/writing application. Please
 check with such an option.

 If you find such an option, please update the wiki page and recommend
 enabling it:
 - http://gluster.org/community/documentation/index.php/GlusterOnZFS


 Thanks,
 Niels


 
  On Tue, Oct 28, 2014 at 1:42 PM, Niels de Vos nde...@redhat.com wrote:
 
   On Tue, Oct 28, 2014 at 01:10:56PM +0530, Kiran Patil wrote:
I applied the patches, compiled and installed the gluster.
   
# glusterfs --version
glusterfs 3.7dev built on Oct 28 2014 12:03:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. http://www.redhat.com/
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
   
# git log
commit 990ce16151c3af17e4cdaa94608b737940b60e4d
Author: Lalatendu Mohanty lmoha...@redhat.com
Date:   Tue Jul 1 07:52:27 2014 -0400
   
Posix: Brick failure detection fix for ext4 filesystem
...
...
   
I see below messages
  
   Many thanks Kiran!
  
   Do you have the messages from the brick that uses the zp2 mountpoint?
  
   There also should be a file with a timestamp when the last check was
   done successfully. If the brick is still running, this timestamp should
   get updated every storage.health-check-interval seconds:
   /zp2/brick2/.glusterfs/health_check
  
   Niels
  
   
File /var/log/glusterfs/etc-glusterfs-glusterd.vol.log :
   
The message I [MSGID: 106005]
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management:
 Brick
192.168.1.246:/zp2/brick2 has disconnected from glusterd. repeated
 39
times between [2014-10-28 05:58:09.209419] and [2014-10-28
   06:00:06.226330]
[2014-10-28 06:00

Re: [Gluster-devel] glupy compilation issue and fix

2014-10-31 Thread Kiran Patil
Hi Niels,

I have raised an issue https://bugzilla.redhat.com/show_bug.cgi?id=1159248
.

I am not familiar with sending patch to gerrit, so please send the patch to
Gerrit.

Thanks,
Kiran.

On Fri, Oct 31, 2014 at 2:22 PM, Niels de Vos nde...@redhat.com wrote:

 On Fri, Oct 31, 2014 at 10:34:53AM +0530, Kiran Patil wrote:
  This patch fixed the issue.

 Thanks for testing. Could you file a bug for this issue? And, feel free
 to send the patch to Gerrit too. I can do that if you like, just let me
 know.

 Niels


 
  Thanks,
  Kiran.
 
  On Fri, Oct 31, 2014 at 2:58 AM, Niels de Vos nde...@redhat.com wrote:
 
   On Thu, Oct 30, 2014 at 05:15:20PM +, Justin Clift wrote:
On Wed, 29 Oct 2014 15:11:24 +0530
Kiran Patil ki...@fractalio.com wrote:
snip
 This issue is fixed by changing lib to lib64 at line 219
 (PYTHONDEV_LDFLAGS) in
 glusterfs/xlators/features/glupy/src/Makefile.
   
Cool, thanks.
   
Kind of wondering if there's an established to way to automatically
detect the right value there (lib/lib64).
   
Any ideas?
  
   I'm not sure, but maybe the below patch would do? You'd need to apply
   the change and re-run ./autogen.sh.
  
   Niels
  
  
    diff --git a/configure.ac b/configure.ac
    index 3757c33..3dd741c 100644
    --- a/configure.ac
    +++ b/configure.ac
    @@ -1007,7 +1007,7 @@ case $host_os in
      linux*)
        CFLAGS=`${PYTHON}-config --cflags`
        CPPFLAGS=$CFLAGS
    -   LDFLAGS=-L`${PYTHON}-config --prefix`/lib `${PYTHON}-config --ldflags`
    +   LDFLAGS=-L`${PYTHON}-config --prefix`/$libdir `${PYTHON}-config --ldflags`
        ;;
      darwin*)
        CFLAGS=`${PYTHON}-config --cflags`
  
  

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [glusterfs-3.6.0beta3-0.11.gitd01b00a] gluster volume status is running even though the Disk is detached

2014-10-31 Thread Kiran Patil
I am not seeing the below message in any log files under the
/var/log/glusterfs directory and its subdirectories.

health-check failed, going down


On Fri, Oct 31, 2014 at 3:16 PM, Kiran Patil ki...@fractalio.com wrote:

 I set zfs pool failmode to continue, which should disable only write and
 not read as explained below

 failmode=wait | continue | panic

     Controls the system behavior in the event of catastrophic pool
     failure. This condition is typically a result of a loss of
     connectivity to the underlying storage device(s) or a failure of
     all devices within the pool. The behavior of such an event is
     determined as follows:

     wait        Blocks all I/O access until the device connectivity
                 is recovered and the errors are cleared. This is the
                 default behavior.

     continue    Returns EIO to any new write I/O requests but allows
                 reads to any of the remaining healthy devices. Any
                 write requests that have yet to be committed to disk
                 would be blocked.

     panic       Prints out a message to the console and generates a
                 system crash dump.


 Now, I rebuilt the glusterfs master and tried to see if failed driver
 results in failed brick and in turn kill brick process and the brick is not
 going offline.

 # gluster volume status
 Status of volume: repvol
 Gluster process Port Online Pid

 --
 Brick 192.168.1.246:/zp1/brick1 49152 Y 2400
 Brick 192.168.1.246:/zp2/brick2 49153 Y 2407
 NFS Server on localhost 2049 Y 30488
 Self-heal Daemon on localhost N/A Y 30495

 Task Status of Volume repvol

 --
 There are no active volume tasks


 The /var/log/gluster/mnt.log output:

 [2014-10-31 09:18:15.934700] W [rpc-clnt-ping.c:154:rpc_clnt_ping_cbk]
 0-repvol-client-1: socket disconnected
 [2014-10-31 09:18:15.934725] I [client.c:2215:client_rpc_notify]
 0-repvol-client-1: disconnected from repvol-client-1. Client process will
 keep trying to connect to glusterd until brick's port is available
 [2014-10-31 09:18:15.935238] I [rpc-clnt.c:1765:rpc_clnt_reconfig]
 0-repvol-client-1: changing port to 49153 (from 0)

 Now if I copy a file to /mnt it copied without any hang and brick still
 shows online.

 Thanks,
 Kiran.

 On Tue, Oct 28, 2014 at 3:44 PM, Niels de Vos nde...@redhat.com wrote:

 On Tue, Oct 28, 2014 at 02:08:32PM +0530, Kiran Patil wrote:
  The content of file zp2-brick2.log is at http://ur1.ca/iku0l (
  http://fpaste.org/145714/44849041/ )
 
  I can't open the file /zp2/brick2/.glusterfs/health_check since it hangs
  due to no disk present.
 
  Let me know the filename pattern, so that I can find it.

 Hmm, if there is a hang while reading from the disk, it will not get
 detected in the current solution. We implemented failure detection on
 top of the detection that is done by the filesystem. Suspending a
 filesystem with fsfreeze or similar should probably not be seen as a
 failure.

 In your case, it seems that the filesystem suspends itself when the disk
 went away. I have no idea if it is possible to configure ZFS to not
 suspend, but return an error to the reading/writing application. Please
 check with such an option.

 If you find such an option, please update the wiki page and recommend
 enabling it:
 - http://gluster.org/community/documentation/index.php/GlusterOnZFS


 Thanks,
 Niels


 
  On Tue, Oct 28, 2014 at 1:42 PM, Niels de Vos nde...@redhat.com
 wrote:
 
   On Tue, Oct 28, 2014 at 01:10:56PM +0530, Kiran Patil wrote:
I applied the patches, compiled and installed the gluster.
   
# glusterfs --version
glusterfs 3.7dev built on Oct 28 2014 12:03:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. http://www.redhat.com/
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
   
# git log
commit 990ce16151c3af17e4cdaa94608b737940b60e4d
Author: Lalatendu Mohanty lmoha...@redhat.com
Date:   Tue Jul 1 07:52:27 2014 -0400
   
Posix: Brick failure detection fix for ext4 filesystem
...
...
   
I see below messages
  
   Many thanks Kiran!
  
   Do you have the messages from the brick that uses the zp2 mountpoint?
  
   There also should be a file with a timestamp when the last check was
   done successfully. If the brick is still running, this timestamp
 should
   get updated every storage.health-check-interval seconds:
   /zp2/brick2/.glusterfs/health_check
  
   Niels
  
   
File /var/log/glusterfs/etc-glusterfs-glusterd.vol.log :
   
The message I [MSGID: 106005

Re: [Gluster-devel] glupy compilation issue and fix

2014-10-30 Thread Kiran Patil
This patch fixed the issue.
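
For reference, Python itself can report where its libraries live, which
avoids guessing lib vs lib64; a possible cross-check (assuming the same
python that python-config wraps):

python -c 'from distutils import sysconfig; print(sysconfig.get_config_var("LIBDIR"))'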

Thanks,
Kiran.

On Fri, Oct 31, 2014 at 2:58 AM, Niels de Vos nde...@redhat.com wrote:

 On Thu, Oct 30, 2014 at 05:15:20PM +, Justin Clift wrote:
  On Wed, 29 Oct 2014 15:11:24 +0530
  Kiran Patil ki...@fractalio.com wrote:
  snip
   This issue is fixed by changing lib to lib64 at line 219
   (PYTHONDEV_LDFLAGS) in glusterfs/xlators/features/glupy/src/Makefile.
 
  Cool, thanks.
 
  Kind of wondering if there's an established to way to automatically
  detect the right value there (lib/lib64).
 
  Any ideas?

 I'm not sure, but maybe the below patch would do? You'd need to apply
 the change and re-run ./autogen.sh.

 Niels


 diff --git a/configure.ac b/configure.ac
 index 3757c33..3dd741c 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -1007,7 +1007,7 @@ case $host_os in
   linux*)
     CFLAGS=`${PYTHON}-config --cflags`
     CPPFLAGS=$CFLAGS
 -   LDFLAGS=-L`${PYTHON}-config --prefix`/lib `${PYTHON}-config --ldflags`
 +   LDFLAGS=-L`${PYTHON}-config --prefix`/$libdir `${PYTHON}-config --ldflags`
     ;;
   darwin*)
     CFLAGS=`${PYTHON}-config --cflags`


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How to get/check/see volume options ?

2014-10-29 Thread Kiran Patil
Is that document up to date? It does not contain the option
storage.health-check-interval. And why is there no volume get command?
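
Until a volume get exists, the closest workaround I can sketch is (the
volume name below is illustrative):

# option names, defaults and descriptions, from the CLI's built-in help
gluster volume set help | grep -A3 health-check
# options explicitly changed on a volume show up under Options Reconfigured
gluster volume info repvol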

On Wed, Oct 29, 2014 at 4:17 PM, Ravishankar N ravishan...@redhat.com
wrote:

  On 10/29/2014 04:11 PM, Kiran Patil wrote:

 Hi,

  The
 https://github.com/gluster/glusterfs/blob/master/doc/features/brick-failure-detection.md
 doc says storage.health-check-interval is set by default.

  I could see only gluster volume set command.

  What is the command to get the volume options ?


 #gluster volume set help
 Also,
 http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented


  Thanks,
 Kiran.


 ___
 Gluster-devel mailing 
 listGluster-devel@gluster.orghttp://supercolony.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] How to get/check/see volume options ?

2014-10-29 Thread Kiran Patil
Hello Atin,

When will you push to release-3.6 branch ?

Thanks,
Kiran.

On Wed, Oct 29, 2014 at 4:31 PM, Atin Mukherjee amukh...@redhat.com wrote:

 Kiran,

 Master branch has the volume get feature. I am not sure which version of
 gluster you are using.

 git show c080403393987f807b9ca81be140618fa5e994f1

 Regards,
 Atin

 On 10/29/2014 04:27 PM, Kiran Patil wrote:
  Is that document up to date since it does not contain the option
  storage.health-check-interval ? Why there is no Volume get command ?
 
  On Wed, Oct 29, 2014 at 4:17 PM, Ravishankar N ravishan...@redhat.com
  mailto:ravishan...@redhat.com wrote:
 
  On 10/29/2014 04:11 PM, Kiran Patil wrote:
  Hi,
 
  The
 https://github.com/gluster/glusterfs/blob/master/doc/features/brick-failure-detection.md
  doc says storage.health-check-interval is set by default.
 
  I could see only gluster volume set command.
 
  What is the command to get the volume options ?
 
  #gluster volume set help
  Also,
 
 http://www.gluster.org/community/documentation/index.php/Documenting_the_undocumented
 
  Thanks,
  Kiran.
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org mailto:Gluster-devel@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-devel
 
 
 
 
  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-devel
 

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [glusterfs-3.6.0beta3-0.11.gitd01b00a] gluster volume status is running even though the Disk is detached

2014-10-28 Thread Kiran Patil
I changed "git fetch git://review.gluster.org/glusterfs" to "git fetch
http://review.gluster.org/glusterfs", and now it works.

Thanks,
Kiran.

On Tue, Oct 28, 2014 at 11:13 AM, Kiran Patil ki...@fractalio.com wrote:

 Hi Niels,

 I am getting fatal: Couldn't find remote ref refs/changes/13/8213/9
 error.

 Steps to reproduce the issue.

 1) # git clone git://review.gluster.org/glusterfs
 Initialized empty Git repository in /root/gluster-3.6/glusterfs/.git/
 remote: Counting objects: 84921, done.
 remote: Compressing objects: 100% (48307/48307), done.
 remote: Total 84921 (delta 57264), reused 63233 (delta 36254)
 Receiving objects: 100% (84921/84921), 23.23 MiB | 192 KiB/s, done.
 Resolving deltas: 100% (57264/57264), done.

 2) # cd glusterfs
 # git branch
 * master

  3) # git fetch git://review.gluster.org/glusterfs refs/changes/13/8213/9 &&
   git checkout FETCH_HEAD
 fatal: Couldn't find remote ref refs/changes/13/8213/9

 Note: I also tried the above steps on git repo
 https://github.com/gluster/glusterfs and the result is same as above.

 Please let me know if I miss any steps.

 Thanks,
 Kiran.

 On Mon, Oct 27, 2014 at 5:53 PM, Niels de Vos nde...@redhat.com wrote:

 On Mon, Oct 27, 2014 at 05:19:13PM +0530, Kiran Patil wrote:
  Hi,
 
  I created replicated vol with two bricks on the same node and copied
 some
  data to it.
 
  Now removed the disk which has hosted one of the brick of the volume.
 
  Storage.health-check-interval is set to 30 seconds.
 
  I could see the disk is unavailable using zpool command of zfs on linux
 but
  the gluster volume status still displays the brick process running which
  should have been shutdown by this time.
 
  Is this a bug in 3.6 since it is mentioned as feature 
 
 https://github.com/gluster/glusterfs/blob/release-3.6/doc/features/brick-failure-detection.md
 
   or am I doing any mistakes here?

 The initial detection of brick failures did not work for all
 filesystems. It may not work for ZFS too. A fix has been posted, but it
 has not been merged into the master branch yet. When the change has been
 merged, it can get backported to 3.6 and 3.5.

 You may want to test with the patch applied, and add your +1 Verified
 to the change in case it makes it functional for you:
 - http://review.gluster.org/8213

 Cheers,
 Niels

 
  [root@fractal-c92e gluster-3.6]# gluster volume status
  Status of volume: repvol
  Gluster process Port Online Pid
 
 --
  Brick 192.168.1.246:/zp1/brick1 49154 Y 17671
  Brick 192.168.1.246:/zp2/brick2 49155 Y 17682
  NFS Server on localhost 2049 Y 17696
  Self-heal Daemon on localhost N/A Y 17701
 
  Task Status of Volume repvol
 
 --
  There are no active volume tasks
 
 
  [root@fractal-c92e gluster-3.6]# gluster volume info
 
  Volume Name: repvol
  Type: Replicate
  Volume ID: d4f992b1-1393-43b8-9fda-2e2b6e3b5039
  Status: Started
  Number of Bricks: 1 x 2 = 2
  Transport-type: tcp
  Bricks:
  Brick1: 192.168.1.246:/zp1/brick1
  Brick2: 192.168.1.246:/zp2/brick2
  Options Reconfigured:
  storage.health-check-interval: 30
 
  [root@fractal-c92e gluster-3.6]# zpool status zp2
pool: zp2
   state: UNAVAIL
  status: One or more devices are faulted in response to IO failures.
  action: Make sure the affected devices are connected, then run 'zpool
  clear'.
 see: http://zfsonlinux.org/msg/ZFS-8000-HC
scan: none requested
  config:
 
  NAMESTATE READ WRITE CKSUM
  zp2 UNAVAIL  0 0 0  insufficient replicas
sdb   UNAVAIL  0 0 0
 
  errors: 2 data errors, use '-v' for a list
 
 
  Thanks,
  Kiran.

  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [glusterfs-3.6.0beta3-0.11.gitd01b00a] gluster volume status is running even though the Disk is detached

2014-10-28 Thread Kiran Patil
I applied the patches, compiled, and installed gluster.

# glusterfs --version
glusterfs 3.7dev built on Oct 28 2014 12:03:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. http://www.redhat.com/
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

# git log
commit 990ce16151c3af17e4cdaa94608b737940b60e4d
Author: Lalatendu Mohanty lmoha...@redhat.com
Date:   Tue Jul 1 07:52:27 2014 -0400

Posix: Brick failure detection fix for ext4 filesystem
...
...

I see below messages

File /var/log/glusterfs/etc-glusterfs-glusterd.vol.log :

The message I [MSGID: 106005]
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: Brick
192.168.1.246:/zp2/brick2 has disconnected from glusterd. repeated 39
times between [2014-10-28 05:58:09.209419] and [2014-10-28 06:00:06.226330]
[2014-10-28 06:00:09.226507] W [socket.c:545:__socket_rwv] 0-management:
readv on /var/run/6154ed2845b7f728a3acdce9d69e08ee.socket failed (Invalid
argument)
[2014-10-28 06:00:09.226712] I [MSGID: 106005]
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: Brick
192.168.1.246:/zp2/brick2 has disconnected from glusterd.
[2014-10-28 06:00:12.226881] W [socket.c:545:__socket_rwv] 0-management:
readv on /var/run/6154ed2845b7f728a3acdce9d69e08ee.socket failed (Invalid
argument)
[2014-10-28 06:00:15.227249] W [socket.c:545:__socket_rwv] 0-management:
readv on /var/run/6154ed2845b7f728a3acdce9d69e08ee.socket failed (Invalid
argument)
[2014-10-28 06:00:18.227616] W [socket.c:545:__socket_rwv] 0-management:
readv on /var/run/6154ed2845b7f728a3acdce9d69e08ee.socket failed (Invalid
argument)
[2014-10-28 06:00:21.227976] W [socket.c:545:__socket_rwv] 0-management:
readv on

.
.

[2014-10-28 06:19:15.142867] I
[glusterd-handler.c:1280:__glusterd_handle_cli_get_volume] 0-glusterd:
Received get vol req
The message I [MSGID: 106005]
[glusterd-handler.c:4142:__glusterd_brick_rpc_notify] 0-management: Brick
192.168.1.246:/zp2/brick2 has disconnected from glusterd. repeated 12
times between [2014-10-28 06:18:09.368752] and [2014-10-28 06:18:45.373063]
[2014-10-28 06:23:38.207649] W [glusterfsd.c:1194:cleanup_and_exit] (--
0-: received signum (15), shutting down


dmesg output:

SPLError: 7869:0:(spl-err.c:67:vcmn_err()) WARNING: Pool 'zp2' has
encountered an uncorrectable I/O failure and has been suspended.

SPLError: 7868:0:(spl-err.c:67:vcmn_err()) WARNING: Pool 'zp2' has
encountered an uncorrectable I/O failure and has been suspended.

SPLError: 7869:0:(spl-err.c:67:vcmn_err()) WARNING: Pool 'zp2' has
encountered an uncorrectable I/O failure and has been suspended.

The brick is still online.

# gluster volume status
Status of volume: repvol
Gluster process Port Online Pid
--
Brick 192.168.1.246:/zp1/brick1 49152 Y 4067
Brick 192.168.1.246:/zp2/brick2 49153 Y 4078
NFS Server on localhost 2049 Y 4092
Self-heal Daemon on localhost N/A Y 4097

Task Status of Volume repvol
--
There are no active volume tasks

# gluster volume info

Volume Name: repvol
Type: Replicate
Volume ID: ba1e7c6d-1e1c-45cd-8132-5f4fa4d2d22b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.1.246:/zp1/brick1
Brick2: 192.168.1.246:/zp2/brick2
Options Reconfigured:
storage.health-check-interval: 30
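
As an additional check, the posix health-checker leaves a timestamp file
behind; a sketch to confirm it is still ticking, using the path mentioned
elsewhere in this thread:

stat -c '%y' /zp2/brick2/.glusterfs/health_check
# wait longer than storage.health-check-interval, then compare the mtime
sleep 35
stat -c '%y' /zp2/brick2/.glusterfs/health_check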

Let me know if you need further information.

Thanks,
Kiran.

On Tue, Oct 28, 2014 at 11:44 AM, Kiran Patil ki...@fractalio.com wrote:

 I changed  git fetch git://review.gluster.org/glusterfs  to git fetch
 http://review.gluster.org/glusterfs  and now it works.

 Thanks,
 Kiran.

 On Tue, Oct 28, 2014 at 11:13 AM, Kiran Patil ki...@fractalio.com wrote:

 Hi Niels,

 I am getting fatal: Couldn't find remote ref refs/changes/13/8213/9
 error.

 Steps to reproduce the issue.

 1) # git clone git://review.gluster.org/glusterfs
 Initialized empty Git repository in /root/gluster-3.6/glusterfs/.git/
 remote: Counting objects: 84921, done.
 remote: Compressing objects: 100% (48307/48307), done.
 remote: Total 84921 (delta 57264), reused 63233 (delta 36254)
 Receiving objects: 100% (84921/84921), 23.23 MiB | 192 KiB/s, done.
 Resolving deltas: 100% (57264/57264), done.

 2) # cd glusterfs
 # git branch
 * master

 3) # git fetch git://review.gluster.org/glusterfs refs/changes/13/8213/9
  git checkout FETCH_HEAD
 fatal: Couldn't find remote ref refs/changes/13/8213/9

 Note: I also tried the above steps on git repo
 https://github.com/gluster/glusterfs and the result is same as above.

 Please let me know if I miss any steps.

 Thanks,
 Kiran.

 On Mon, Oct 27, 2014 at 5:53 PM, Niels de Vos

[Gluster-devel] Regression tests failure report on Gluster 3.6.0beta3-0.11 nightly build

2014-10-27 Thread Kiran Patil
Hi,

Installed and ran regression tests on gluster-3.6 from the nightly build
http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.6/epel-6-x86_64/glusterfs-3.6.0beta3-0.11.gitd01b00a.autobuild/.

Operating system: CentOS 6.5

Test Summary Report
---
./tests/basic/quota-anon-fd-nfs.t  (Wstat: 0 Tests: 16
Failed: 1)   == This testcase occasionally passes
  Failed test:  16
./tests/bugs/886998/strict-readdir.t   (Wstat: 0 Tests: 30
Failed: 2)
  Failed tests:  10, 24
./tests/features/glupy.t   (Wstat: 0 Tests: 6
Failed: 2)  == This testcase fails due to the inclusion of vol-glupy in the
testcase
  Failed tests:  2, 6
Files=277, Tests=7970, 6888 wallclock secs ( 4.52 usr  0.72 sys + 757.53
cusr 599.08 csys = 1361.85 CPU)
Result: FAIL

Note: the glupy.t testcase passed in an earlier nightly build,
i.e. glusterfs-3.6.0beta3-0.8.git40a3784.autobuild.

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] [glusterfs-3.6.0beta3-0.11.gitd01b00a] gluster volume status is running even though the Disk is detached

2014-10-27 Thread Kiran Patil
Hi,

I created a replicated volume with two bricks on the same node and copied
some data to it.

Then I removed the disk hosting one of the bricks of the volume.

Storage.health-check-interval is set to 30 seconds.

I could see the disk is unavailable using the zpool command of zfs on linux,
but gluster volume status still displays the brick process as running, which
should have been shut down by this time.

Is this a bug in 3.6, since it is mentioned as a feature
(https://github.com/gluster/glusterfs/blob/release-3.6/doc/features/brick-failure-detection.md),
or am I making a mistake here?

[root@fractal-c92e gluster-3.6]# gluster volume status
Status of volume: repvol
Gluster process Port Online Pid
--
Brick 192.168.1.246:/zp1/brick1 49154 Y 17671
Brick 192.168.1.246:/zp2/brick2 49155 Y 17682
NFS Server on localhost 2049 Y 17696
Self-heal Daemon on localhost N/A Y 17701

Task Status of Volume repvol
--
There are no active volume tasks


[root@fractal-c92e gluster-3.6]# gluster volume info

Volume Name: repvol
Type: Replicate
Volume ID: d4f992b1-1393-43b8-9fda-2e2b6e3b5039
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.1.246:/zp1/brick1
Brick2: 192.168.1.246:/zp2/brick2
Options Reconfigured:
storage.health-check-interval: 30

[root@fractal-c92e gluster-3.6]# zpool status zp2
  pool: zp2
 state: UNAVAIL
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool
clear'.
   see: http://zfsonlinux.org/msg/ZFS-8000-HC
  scan: none requested
config:

NAMESTATE READ WRITE CKSUM
zp2 UNAVAIL  0 0 0  insufficient replicas
  sdb   UNAVAIL  0 0 0

errors: 2 data errors, use '-v' for a list


Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [glusterfs-3.6.0beta3-0.11.gitd01b00a] gluster volume status is running even though the Disk is detached

2014-10-27 Thread Kiran Patil
Hi Niels,

I am getting a "fatal: Couldn't find remote ref refs/changes/13/8213/9" error.

Steps to reproduce the issue.

1) # git clone git://review.gluster.org/glusterfs
Initialized empty Git repository in /root/gluster-3.6/glusterfs/.git/
remote: Counting objects: 84921, done.
remote: Compressing objects: 100% (48307/48307), done.
remote: Total 84921 (delta 57264), reused 63233 (delta 36254)
Receiving objects: 100% (84921/84921), 23.23 MiB | 192 KiB/s, done.
Resolving deltas: 100% (57264/57264), done.

2) # cd glusterfs
# git branch
* master

3) # git fetch git://review.gluster.org/glusterfs refs/changes/13/8213/9 &&
git checkout FETCH_HEAD
fatal: Couldn't find remote ref refs/changes/13/8213/9

Note: I also tried the above steps on the git repo
https://github.com/gluster/glusterfs and the result is the same as above.

Please let me know if I miss any steps.

Thanks,
Kiran.

On Mon, Oct 27, 2014 at 5:53 PM, Niels de Vos nde...@redhat.com wrote:

 On Mon, Oct 27, 2014 at 05:19:13PM +0530, Kiran Patil wrote:
  Hi,
 
  I created replicated vol with two bricks on the same node and copied some
  data to it.
 
  Now removed the disk which has hosted one of the brick of the volume.
 
  Storage.health-check-interval is set to 30 seconds.
 
  I could see the disk is unavailable using zpool command of zfs on linux
 but
  the gluster volume status still displays the brick process running which
  should have been shutdown by this time.
 
  Is this a bug in 3.6 since it is mentioned as feature 
 
 https://github.com/gluster/glusterfs/blob/release-3.6/doc/features/brick-failure-detection.md
 
   or am I doing any mistakes here?

 The initial detection of brick failures did not work for all
 filesystems. It may not work for ZFS too. A fix has been posted, but it
 has not been merged into the master branch yet. When the change has been
 merged, it can get backported to 3.6 and 3.5.

 You may want to test with the patch applied, and add your +1 Verified
 to the change in case it makes it functional for you:
 - http://review.gluster.org/8213

 Cheers,
 Niels

 
  [root@fractal-c92e gluster-3.6]# gluster volume status
  Status of volume: repvol
  Gluster process Port Online Pid
 
 --
  Brick 192.168.1.246:/zp1/brick1 49154 Y 17671
  Brick 192.168.1.246:/zp2/brick2 49155 Y 17682
  NFS Server on localhost 2049 Y 17696
  Self-heal Daemon on localhost N/A Y 17701
 
  Task Status of Volume repvol
 
 --
  There are no active volume tasks
 
 
  [root@fractal-c92e gluster-3.6]# gluster volume info
 
  Volume Name: repvol
  Type: Replicate
  Volume ID: d4f992b1-1393-43b8-9fda-2e2b6e3b5039
  Status: Started
  Number of Bricks: 1 x 2 = 2
  Transport-type: tcp
  Bricks:
  Brick1: 192.168.1.246:/zp1/brick1
  Brick2: 192.168.1.246:/zp2/brick2
  Options Reconfigured:
  storage.health-check-interval: 30
 
  [root@fractal-c92e gluster-3.6]# zpool status zp2
pool: zp2
   state: UNAVAIL
  status: One or more devices are faulted in response to IO failures.
  action: Make sure the affected devices are connected, then run 'zpool
  clear'.
 see: http://zfsonlinux.org/msg/ZFS-8000-HC
scan: none requested
  config:
 
  NAMESTATE READ WRITE CKSUM
  zp2 UNAVAIL  0 0 0  insufficient replicas
sdb   UNAVAIL  0 0 0
 
  errors: 2 data errors, use '-v' for a list
 
 
  Thanks,
  Kiran.

  ___
  Gluster-devel mailing list
  Gluster-devel@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-devel


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Ubuntu 14.04: Gluster Test Framework testcases failure

2014-10-06 Thread Kiran Patil
Hello,

http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
page has been updated with Ubuntu 14.04 steps and now you can run the test
suite on Ubuntu.

I ran test suite and below are results.

Gluster version: v3.4.5

OS: Ubuntu 14.04 LTS

Test Summary Report
---
./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39 Failed:
2)  == CentOS 7 and Ubuntu 14.04
  Failed tests:  28, 31
./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21 Failed:
1)  == CentOS 7 and Ubuntu 14.04
  Failed test:  15
./tests/bugs/bug-887145.t   (Wstat: 0 Tests: 31 Failed:
5)  == Only on Ubuntu 14.04
  Failed tests:  20-23, 25
./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10 Failed:
2)  == CentOS 7 and Ubuntu 14.04
  Failed tests:  8-9
Files=124, Tests=2031, 2648 wallclock secs ( 1.62 usr  0.34 sys + 204.62
cusr 217.20 csys = 423.78 CPU)
Result: FAIL

Testcase tests/bugs/bug-905864.t had an issue with gcc compilation, and the
change below works on Ubuntu (-lpthread is moved to the end):

gcc -g3  $(dirname $0)/bug-905864.c -o $(dirname $0)/bug-905864 -lpthread
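
A minimal illustration of the link-order rule behind this (the linker
resolves symbols strictly left to right, so libraries must follow the
objects that use them):

# fails: no pthread symbols are unresolved yet when the library is scanned
gcc -g3 -lpthread bug-905864.c -o bug-905864
# works: the library is scanned after the object that references it
gcc -g3 bug-905864.c -o bug-905864 -lpthread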

Please find the link http://ur1.ca/iawdm (http://fpaste.org/139530/59562614/)
where the trace of each failed testcase is available.

Let me know the possible fixes to the test cases.

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-users] Ubuntu 14.04: Gluster Test Framework testcases failure

2014-10-06 Thread Kiran Patil
Hello,

http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
page has been updated with Ubuntu 14.04 steps and now you can run the test
suite on Ubuntu.

I ran test suite and below are results.

Gluster version: v3.4.5

OS: Ubuntu 14.04 LTS

Test Summary Report
---
./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39 Failed:
2)  == CentOS 7 and Ubuntu 14.04
  Failed tests:  28, 31
./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21 Failed:
1)  == CentOS 7 and Ubuntu 14.04
  Failed test:  15
./tests/bugs/bug-887145.t   (Wstat: 0 Tests: 31 Failed:
5)  == Only on Ubuntu 14.04
  Failed tests:  20-23, 25
./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10 Failed:
2)  == CentOS 7 and Ubuntu 14.04
  Failed tests:  8-9
Files=124, Tests=2031, 2648 wallclock secs ( 1.62 usr  0.34 sys + 204.62
cusr 217.20 csys = 423.78 CPU)
Result: FAIL

Testcase tests/bugs/bug-905864.t had an issue with gcc compilation, and the
change below works on Ubuntu (-lpthread is moved to the end):

gcc -g3  $(dirname $0)/bug-905864.c -o $(dirname $0)/bug-905864 -lpthread

Please find the link http://ur1.ca/iawdm (http://fpaste.org/139530/59562614/)
where the trace of each failed testcase is available.

Let me know the possible fixes to the test cases.

Thanks,
Kiran.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Ubuntu 14.04: Gluster Test Framework testcases failure

2014-10-06 Thread Kiran Patil
Testcase tests/bugs/bug-887145.t fails due to a permission issue.

Here is a snippet,

root@fractal-0025:/home/kiran/glusterfs# touch /mnt/glusterfs/0/dir/file
touch: cannot touch '/mnt/glusterfs/0/dir/file': Permission denied
root@fractal-0025:/home/kiran/glusterfs#
root@fractal-0025:/home/kiran/glusterfs# ls -lh /mnt/glusterfs/0/dir/
total 0
root@fractal-0025:/home/kiran/glusterfs# ls -ld /mnt/glusterfs/0/dir/
drwxr-xr-x 2 nfsnobody nfsnobody 2 Oct  6 18:15 /mnt/glusterfs/0/dir/

Why is this not allowed on Ubuntu but allowed on CentOS?

Thanks.

On Mon, Oct 6, 2014 at 5:22 PM, Kiran Patil kirantpa...@gmail.com wrote:

 Hello,


 http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
 page has been updated with Ubuntu 14.04 steps and now you can run the test
 suite on ubuntu.

 I ran test suite and below are results.

 Gluster version: v3.4.5

 OS: Ubuntu 14.04 LTS

 Test Summary Report
 ---
 ./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39
 Failed: 2)  == CentOS 7 and Ubuntu 14.04
   Failed tests:  28, 31
 ./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21
 Failed: 1)  == CentOS 7 and Ubuntu 14.04
   Failed test:  15
 ./tests/bugs/bug-887145.t   (Wstat: 0 Tests: 31
 Failed: 5)  == Only on Ubuntu 14.04
   Failed tests:  20-23, 25
 ./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10
 Failed: 2)  == CentOS 7 and Ubuntu 14.04
   Failed tests:  8-9
 Files=124, Tests=2031, 2648 wallclock secs ( 1.62 usr  0.34 sys + 204.62
 cusr 217.20 csys = 423.78 CPU)
 Result: FAIL

 Testcase tests/bugs/bug-905864.t had a issue with gcc compilation and
 below change works in ubuntu ( -lpthread is pushed to the end).

 gcc -g3  $(dirname $0)/bug-905864.c -o $(dirname $0)/bug-905864 -lpthread

 Please find the link http://ur1.ca/iawdm (
 http://fpaste.org/139530/59562614/) where the trace of each failed
 testcase is available.

 Let me know the possible fixes to the test cases.

 Thanks,
 Kiran.

___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-devel] CentOS 7: Gluster Test Framework testcases failure

2014-09-27 Thread Kiran Patil
I ran each testcase with DEBUG=1 and pasted the traces at http://ur1.ca/i8xay
(http://fpaste.org/136940/).

This time I ran the testcases keeping the default paths, and now
tests/bugs/bug-861542.t passes.

The /var/run/gluster directory needs to be created on reboot.
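
On CentOS 7, /var/run is a tmpfs, so the directory disappears at every boot.
One way to recreate it automatically is a systemd-tmpfiles drop-in, sketched
below:

# /etc/tmpfiles.d/gluster.conf -- recreate the run directory at boot
d /run/gluster 0755 root root -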

Test Setup:
-
CentOS 7 : 3.10.0-123.8.1.el7.x86_64

gluster --version : glusterfs 3.4.5 built on Jul 24 2014 19:14:13

Zfs : zfs-0.6.3

glusterfs testcases version :
git branch : * (detached from v3.4.5)


Test Summary Report
---
./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39 Failed:
2)
  Failed tests:  28, 31
./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21 Failed:
1)
  Failed test:  15
./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10 Failed:
2)
  Failed tests:  8-9
./tests/bugs/bug-913555.t   (Wstat: 0 Tests: 11 Failed:
4)
  Failed tests:  4-6, 9
./tests/bugs/bug-948686.t   (Wstat: 0 Tests: 19 Failed:
8)
  Failed tests:  5-7, 9, 11, 13-15
./tests/bugs/bug-948729/bug-948729-force.t  (Wstat: 0 Tests: 35 Failed:
6)
  Failed tests:  29-31, 33-35
./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35
Failed: 6)
  Failed tests:  29-31, 33-35
./tests/bugs/bug-948729/bug-948729.t(Wstat: 0 Tests: 23 Failed:
2)
  Failed tests:  19, 23
Files=123, Tests=2019, 4834 wallclock secs ( 1.62 usr  0.25 sys + 258.83
cusr 201.05 csys = 461.75 CPU)
Result: FAIL

Thanks,
Kiran.

On Fri, Sep 26, 2014 at 9:32 PM, Lalatendu Mohanty lmoha...@redhat.com
wrote:

 On 09/26/2014 02:59 AM, Justin Clift wrote:

 On 25/09/2014, at 9:28 PM, Lalatendu Mohanty wrote:
 snip

 Have we published somewhere which distributions or OS versions we are
 running regression tests ? if not lets compile it and publish as this will
 help community to understand which os distributions are part of the
 regression testing.

 The best we have so far is probably this:

 http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework


  Do we have plans to run regression on a variety of distributions? Not
 sure how difficult or complex it is to maintain.

 The primary OS at the moment is CentOS 6.x (mainly due to it
 being the primary OS for GlusterFS I think).

 Manu and Harsha have been going through the regression tests
 recently, making them more cross platform in order to run on
 the BSDs.

 This effort has also highlighted some interesting Linux
 specific behaviour in the main GlusterFS code base, and led
 to fixes there.

 In short, we're all for running the regression tests on as
 many distributions as possible.  If Community members want
 to put VM's or something online (medium-long term), I'd be
 happy to hook our Jenkins infrastructure up to them to
 automatically run tests on them.

 Is that kind of what you're asking? :)


 Yup, I will try to get a CentOS 7 instance for running regression tests :)

 -Lala

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] CentOS 7: Gluster Test Framework testcases failure

2014-09-27 Thread Kiran Patil
Testcase tests/bugs/bug-913555.t does not show the peers; please find the
details at http://ur1.ca/i8ybk (http://fpaste.org/136968/)

On Sat, Sep 27, 2014 at 3:26 PM, Kiran Patil kirantpa...@gmail.com wrote:

 Now on XFS, the Test Summary Report is same as running on ZFS except test
 case bug-953887.t failure.

 Test case tests/bugs/bug-953887.t expects force at end of line, TEST
 gluster volume add-brick $V0 $H0:$B0/${V0}{2,3} and it passed once force is
 substituted at the end of the line.

 Test case tests/bugs/bug-767095.t failed due to path issue and now it no
 more fails.

 So the failures are no more specific to on disk filesystems such as ZFS
 and XFS but glusterfs.

 Thanks,
 Kiran.

 On Sat, Sep 27, 2014 at 1:30 PM, Kiran Patil kirantpa...@gmail.com
 wrote:

 I ran each testcases with DEBUG=1 and pasted at http://ur1.ca/i8xay (
 http://fpaste.org/136940/)

 This time I ran the testcases by keeping default paths and now
 tests/bugs/bug-861542.t passes

 Need to create /var/run/gluster directory on reboot

 Test Setup:
 -
 CentOS 7 : 3.10.0-123.8.1.el7.x86_64

 gluster --version : glusterfs 3.4.5 built on Jul 24 2014 19:14:13

 Zfs : zfs-0.6.3

 glusterfs testcases version :
 git branch : * (detached from v3.4.5)


 Test Summary Report
 ---
 ./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39
 Failed: 2)
   Failed tests:  28, 31
 ./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21
 Failed: 1)
   Failed test:  15
 ./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10
 Failed: 2)
   Failed tests:  8-9
 ./tests/bugs/bug-913555.t   (Wstat: 0 Tests: 11
 Failed: 4)
   Failed tests:  4-6, 9
 ./tests/bugs/bug-948686.t   (Wstat: 0 Tests: 19
 Failed: 8)
   Failed tests:  5-7, 9, 11, 13-15
 ./tests/bugs/bug-948729/bug-948729-force.t  (Wstat: 0 Tests: 35
 Failed: 6)
   Failed tests:  29-31, 33-35
 ./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35
 Failed: 6)
   Failed tests:  29-31, 33-35
 ./tests/bugs/bug-948729/bug-948729.t(Wstat: 0 Tests: 23
 Failed: 2)
   Failed tests:  19, 23
 Files=123, Tests=2019, 4834 wallclock secs ( 1.62 usr  0.25 sys + 258.83
 cusr 201.05 csys = 461.75 CPU)
 Result: FAIL

 Thanks,
 Kiran.

 On Fri, Sep 26, 2014 at 9:32 PM, Lalatendu Mohanty lmoha...@redhat.com
 wrote:

 On 09/26/2014 02:59 AM, Justin Clift wrote:

 On 25/09/2014, at 9:28 PM, Lalatendu Mohanty wrote:
 snip

 Have we published somewhere which distributions or OS versions we are
 running regression tests ? if not lets compile it and publish as this will
 help community to understand which os distributions are part of the
 regression testing.

 The best we have so far is probably this:

 http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework


  Do we have plans to run regression on a variety of distributions? Not
 sure how difficult or complex it is to maintain.

 The primary OS at the moment is CentOS 6.x (mainly due to it
 being the primary OS for GlusterFS I think).

 Manu and Harsha have been going through the regression tests
 recently, making them more cross platform in order to run on
 the BSDs.

 This effort has also highlighted some interesting Linux
 specific behaviour in the main GlusterFS code base, and led
 to fixes there.

 In short, we're all for running the regression tests on as
 many distributions as possible.  If Community members want
 to put VM's or something online (medium-long term), I'd be
 happy to hook our Jenkins infrastructure up to them to
 automatically run tests on them.

 Is that kind of what you're asking? :)


 Yup, I will try to get a CentOS 7 instance for running regression tests
 :)

 -Lala




___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-users] CentOS 7: Gluster Test Framework testcases failure

2014-09-27 Thread Kiran Patil
Now on XFS, the Test Summary Report is the same as on ZFS, except for the
test case bug-953887.t failure.

Test case tests/bugs/bug-953887.t expects force at the end of the line TEST
gluster volume add-brick $V0 $H0:$B0/${V0}{2,3}, and it passed once force
was appended at the end of the line.
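
For clarity, the fixed test line looks like this (a sketch; $V0, $H0 and
$B0 are variables defined by the test framework's include.rc):

TEST gluster volume add-brick $V0 $H0:$B0/${V0}{2,3} force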

Test case tests/bugs/bug-767095.t failed due to a path issue and no longer
fails.

So the failures are no longer specific to on-disk filesystems such as ZFS
and XFS, but to glusterfs itself.

Thanks,
Kiran.

On Sat, Sep 27, 2014 at 1:30 PM, Kiran Patil kirantpa...@gmail.com wrote:

 I ran each testcases with DEBUG=1 and pasted at http://ur1.ca/i8xay (
 http://fpaste.org/136940/)

 This time I ran the testcases by keeping default paths and now
 tests/bugs/bug-861542.t passes

 Need to create /var/run/gluster directory on reboot

 Test Setup:
 -
 CentOS 7 : 3.10.0-123.8.1.el7.x86_64

 gluster --version : glusterfs 3.4.5 built on Jul 24 2014 19:14:13

 Zfs : zfs-0.6.3

 glusterfs testcases version :
 git branch : * (detached from v3.4.5)


 Test Summary Report
 ---
 ./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39
 Failed: 2)
   Failed tests:  28, 31
 ./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21
 Failed: 1)
   Failed test:  15
 ./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10
 Failed: 2)
   Failed tests:  8-9
 ./tests/bugs/bug-913555.t   (Wstat: 0 Tests: 11
 Failed: 4)
   Failed tests:  4-6, 9
 ./tests/bugs/bug-948686.t   (Wstat: 0 Tests: 19
 Failed: 8)
   Failed tests:  5-7, 9, 11, 13-15
 ./tests/bugs/bug-948729/bug-948729-force.t  (Wstat: 0 Tests: 35
 Failed: 6)
   Failed tests:  29-31, 33-35
 ./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35
 Failed: 6)
   Failed tests:  29-31, 33-35
 ./tests/bugs/bug-948729/bug-948729.t(Wstat: 0 Tests: 23
 Failed: 2)
   Failed tests:  19, 23
 Files=123, Tests=2019, 4834 wallclock secs ( 1.62 usr  0.25 sys + 258.83
 cusr 201.05 csys = 461.75 CPU)
 Result: FAIL

 Thanks,
 Kiran.

 On Fri, Sep 26, 2014 at 9:32 PM, Lalatendu Mohanty lmoha...@redhat.com
 wrote:

 On 09/26/2014 02:59 AM, Justin Clift wrote:

 On 25/09/2014, at 9:28 PM, Lalatendu Mohanty wrote:
 snip

 Have we published somewhere which distributions or OS versions we are
 running regression tests ? if not lets compile it and publish as this will
 help community to understand which os distributions are part of the
 regression testing.

 The best we have so far is probably this:

 http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework


  Do we have plans to run regression on a variety of distributions? Not
 sure how difficult or complex it is to maintain.

 The primary OS at the moment is CentOS 6.x (mainly due to it
 being the primary OS for GlusterFS I think).

 Manu and Harsha have been going through the regression tests
 recently, making them more cross platform in order to run on
 the BSDs.

 This effort has also highlighted some interesting Linux
 specific behaviour in the main GlusterFS code base, and led
 to fixes there.

 In short, we're all for running the regression tests on as
 many distributions as possible.  If Community members want
 to put VM's or something online (medium-long term), I'd be
 happy to hook our Jenkins infrastructure up to them to
 automatically run tests on them.

 Is that kind of what you're asking? :)


 Yup, I will try to get a CentOS 7 instance for running regression tests :)

 -Lala



___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-devel] [Gluster-users] CentOS 7: Gluster Test Framework testcases failure

2014-09-26 Thread Kiran Patil
I forgot to mention the other issues I faced and their temporary fixes:

1) Most of the testcases were failing because the /var/run/gluster
directory was not found.

grep: /var/run/gluster/: No such file or directory
ls: cannot access /var/run/gluster: No such file or directory

I created the directory (mkdir /var/run/gluster), reran the tests, and the
testcases that were failing earlier passed.

Justin, is CentOS 7 considered for regression testing or not?

How about Fedora 19/20? Which one do you recommend?

2) ./tests/bugs/../include.rc: line 123: setfattr: command not found

The setfattr command is part of the attr package: # yum install attr
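
A quick sanity check that extended attributes work once attr is installed
(a sketch; the file path and key name are made up):

touch /tmp/xattr-test
setfattr -n user.demo -v 1 /tmp/xattr-test   # set a user xattr
getfattr -n user.demo /tmp/xattr-test        # read it back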

Thanks,
Kiran.



On Thu, Sep 25, 2014 at 7:01 PM, Kiran Patil kirantpa...@gmail.com wrote:

 The below testcases failing are related to xfs, cluster and others..

 The hardcoded ones I have fixed temporarily by providing the absolute
 pathname.

 Testcase /tests/bugs/bug-767095.t is fixed by changing awk parameter $5 to
 $4.

 Testcase tests/bugs/bug-861542.t is failing at EXPECT N/A port_field $V0
 '0'; # volume status
 If I change its value to to '1' it passes, is that correct ?

 I will keep posting the new findings and possible fixes.

 Test Summary Report
 
 ./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39
 Failed: 2)
   Failed tests:  28, 31
 ./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21
 Failed: 1)
   Failed test:  15
 ./tests/bugs/bug-861542.t   (Wstat: 0 Tests: 13
 Failed: 1)
   Failed test:  10
 ./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10
 Failed: 2)
   Failed tests:  8-9
 ./tests/bugs/bug-913555.t   (Wstat: 0 Tests: 11
 Failed: 4)
   Failed tests:  4-6, 9
 ./tests/bugs/bug-948686.t   (Wstat: 0 Tests: 19
 Failed: 8)
   Failed tests:  5-7, 9, 11, 13-15
 ./tests/bugs/bug-948729/bug-948729-force.t  (Wstat: 0 Tests: 35
 Failed: 14)
   Failed tests:  15, 17, 19, 21, 24-27, 29-31, 33-35
 ./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35
 Failed: 14)
   Failed tests:  15, 17, 19, 21, 24-27, 29-31, 33-35
 ./tests/bugs/bug-948729/bug-948729.t(Wstat: 0 Tests: 23
 Failed: 4)
   Failed tests:  12, 15, 19, 23
 Files=124, Tests=2031, 4470 wallclock secs ( 1.57 usr  0.24 sys + 259.31
 cusr 220.81 csys = 481.93 CPU)
 Result: FAIL

 Thanks,
 Kiran.

 On Thu, Sep 25, 2014 at 2:17 PM, Niels de Vos nde...@redhat.com wrote:

 On Thu, Sep 25, 2014 at 12:49:21PM +0530, Kiran Patil wrote:
  I installed the 'psmisc' package and it installed killall command and
  reverted pkill to killall in include.rc file.
 
  Testcases started executing properly and will send tests failure report
  soon.

 Thanks, I've added 'psmisc' to the list of packages in the wiki:
 -
 http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework#Preparation_steps_for_CentOS_7_.28only.29

 Niels

 
  Thanks,
  Kiran.
 
  On Thu, Sep 25, 2014 at 12:32 PM, Niels de Vos nde...@redhat.com
 wrote:
 
   On Thu, Sep 25, 2014 at 10:45:49AM +0530, Kiran Patil wrote:
pkill expects only one pattern, so I did as below in
 tests/include.rc
   file
and test cases started working fine.
   
 pkill glusterfs 2>/dev/null || true;
 pkill glusterfsd 2>/dev/null || true;
 pkill glusterd 2>/dev/null || true;
  
   Sorry, I'm a little late to the party, but the 'killall' command
 should
   be available for CentOS-7 too. It seems to be part of the 'psmisc'
   package. I guess we should add this as a dependency on the wiki page.
  
   Could you check if that works for you too? If not, and you are
   interested, I'll help you posting a patch to make the pkill change.
  
   Thanks,
   Niels
  
  
   
On Wed, Sep 24, 2014 at 6:48 PM, Justin Clift jus...@gluster.org
   wrote:
   
 On 24/09/2014, at 2:07 PM, Kiran Patil wrote:
  Some of the reasons I have found so far are as below,
 
  1. Cleanup operation does not work since killall is not part of
   CentOS 7
 
  2. I used pkill and still testcases fail at first step  Ex: TEST
   glusterd
 
  3. Subsequent running of testcases does not proceed and hangs
 at the
 first testcase (tests/basic/bd.t)

 This sounds like there could be a few challenges then.  I'm
 setting up
 a new Fedora 20 (or 21 alpha) VM in Rackspace for running btrfs
   regression
 tests on.

 Guessing that will experience these same problems as your CentOS 7
 test run, so I'm definitely interested in this too.

 + Justin

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift


  
___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman

Re: [Gluster-devel] [Gluster-users] CentOS 7: Gluster Test Framework testcases failure

2014-09-26 Thread Kiran Patil


 Please keep testing CentOS 7... if you have the time/inclination
 to delve into fixing the failures.


 Kiran - thanks for your report. Would it be possible to determine what's
 causing the tests to fail in your setup? Running tests with DEBUG=1 or set
 -x in the failing testcases will help us understand the problem better.


Vijay - where should I upload each testcase output with DEBUG=1 enabled ?

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] [Gluster-users] CentOS 7: Gluster Test Framework testcases failure

2014-09-25 Thread Kiran Patil
I installed the 'psmisc' package, which provides the killall command, and
reverted pkill back to killall in the include.rc file.

Testcases started executing properly and will send tests failure report
soon.
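
For anyone else hitting the missing killall, the install is simply:

sudo yum install -y psmisc   # psmisc provides killall on CentOS 7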

Thanks,
Kiran.

On Thu, Sep 25, 2014 at 12:32 PM, Niels de Vos nde...@redhat.com wrote:

 On Thu, Sep 25, 2014 at 10:45:49AM +0530, Kiran Patil wrote:
  pkill expects only one pattern, so I did as below in tests/include.rc
 file
  and test cases started working fine.
 
  pkill glusterfs 2>/dev/null || true;
  pkill glusterfsd 2>/dev/null || true;
  pkill glusterd 2>/dev/null || true;

 Sorry, I'm a little late to the party, but the 'killall' command should
 be available for CentOS-7 too. It seems to be part of the 'psmisc'
 package. I guess we should add this as a dependency on the wiki page.

 Could you check if that works for you too? If not, and you are
 interested, I'll help you posting a patch to make the pkill change.

 Thanks,
 Niels


 
  On Wed, Sep 24, 2014 at 6:48 PM, Justin Clift jus...@gluster.org
 wrote:
 
   On 24/09/2014, at 2:07 PM, Kiran Patil wrote:
Some of the reasons I have found so far are as below,
   
1. Cleanup operation does not work since killall is not part of
 CentOS 7
   
2. I used pkill and still testcases fail at first step  Ex: TEST
 glusterd
   
3. Subsequent running of testcases does not proceed and hangs at the
   first testcase (tests/basic/bd.t)
  
   This sounds like there could be a few challenges then.  I'm setting up
   a new Fedora 20 (or 21 alpha) VM in Rackspace for running btrfs
 regression
   tests on.
  
   Guessing that will experience these same problems as your CentOS 7
   test run, so I'm definitely interested in this too.
  
   + Justin
  
   --
   GlusterFS - http://www.gluster.org
  
   An open source, distributed file system scaling to several
   petabytes, and handling thousands of clients.
  
   My personal twitter: twitter.com/realjustinclift
  
  

  ___
  Gluster-users mailing list
  gluster-us...@gluster.org
  http://supercolony.gluster.org/mailman/listinfo/gluster-users


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] [Gluster-users] CentOS 7: Gluster Test Framework testcases failure

2014-09-25 Thread Kiran Patil
The failing testcases below are related to XFS, cluster operations, and other areas.

The hardcoded paths I have fixed temporarily by providing the absolute
pathname.

Testcase /tests/bugs/bug-767095.t is fixed by changing the awk field
parameter from $5 to $4.
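
A sketch of what such a change amounts to, assuming the test parses
columnar command output whose layout shifted on CentOS 7 (the command
below is hypothetical):

some_command | awk '{print $5}'   # field position on the old layout
some_command | awk '{print $4}'   # same value sits one column earlier on CentOS 7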

Testcase tests/bugs/bug-861542.t is failing at EXPECT N/A port_field $V0
'0'; # volume status
If I change its expected value to '1' it passes; is that correct?

I will keep posting new findings and possible fixes.

Test Summary Report

./tests/bugs/bug-802417.t   (Wstat: 0 Tests: 39 Failed: 2)
  Failed tests:  28, 31
./tests/bugs/bug-821056.t   (Wstat: 0 Tests: 21 Failed: 1)
  Failed test:  15
./tests/bugs/bug-861542.t   (Wstat: 0 Tests: 13 Failed: 1)
  Failed test:  10
./tests/bugs/bug-908146.t   (Wstat: 0 Tests: 10 Failed: 2)
  Failed tests:  8-9
./tests/bugs/bug-913555.t   (Wstat: 0 Tests: 11 Failed: 4)
  Failed tests:  4-6, 9
./tests/bugs/bug-948686.t   (Wstat: 0 Tests: 19 Failed: 8)
  Failed tests:  5-7, 9, 11, 13-15
./tests/bugs/bug-948729/bug-948729-force.t  (Wstat: 0 Tests: 35 Failed: 14)
  Failed tests:  15, 17, 19, 21, 24-27, 29-31, 33-35
./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35 Failed: 14)
  Failed tests:  15, 17, 19, 21, 24-27, 29-31, 33-35
./tests/bugs/bug-948729/bug-948729.t (Wstat: 0 Tests: 23 Failed: 4)
  Failed tests:  12, 15, 19, 23
Files=124, Tests=2031, 4470 wallclock secs ( 1.57 usr  0.24 sys + 259.31 cusr 220.81 csys = 481.93 CPU)
Result: FAIL

Thanks,
Kiran.

On Thu, Sep 25, 2014 at 2:17 PM, Niels de Vos nde...@redhat.com wrote:

 On Thu, Sep 25, 2014 at 12:49:21PM +0530, Kiran Patil wrote:
  I installed the 'psmisc' package and it installed killall command and
  reverted pkill to killall in include.rc file.
 
  Testcases started executing properly and will send tests failure report
  soon.

 Thanks, I've added 'psmisc' to the list of packages in the wiki:
 -
 http://www.gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework#Preparation_steps_for_CentOS_7_.28only.29

 Niels

 
  Thanks,
  Kiran.
 
  On Thu, Sep 25, 2014 at 12:32 PM, Niels de Vos nde...@redhat.com
 wrote:
 
   On Thu, Sep 25, 2014 at 10:45:49AM +0530, Kiran Patil wrote:
pkill expects only one pattern, so I did as below in tests/include.rc
   file
and test cases started working fine.
   
pkill  glusterfs 2/dev/null || true;
pkill  glusterfsd 2/dev/null || true;
pkill  glusterd 2/dev/null || true;
  
   Sorry, I'm a little late to the party, but the 'killall' command should
   be available for CentOS-7 too. It seems to be part of the 'psmisc'
   package. I guess we should add this as a dependency on the wiki page.
  
   Could you check if that works for you too? If not, and you are
   interested, I'll help you posting a patch to make the pkill change.
  
   Thanks,
   Niels
  
  
   
On Wed, Sep 24, 2014 at 6:48 PM, Justin Clift jus...@gluster.org
   wrote:
   
 On 24/09/2014, at 2:07 PM, Kiran Patil wrote:
  Some of the reasons I have found so far are as below,
 
  1. Cleanup operation does not work since killall is not part of
   CentOS 7
 
  2. I used pkill and still testcases fail at first step  Ex: TEST
   glusterd
 
  3. Subsequent running of testcases does not proceed and hangs at
 the
 first testcase (tests/basic/bd.t)

 This sounds like there could be a few challenges then.  I'm
 setting up
 a new Fedora 20 (or 21 alpha) VM in Rackspace for running btrfs
   regression
 tests on.

 Guessing that will experience these same problems as your CentOS 7
 test run, so I'm definitely interested in this too.

 + Justin

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift


  
___
Gluster-users mailing list
gluster-us...@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users
  
  

___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



Re: [Gluster-devel] CentOS 7: Gluster Test Framework testcases failure

2014-09-24 Thread Kiran Patil
Some of the reasons I have found so far are as below:

1. The cleanup operation does not work, since killall is not installed by default on CentOS 7 (see the sketch after this list)

2. I used pkill instead, and the testcases still fail at the first step, e.g. TEST glusterd

3. Subsequent runs of the testcases do not proceed and hang at the first
testcase (tests/basic/bd.t)
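
A cleanup sketch that works whether or not psmisc/killall is present (the
exact-match flag and the fallback ordering are my assumptions, not what
the framework ships):

for p in glusterfs glusterfsd glusterd; do
    killall "$p" 2>/dev/null || pkill -x "$p" 2>/dev/null || true
done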

Please share ideas to resolve the issue.

Thanks,
Kiran.

On Wed, Sep 24, 2014 at 3:25 PM, Kiran Patil kirantpa...@gmail.com wrote:

 Hi,

 I am running Gluster Test Framework on ZFS, XFS and most of the testcases
 are failing.

 Gluster version : v3.4.5

 Operating System: CentOS 7

 Please let us know how to fix it ?

 What could be the major changes in CentOS 7 which is causing this issue ?
 or
 Is  gluster is the culprit here ?

 I have pasted here the output of run-tests.sh of both XFS and ZFS with
 gluster.

 Test Failure on ZFS:
 -
 [root@fractal-2590f1 glusterfs]# ./run-tests.sh
 [23:03:08] ./tests/basic/bd.t  mkdir:
 cannot create directory '/fractalpool/normal/mnt/glusterfs/0': File exists
 [23:03:08] ./tests/basic/bd.t  1/26
 not ok 1
 [23:03:08] ./tests/basic/bd.t  Failed
 25/26 subtests
 [23:03:09] ./tests/basic/mount.t . 1/29
 not ok 1
 not ok 6 Got Started instead of Created
 volume start: patchy: failed: Volume patchy already started
 [23:03:09] ./tests/basic/mount.t . 7/29 not ok
 7
 volume set: failed: Commit failed on localhost. Please check the log file
 for more details.
 not ok 9
 mount.nfs: requested NFS version or transport protocol is not supported
 [23:03:09] ./tests/basic/mount.t . 16/29 not
 ok 16
 touch: cannot touch '/fractalpool/normal/mnt/glusterfs/0/newfile':
 Transport endpoint is not connected
 not ok 19
 stat: cannot stat '/fractalpool/normal/mnt/glusterfs/1/newfile': Transport
 endpoint is not connected
 not ok 20
 stat: cannot stat '/fractalpool/normal/mnt/nfs/0/newfile': No such file or
 directory
 not ok 21
 rm: cannot remove '/fractalpool/normal/mnt/nfs/0/newfile': No such file or
 directory
 not ok 23
 [23:03:09] ./tests/basic/mount.t . 28/29 not
 ok 29
 [23:03:09] ./tests/basic/mount.t . Failed
 10/29 subtests
 [23:03:16] ./tests/basic/posixonly.t . ok   1
 s
 [23:03:16] ./tests/basic/pump.t .. 1/12
 not ok 1
 volume start: patchy: failed: Failed to find brick directory
 /fractalpool/normal/d/backends/patchy1 for volume patchy. Reason : No such
 file or directory
 [23:03:16] ./tests/basic/pump.t .. 4/12 not ok
 4
 ./tests/basic/pump.t: line 13: cd: /fractalpool/normal/mnt/glusterfs/0:
 Transport endpoint is not connected
 [23:03:16] ./tests/basic/pump.t .. 6/12 volume
 replace-brick: failed: volume: patchy is not started
 [23:03:16] ./tests/basic/pump.t .. 7/12 not ok
 7
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 volume replace-brick: failed: volume: patchy is not started
 ^C

 Test Failure on XFS:
 --
 [root@fractal-2590f1 glusterfs]# ./run-tests.sh
 [23:06:15] ./tests/basic/bd.t  1/26
 not ok 1
 volume start: patchy: failed: Failed to find brick directory
 /fractalpool/normal/d/backends/patchy1 for volume patchy. Reason : No such
 file or directory
 not ok 6
 [23:06:15] ./tests/basic/bd.t  7/26 not ok
 7 Got Created instead of Started
 touch: cannot touch '/mnt/glusterfs/0/__bd_vg/lv1': Transport endpoint is
 not connected
 not ok 9
 stat: cannot stat '/dev/__bd_vg/lv1': No such file or directory
 not ok 10
 rm: cannot remove '/mnt/glusterfs/0/__bd_vg/lv1': Transport endpoint is
 not connected
 not ok 11
 touch: cannot touch '/mnt/glusterfs/0/__bd_vg/lv1': Transport endpoint is
 not connected
 not ok 13
 truncate: cannot open '/mnt/glusterfs/0/__bd_vg/lv1' for writing:
 Transport endpoint is not connected
 not ok 14
 ln: failed to access '/mnt/glusterfs/0/__bd_vg/lv2': Transport endpoint is
 not connected
 not ok 15
 stat: cannot stat '/dev/__bd_vg/lv2': No such file or directory
 not ok 16
 rm: cannot remove '/mnt/glusterfs/0/__bd_vg/lv1': Transport

Re: [Gluster-devel] CentOS 7: Gluster Test Framework testcases failure

2014-09-24 Thread Kiran Patil
pkill expects only one pattern, so I changed the tests/include.rc file as
below, and the test cases started working fine.

pkill glusterfs 2>/dev/null || true;
pkill glusterfsd 2>/dev/null || true;
pkill glusterd 2>/dev/null || true;
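
One caveat worth noting: pkill matches a substring of the process name by
default, so pkill glusterfs also matches glusterfsd. The cleanup wants all
three gone anyway, so the overlap is harmless here, but an exact-match
variant (a sketch) makes the intent explicit:

pkill -x glusterfs 2>/dev/null || true;    # -x: match the exact process name
pkill -x glusterfsd 2>/dev/null || true;
pkill -x glusterd 2>/dev/null || true;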

Thanks,
Kiran.

On Wed, Sep 24, 2014 at 6:48 PM, Justin Clift jus...@gluster.org wrote:

 On 24/09/2014, at 2:07 PM, Kiran Patil wrote:
  Some of the reasons I have found so far are as below,
 
  1. Cleanup operation does not work since killall is not part of CentOS 7
 
  2. I used pkill and still testcases fail at first step  Ex: TEST glusterd
 
  3. Subsequent running of testcases does not proceed and hangs at the
 first testcase (tests/basic/bd.t)

 This sounds like there could be a few challenges then.  I'm setting up
 a new Fedora 20 (or 21 alpha) VM in Rackspace for running btrfs regression
 tests on.

 Guessing that will experience these same problems as your CentOS 7
 test run, so I'm definitely interested in this too.

 + Justin

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift


___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-users] CentOS 7: Gluster Test Framework testcases failure

2014-09-24 Thread Kiran Patil
Hi,

I am running the Gluster Test Framework on ZFS and XFS, and most of the
testcases are failing.

Gluster version : v3.4.5

Operating System: CentOS 7

Please let us know how to fix this.

What major changes in CentOS 7 could be causing this issue, or is gluster
the culprit here?

I have pasted below the output of run-tests.sh with gluster on both XFS
and ZFS.

Test Failure on ZFS:
-
[root@fractal-2590f1 glusterfs]# ./run-tests.sh
[23:03:08] ./tests/basic/bd.t  mkdir:
cannot create directory '/fractalpool/normal/mnt/glusterfs/0': File exists
[23:03:08] ./tests/basic/bd.t  1/26
not ok 1
[23:03:08] ./tests/basic/bd.t  Failed 25/26
subtests
[23:03:09] ./tests/basic/mount.t . 1/29
not ok 1
not ok 6 Got Started instead of Created
volume start: patchy: failed: Volume patchy already started
[23:03:09] ./tests/basic/mount.t . 7/29 not ok
7
volume set: failed: Commit failed on localhost. Please check the log file
for more details.
not ok 9
mount.nfs: requested NFS version or transport protocol is not supported
[23:03:09] ./tests/basic/mount.t . 16/29 not ok
16
touch: cannot touch '/fractalpool/normal/mnt/glusterfs/0/newfile':
Transport endpoint is not connected
not ok 19
stat: cannot stat '/fractalpool/normal/mnt/glusterfs/1/newfile': Transport
endpoint is not connected
not ok 20
stat: cannot stat '/fractalpool/normal/mnt/nfs/0/newfile': No such file or
directory
not ok 21
rm: cannot remove '/fractalpool/normal/mnt/nfs/0/newfile': No such file or
directory
not ok 23
[23:03:09] ./tests/basic/mount.t . 28/29 not ok
29
[23:03:09] ./tests/basic/mount.t . Failed 10/29
subtests
[23:03:16] ./tests/basic/posixonly.t . ok   1 s
[23:03:16] ./tests/basic/pump.t .. 1/12
not ok 1
volume start: patchy: failed: Failed to find brick directory
/fractalpool/normal/d/backends/patchy1 for volume patchy. Reason : No such
file or directory
[23:03:16] ./tests/basic/pump.t .. 4/12 not ok
4
./tests/basic/pump.t: line 13: cd: /fractalpool/normal/mnt/glusterfs/0:
Transport endpoint is not connected
[23:03:16] ./tests/basic/pump.t .. 6/12 volume
replace-brick: failed: volume: patchy is not started
[23:03:16] ./tests/basic/pump.t .. 7/12 not ok
7
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
volume replace-brick: failed: volume: patchy is not started
^C

Test Failure on XFS:
--
[root@fractal-2590f1 glusterfs]# ./run-tests.sh
[23:06:15] ./tests/basic/bd.t  1/26
not ok 1
volume start: patchy: failed: Failed to find brick directory
/fractalpool/normal/d/backends/patchy1 for volume patchy. Reason : No such
file or directory
not ok 6
[23:06:15] ./tests/basic/bd.t  7/26 not ok
7 Got Created instead of Started
touch: cannot touch '/mnt/glusterfs/0/__bd_vg/lv1': Transport endpoint is
not connected
not ok 9
stat: cannot stat '/dev/__bd_vg/lv1': No such file or directory
not ok 10
rm: cannot remove '/mnt/glusterfs/0/__bd_vg/lv1': Transport endpoint is not
connected
not ok 11
touch: cannot touch '/mnt/glusterfs/0/__bd_vg/lv1': Transport endpoint is
not connected
not ok 13
truncate: cannot open '/mnt/glusterfs/0/__bd_vg/lv1' for writing: Transport
endpoint is not connected
not ok 14
ln: failed to access '/mnt/glusterfs/0/__bd_vg/lv2': Transport endpoint is
not connected
not ok 15
stat: cannot stat '/dev/__bd_vg/lv2': No such file or directory
not ok 16
rm: cannot remove '/mnt/glusterfs/0/__bd_vg/lv1': Transport endpoint is not
connected
rm: cannot remove '/mnt/glusterfs/0/__bd_vg/lv2': Transport endpoint is not
connected
Volume patchy is not started
not ok 17
stat: cannot stat '/dev/__bd_vg/lv1': No such file or directory
not ok 18
Volume patchy is not started
not ok 19
stat: cannot stat '/dev/__bd_vg/lv2': No such file or directory
not ok 20
Volume patchy is not started
not ok 21
Volume patchy is not started
not ok 22
stat: cannot stat '/dev/__bd_vg/lv2': No such file or directory
not ok 23
rm: cannot remove '/mnt/glusterfs/0/__bd_vg/lv2': Transport 


[Gluster-devel] Unable to run Gluster Test Framework on CentOS 7

2014-09-23 Thread Kiran Patil
Hi,

I followed the steps below to run the tests.

The link
http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
does not have info for CentOS 7, so I tried to follow the same steps with
EPEL pointing to release 7.

Here are the steps for CentOS 7:

1. Install EPEL:

$ sudo yum install -y
http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-1.noarch.rpm

2. Install the CentOS 7.x dependencies:

$ sudo yum install -y --enablerepo=epel cmockery2-devel dbench git
libacl-devel mock nfs-utils perl-Test-Harness yajl xfsprogs
$ sudo yum install -y --enablerepo=epel python-webob1.0
python-paste-deploy1.5 python-sphinx10 redhat-rpm-config
== The missing packages are
No package python-webob1.0 available.
No package python-paste-deploy1.5 available.
No package python-sphinx10 available.

$ sudo yum install -y --enablerepo=epel autoconf automake bison
dos2unix flex fuse-devel libaio-devel libibverbs-devel \
 librdmacm-devel libtool libxml2-devel lvm2-devel make
openssl-devel pkgconfig \
 python-devel python-eventlet python-netifaces python-paste-deploy \
 python-simplejson python-sphinx python-webob pyxattr
readline-devel rpm-build \
 systemtap-sdt-devel tar

3. Create the mock user

$ sudo useradd -g mock mock
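
If the mock group does not exist yet (it is normally created by the mock
package, though that is an assumption), create it first:

$ getent group mock >/dev/null || sudo groupadd mock   # only if the group is missing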

4. Running the testcases results in the errors below

[root@fractal-c5ac glusterfs]# ./run-tests.sh

... GlusterFS Test Framework ...

Running all the regression test cases
[09:55:02] ./tests/basic/afr/gfid-mismatch.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/gfid-self-heal.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/metadata-self-heal.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/read-subvol-data.t . Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/read-subvol-entry.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/resolve.t .. Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/self-heal.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/sparse-file-self-heal.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/stale-file-lookup.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/bd.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/cdc.t .. Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-12-4.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-3-1.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-4-1.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-5-1.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-5-2.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-6-2.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-7-3.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/nfs.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/self-heal.t . Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/file-snapshot.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/fops-sanity.t .. Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:03] ./tests/basic/gfid-access.t .. Dubious,
test returned 1 (wstat 256, 0x100)

Both glusterd and glusterfsd are running fine

[root@fractal-c5ac glusterfs]# systemctl status glusterd.service
glusterd.service - GlusterFS an clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Tue 2014-09-23 14:45:42 IST; 45min ago
  Process: 12246 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid
(code=exited, status=0/SUCCESS)
 Main PID: 12247 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─12247 /usr/sbin/glusterd -p 

Re: [Gluster-devel] Unable to run Gluster Test Framework on CentOS 7

2014-09-23 Thread Kiran Patil
Hi,

Sorry, I had not checked out the gluster source matching the version I am
running before running the testcases.

It is working fine now.

Thanks,
Kiran.

On Tue, Sep 23, 2014 at 1:09 PM, Kiran Patil kirantpa...@gmail.com wrote:

 Hi,

 I followed the below steps to run tests.

 The link
 http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
 does not have info for CentOS 7 and I tried to follow the same steps with
 epel pointing to release 7

 Here are the steps for CentOS 7:

 1. Install EPEL:

 $ sudo yum install -y 
 http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-1.noarch.rpm

 2. Install the CentOS 7.x dependencies:

 $ sudo yum install -y --enablerepo=epel cmockery2-devel dbench git 
 libacl-devel mock nfs-utils perl-Test-Harness yajl xfsprogs
 $ sudo yum install -y --enablerepo=epel python-webob1.0 
 python-paste-deploy1.5 python-sphinx10 redhat-rpm-config
 == The missing packages are
 No package python-webob1.0 available.
 No package python-paste-deploy1.5 available.
 No package python-sphinx10 available.

 $ sudo yum install -y --enablerepo=epel autoconf automake bison dos2unix flex 
 fuse-devel libaio-devel libibverbs-devel \
  librdmacm-devel libtool libxml2-devel lvm2-devel make openssl-devel 
 pkgconfig \
  python-devel python-eventlet python-netifaces python-paste-deploy \
  python-simplejson python-sphinx python-webob pyxattr readline-devel 
 rpm-build \
  systemtap-sdt-devel tar

 3. Create the mock user

 $ sudo useradd -g mock mock

 4. Running the testcases results in error as below

 [root@fractal-c5ac glusterfs]# ./run-tests.sh

 ... GlusterFS Test Framework ...

 Running all the regression test cases
 [09:55:02] ./tests/basic/afr/gfid-mismatch.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/gfid-self-heal.t ... Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/metadata-self-heal.t ... Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/read-subvol-data.t . Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/read-subvol-entry.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/resolve.t .. Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/self-heal.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/sparse-file-self-heal.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/afr/stale-file-lookup.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/bd.t ... Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/cdc.t .. Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec-12-4.t ... Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec-3-1.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec-4-1.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec-5-1.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec-5-2.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec-6-2.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec-7-3.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/ec.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/nfs.t ... Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/ec/self-heal.t . Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/file-snapshot.t  Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:02] ./tests/basic/fops-sanity.t .. Dubious,
 test returned 1 (wstat 256, 0x100)
 No subtests run
 [09:55:03] ./tests/basic/gfid-access.t .. Dubious,
 test returned 1 (wstat 256, 0x100)

 Both glusterd and glusterfsd are running fine

 [root@fractal-c5ac glusterfs]# systemctl status glusterd.service
 glusterd.service - GlusterFS an clustered file-system server
Loaded: loaded (/usr/lib/systemd

[Gluster-users] Unable to run Gluster Test Framework on CentOS 7

2014-09-23 Thread Kiran Patil
Hi,

I followed the below steps to run tests.

The link
http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
does not have info for CentOS 7 and I tried to follow the same steps with
epel pointing to release 7

Here are the steps for CentOS 7:

1. Install EPEL:

$ sudo yum install -y
http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-1.noarch.rpm

2. Install the CentOS 7.x dependencies:

$ sudo yum install -y --enablerepo=epel cmockery2-devel dbench git
libacl-devel mock nfs-utils perl-Test-Harness yajl xfsprogs
$ sudo yum install -y --enablerepo=epel python-webob1.0
python-paste-deploy1.5 python-sphinx10 redhat-rpm-config
== The missing packages are
No package python-webob1.0 available.
No package python-paste-deploy1.5 available.
No package python-sphinx10 available.

$ sudo yum install -y --enablerepo=epel autoconf automake bison
dos2unix flex fuse-devel libaio-devel libibverbs-devel \
 librdmacm-devel libtool libxml2-devel lvm2-devel make
openssl-devel pkgconfig \
 python-devel python-eventlet python-netifaces python-paste-deploy \
 python-simplejson python-sphinx python-webob pyxattr
readline-devel rpm-build \
 systemtap-sdt-devel tar

3. Create the mock user

$ sudo useradd -g mock mock

4. Running the testcases results in error as below

[root@fractal-c5ac glusterfs]# ./run-tests.sh

... GlusterFS Test Framework ...

Running all the regression test cases
[09:55:02] ./tests/basic/afr/gfid-mismatch.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/gfid-self-heal.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/metadata-self-heal.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/read-subvol-data.t . Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/read-subvol-entry.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/resolve.t .. Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/self-heal.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/sparse-file-self-heal.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/afr/stale-file-lookup.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/bd.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/cdc.t .. Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-12-4.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-3-1.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-4-1.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-5-1.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-5-2.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-6-2.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec-7-3.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/ec.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/nfs.t ... Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/ec/self-heal.t . Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/file-snapshot.t  Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:02] ./tests/basic/fops-sanity.t .. Dubious,
test returned 1 (wstat 256, 0x100)
No subtests run
[09:55:03] ./tests/basic/gfid-access.t .. Dubious,
test returned 1 (wstat 256, 0x100)

Both glusterd and glusterfsd are running fine

[root@fractal-c5ac glusterfs]# systemctl status glusterd.service
glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled)
   Active: active (running) since Tue 2014-09-23 14:45:42 IST; 45min ago
  Process: 12246 ExecStart=/usr/sbin/glusterd -p /run/glusterd.pid
(code=exited, status=0/SUCCESS)
 Main PID: 12247 (glusterd)
   CGroup: /system.slice/glusterd.service
   └─12247 /usr/sbin/glusterd -p 

Re: [Gluster-users] Unable to run Gluster Test Framework on CentOS 7

2014-09-23 Thread Kiran Patil
Hi,

Sorry, I did not checkout the gluster source for the version I am running
to run the testcases.

It is working fine.

Thanks,
Kiran.
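
For anyone hitting the same wall of "Dubious ... No subtests run" failures, a
minimal sketch of the fix described above (the release tag here is an
assumption; match it to whatever glusterfs --version reports):

$ glusterfs --version      # note the installed version, e.g. glusterfs 3.5.2
$ git clone https://github.com/gluster/glusterfs.git && cd glusterfs
$ git checkout v3.5.2      # check out the matching release tag
$ ./run-tests.sh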

On Tue, Sep 23, 2014 at 1:09 PM, Kiran Patil kirantpa...@gmail.com wrote:

 Hi,

 I followed the below steps to run tests.

 The link
 http://gluster.org/community/documentation/index.php/Using_the_Gluster_Test_Framework
 does not have info for CentOS 7, so I tried to follow the same steps with
 EPEL pointing to release 7.

 Here are the steps for CentOS 7:

 1. Install EPEL:

 $ sudo yum install -y 
 http://epel.mirror.net.in/epel/7/x86_64/e/epel-release-7-1.noarch.rpm

 2. Install the CentOS 7.x dependencies:

 $ sudo yum install -y --enablerepo=epel cmockery2-devel dbench git 
 libacl-devel mock nfs-utils perl-Test-Harness yajl xfsprogs
 $ sudo yum install -y --enablerepo=epel python-webob1.0 
 python-paste-deploy1.5 python-sphinx10 redhat-rpm-config
 ==> The missing packages are:
 No package python-webob1.0 available.
 No package python-paste-deploy1.5 available.
 No package python-sphinx10 available.

 $ sudo yum install -y --enablerepo=epel autoconf automake bison dos2unix flex 
 fuse-devel libaio-devel libibverbs-devel \
  librdmacm-devel libtool libxml2-devel lvm2-devel make openssl-devel 
 pkgconfig \
  python-devel python-eventlet python-netifaces python-paste-deploy \
  python-simplejson python-sphinx python-webob pyxattr readline-devel 
 rpm-build \
  systemtap-sdt-devel tar

 3. Create the mock user

 $ sudo useradd -g mock mock

 4. Running the testcases results in error as below

 [root@fractal-c5ac glusterfs]# ./run-tests.sh

 ... GlusterFS Test Framework ...

 Running all the regression test cases
 snip

 Both glusterd and glusterfsd are running fine

 [root@fractal-c5ac glusterfs]# systemctl status glusterd.service
 glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd

[Gluster-devel] calamari for gluster

2014-09-15 Thread Kiran Patil
Hi,

I think there is no web app for gluster monitoring and management. I was
wondering if anyone has thought of using Calamari to support gluster.

How feasible is it to extend Calamari to support gluster?

What would be the pros and cons?

https://github.com/ceph/calamari
https://github.com/ceph/calamari-clients

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-users] calamari for gluster

2014-09-15 Thread Kiran Patil
Hi,

I think there is no web app for gluster monitoring and management. I was
wondering if anyone has thought of using Calamari to support gluster.

How feasible is it to extend Calamari to support gluster?

What would be the pros and cons?

https://github.com/ceph/calamari
https://github.com/ceph/calamari-clients

Thanks,
Kiran.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users

Re: Remedy Administrator in Jacksonville Florida

2014-09-05 Thread Kiran Patil
Will the employer sponsor the visa?


On Fri, Sep 5, 2014 at 7:39 PM, Terri Lockwood teresa.lockw...@sungard.com
wrote:


 My company, SunGard Corporation, is looking for a BMC Remedy
 Administrator. SunGard is a growing company and great to work for. It
 is a full-time, direct-hire position, based here in
 Jacksonville, Florida. Below are the position requirements.



 BMC Remedy Administrator



 Job ID #: 29618
 Primary Location: US-FL-Jacksonville-701 San Marco Blvd
 Secondary Location(s): UK-England-London-25 Canada Square
 Division: Financial Systems
 Group: Corporate Liquidity
 Business Unit: AvantGard-Treasury/Receivables
 Department: Operations - IT
 Functional Area: Information Technology
 Education Desired: Bachelor’s Degree or equivalent
 Position Type: Full-Time Regular
 Experience Desired: At least 10 years
 Relocation Provided: No
 Travel Percentage: 10



 Position Responsibilities



 • Provide daily support to customers of the AvantGard Remedy application.

 • Evaluate, design, develop, and debug BMC Remedy forms and workflows for
 IT and non-IT functional areas.

 • Configure ITSM suite in a multi-tenant, internal support and external
 customer facing environment.

 • Develop and maintain application interfaces to BMC Remedy systems.

 • Monitor and support BMC Remedy implementations and upgrades.

 • Analyze existing IT processes as related to ITSM tools.

 • Work as a member of BMC Remedy related projects.

 • Support and train all users on use of BMC Remedy and supporting systems.

 • Stay on top of all trends and technologies supporting ITSM functions.

 • Participate and engage in meetings to discuss, address, evaluate,
 support, or advance the role of Remedy.

 • Recommend process improvements to increase employee productivity and
 reduce administrative overhead as identified through reporting and auditing.

 • Provide scheduled metrics reporting as defined by management.

 • Provide ad hoc reporting on an as needed basis.

 • Support related systems as required by management.



 Position Requirements



 • Bachelor’s degree or equivalent work experience

 • 10 years BMC Remedy Development and Administration

 • 7 years Crystal Reports, Business Objects, Informatica or similar
 reporting tools development

 • Knowledge of BMC Remedy ARS 7.x platform or 8.x platform

 • Knowledge of BMC Remedy ITSM Suite 7.x or 8.x

 • Knowledge of BMC Remedy on Windows Platforms

 • Knowledge of Perl, Shell, and other scripting automation languages

 • Knowledge of SQL.

 • Familiarity with Incident/Problem Management, Change Management,
 Asset/Inventory Management, SDLC, and DevOps processes in an enterprise
 environment

 • Communication and interpersonal skills

 • Project management and documentation skills

 • Troubleshooting, diagnostic and performance analysis skills

 • Remedy Administrator and Developer Training or demonstrated experience

 • ITIL Foundation Certification (v3 preferred, v2 accepted).



 If interested please send me your resume or click here to apply:
 http://financialsystemsjobs.sungard.com/jacksonville/information-technology/jobid6005465-bmc-remedy-administrator-jobs





 TERRI LOCKWOOD • SENIOR SYSTEM ADMINISTRATOR • SunGard • AvantGard •

 701 San Marco Blvd, Suite 1100 •  Jacksonville, FL 32207

 Office +1 (904) 281-8069 • Cell +1 (904) 627-8651 •
 teresa.lockw...@sungard.com






 Join the online conversation with SunGard’s customers, partners and
 industry experts and find an event near you at: www.sungard.com/ten.



 Think before you print



 CONFIDENTIALITY: This e-mail (including any attachments) may contain
 confidential, proprietary and privileged information, and unauthorized
 disclosure or use is prohibited. If you received this e-mail in error,
 please notify the sender and delete this e-mail from your system.


  _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards

Kiran Patil
Mobile: +91 9890377125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: [Gluster-devel] Gluster Test Framework tests failed on Gluster+Zfs (Zfs on Linux)

2014-09-04 Thread Kiran Patil
Hi,

Please let us know if there are any fixes to the gluster test framework.

Any help/guidance on what should we do or can be done ?

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


Re: [Gluster-devel] Gluster Test Framework tests failed on Gluster+Zfs (Zfs on Linux)

2014-09-04 Thread Kiran Patil
Hi Santosh,

Maybe if you or anyone could lay out a way to add Zfs (ZOL) or Btrfs specific
test cases, that would be great, and we would contribute our best to add new
test cases.

Thanks,
Kiran.


On Thu, Sep 4, 2014 at 4:19 PM, Santosh Pradhan sprad...@redhat.com wrote:

 Hi,
 Currently GlusterFS is tightly coupled with ext(2/3/4) and XFS. Zfs (ZOL)
 and Btrfs are not supported at the moment, but may be supported in the future (at
 least Btrfs).

 Thanks,
 Santosh



 On 09/04/2014 03:34 PM, Justin Clift wrote:

 On 28/08/2014, at 9:30 AM, Kiran Patil wrote:

 Hi Gluster Devs,

 I ran the Gluster Test Framework on the Gluster+zfs stack and found issues.

 I would like to know whether I need to submit a bug at the Red Hat Bugzilla, since
 the stack has zfs, which, if I am not wrong, is not supported by Red Hat or
 Fedora.

 Definitely create an issue on the Red Hat Bugzilla, for the GlusterFS
 product, there:

https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS

 Since it's for the upstream Community, the official Red Hat Supported
 list isn't super relevant.


 snip

 Test Summary Report
 ---
 ./tests/basic/quota.t   (Wstat: 0 Tests: 45
 Failed: 3) -- quota issue
Failed tests:  24, 28, 32
 ./tests/bugs/bug-1004744.t  (Wstat: 0 Tests: 14
 Failed: 4) -- passes on changing EXPECT_WITHIN 20 to EXPECT_WITHIN 30
Failed tests:  10, 12-14
 ./tests/bugs/bug-1023974.t  (Wstat: 0 Tests: 15
 Failed: 1) -- quota issue
Failed test:  12
 ./tests/bugs/bug-824753.t   (Wstat: 0 Tests: 16
 Failed: 1) -- file-locker issue
Failed test:  11
 ./tests/bugs/bug-856455.t   (Wstat: 0 Tests: 8
 Failed: 1) -- brick directory name is hardcoded while executing kill
 command
Failed test:  8
 ./tests/bugs/bug-860663.t   (Wstat: 0 Tests: 10
 Failed: 1) -- brick directory name is hardcoded and failed at TEST ! touch
 $M0/files{1..1};
Failed test:  8
 ./tests/bugs/bug-861542.t   (Wstat: 0 Tests: 13
 Failed: 4) -- brick directory name is hardcoded and all EXPECT tests are
 failing
Failed tests:  10-13
 ./tests/bugs/bug-902610.t   (Wstat: 0 Tests: 8
 Failed: 1) -- brick directory name is hardcoded and EXPECT test failing
Failed test:  8
 ./tests/bugs/bug-948729/bug-948729-force.t  (Wstat: 0 Tests: 35
 Failed: 4) -- XFS related and brick directory name is hardcoded
Failed tests:  15, 17, 19, 21
 ./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35
 Failed: 8) -- XFS related and brick directory name is hardcoded
Failed tests:  15, 17, 19, 21, 24-27
 ./tests/bugs/bug-948729/bug-948729.t(Wstat: 0 Tests: 23
 Failed: 3) -- XFS related and brick directory name is hardcoded
Failed tests:  12, 15, 23
 ./tests/bugs/bug-963541.t   (Wstat: 0 Tests: 13
 Failed: 3) -- remove-brick issue
Failed tests:  8-9, 13
 ./tests/features/glupy.t(Wstat: 0 Tests: 6
 Failed: 2)
Failed tests:  2, 6

 Subset of the above bugs which can be reproduced on Glusterfs + ext4 is
 filed at Redhat bugzilla which is Bug id 1132496.

 The glupy.t one I'll be able to look at, but not in the next few days. I need
 to finish my current task and then get my head back into Glupy.

 bug-1004744.t and the tests with hard coded brick directory names may be
 easy
 to fix.  The others I'm not sure about.

 Do you have any interest in creating the fixes for the ones you're
 comfortable with, and submitting them through Gerrit? (review.gluster.org)

 Regards and best wishes,

 Justin Clift

 --
 GlusterFS - http://www.gluster.org

 An open source, distributed file system scaling to several
 petabytes, and handling thousands of clients.

 My personal twitter: twitter.com/realjustinclift

 ___
 Gluster-devel mailing list
 Gluster-devel@gluster.org
 http://supercolony.gluster.org/mailman/listinfo/gluster-devel



___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[Gluster-devel] Gluster Test Framework tests failed on Gluster+Zfs (Zfs on Linux)

2014-08-28 Thread Kiran Patil
Hi Gluster Devs,

I ran the Gluster Test Framework on the Gluster+zfs stack and found issues.

I would like to know whether I need to submit a bug at the Red Hat Bugzilla, since the
stack has zfs, which, if I am not wrong, is not supported by Red Hat or
Fedora.

We modified the paths in include.rc to make sure that the mount points and
brick directories are created under the zfs datasets.

For example: include.rc first line

Original path -  M0=${M0:=/mnt/glusterfs/0};   # 0th mount point for FUSE

New path - M0=${M0:=/fractalpool/normal/mnt/glusterfs/0};   # 0th mount
point for FUSE

Where /fractalpool is the zfs pool and normal is the zfs dataset.
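
For reference, a sketch of how to apply that include.rc change in one pass
(assuming the /fractalpool/normal dataset already exists, and that the stock
paths are /mnt/glusterfs for mounts and /d/backends for bricks; adjust if your
tree differs):

$ sed -i 's|:=/mnt/glusterfs|:=/fractalpool/normal/mnt/glusterfs|' tests/include.rc
$ sed -i 's|:=/d/backends|:=/fractalpool/normal/d/backends|' tests/include.rc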

Gluster version: v3.5.2

Zfs version: v0.6.2-1

Hardware: x86_64

How reproducible: Always

Steps to Reproduce:
1. Install gluster v3.5.2 rpm on CentOS 6.4
2. Install zfsonlinux v0.6.2
3. clone the gluster from github and checkout v3.5.2
4. ./run-tests.sh

Here is a summary of Testcases failed, along with some hints on where they
failed.

Test Summary Report
---
./tests/basic/quota.t   (Wstat: 0 Tests: 45 Failed:
3) -- quota issue
  Failed tests:  24, 28, 32
./tests/bugs/bug-1004744.t  (Wstat: 0 Tests: 14 Failed:
4) -- passes on changing EXPECT_WITHIN 20 to EXPECT_WITHIN 30
  Failed tests:  10, 12-14
./tests/bugs/bug-1023974.t  (Wstat: 0 Tests: 15 Failed:
1) -- quota issue
  Failed test:  12
./tests/bugs/bug-824753.t   (Wstat: 0 Tests: 16 Failed:
1) -- file-locker issue
  Failed test:  11
./tests/bugs/bug-856455.t   (Wstat: 0 Tests: 8 Failed:
1) -- brick directory name is hardcoded while executing kill command
  Failed test:  8
./tests/bugs/bug-860663.t   (Wstat: 0 Tests: 10 Failed:
1) -- brick directory name is hardcoded and failed at TEST ! touch
$M0/files{1..1};
  Failed test:  8
./tests/bugs/bug-861542.t   (Wstat: 0 Tests: 13 Failed:
4) -- brick directory name is hardcoded and all EXPECT tests are failing
  Failed tests:  10-13
./tests/bugs/bug-902610.t   (Wstat: 0 Tests: 8 Failed:
1) -- brick directory name is hardcoded and EXPECT test failing
  Failed test:  8
./tests/bugs/bug-948729/bug-948729-force.t  (Wstat: 0 Tests: 35 Failed:
4) -- XFS related and brick directory name is hardcoded
  Failed tests:  15, 17, 19, 21
./tests/bugs/bug-948729/bug-948729-mode-script.t (Wstat: 0 Tests: 35
Failed: 8) -- XFS related and brick directory name is hardcoded
  Failed tests:  15, 17, 19, 21, 24-27
./tests/bugs/bug-948729/bug-948729.t(Wstat: 0 Tests: 23 Failed:
3) -- XFS related and brick directory name is hardcoded
  Failed tests:  12, 15, 23
./tests/bugs/bug-963541.t   (Wstat: 0 Tests: 13 Failed:
3) -- remove-brick issue
  Failed tests:  8-9, 13
./tests/features/glupy.t(Wstat: 0 Tests: 6 Failed:
2)
  Failed tests:  2, 6

A subset of the above bugs, which can be reproduced on GlusterFS + ext4, is
filed at the Red Hat Bugzilla as Bug 1132496.
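
As a starting point on the recurring causes above, two hedged sketches (the
test file names come from the summary; /d/backends is an assumption about
where the tests hardcode brick paths instead of using $B0):

$ grep -n '/d/backends' tests/bugs/bug-856455.t tests/bugs/bug-861542.t   # locate hardcoded brick directories
$ sed -i 's/EXPECT_WITHIN 20/EXPECT_WITHIN 30/' tests/bugs/bug-1004744.t  # the timeout bump noted above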

Thanks,
Kiran.
___
Gluster-devel mailing list
Gluster-devel@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-devel


[ruote:4325] Re: ruote alternatives since it is going to be ceased

2013-12-23 Thread Kiran Patil
How about using https://github.com/geekq/workflow ?

On Friday, 22 November 2013 13:46:34 UTC+5:30, Kiran Patil wrote:

 Hello,

  The recent commit to GitHub shows that
  "Active development on ruote ceased."

 Do you recommend https://github.com/bokmann/stonepath ?

  Please let us know what the best alternatives for ruote are.

 Thanks.


-- 
-- 
you received this message because you are subscribed to the ruote users group.
to post : send email to openwferu-users@googlegroups.com
to unsubscribe : send email to openwferu-users+unsubscr...@googlegroups.com
more options : http://groups.google.com/group/openwferu-users?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
ruote group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to openwferu-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


[ruote:4295] ruote alternatives since it is going to be ceased

2013-11-22 Thread Kiran Patil
Hello,

The recent commit to GitHub shows that "Active development on ruote ceased."

Do you recommend https://github.com/bokmann/stonepath ?

Please let us know what the best alternatives for ruote are.

Thanks.

-- 
-- 
you received this message because you are subscribed to the ruote users group.
to post : send email to openwferu-users@googlegroups.com
to unsubscribe : send email to openwferu-users+unsubscr...@googlegroups.com
more options : http://groups.google.com/group/openwferu-users?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
ruote group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to openwferu-users+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Re: 44699 Resending

2013-07-24 Thread Kiran Patil
Hi Robert,

Are you creating the Incident as a fulfillment application?


Regards
Kiran


On Thu, Jul 25, 2013 at 4:40 AM, Robert Heverley
robert.hever...@gmail.com wrote:

 Hello All,

 Please see the attachment. Can someone give some guidance on where I go to
 manually assign the group. Every time this happens, I have to cancel the
 request. Any help would be greatly appreciated. Thank you.

 Robert
 _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: 44699 Resending

2013-07-24 Thread Kiran Patil
Corrected:
 Are you using Incident as a fulfillment application?


On Thu, Jul 25, 2013 at 4:40 AM, Robert Heverley
robert.hever...@gmail.com wrote:

 Hello All,

 Please see the attachment. Can someone give some guidance on where I go to
 manually assign the group. Every time this happens, I have to cancel the
 request. Any help would be greatly appreciated. Thank you.

 Robert
 _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: Does the init form still work?

2013-07-23 Thread Kiran Patil
Hi Lisa,

The Init-Form: Form Name parameter can be used to capture the event when a user
logs in to the User Tool (thick client); this parameter does not work for the web
client.

You can add JavaScript to the Remedy login web page to record the required
information.


Regards
Kiran






On Tue, Jul 23, 2013 at 2:59 PM, Lisa Singh lisa.si...@gmail.com wrote:

 I was reading a discussion about using an init form to be able to
 track last login time of users more easily than trying to download
 millions of records using AR System Historical License Usage.

 Does anyone know if init forms still work in the Brave New World of
 mid-tier?

 Kind Regards,

 Lisa


 Remedy 7.6.04 SP4
 Windows 2008 R2
  MS SQL Server


 ___
 UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
 Where the Answers Are, and have been for 20 years




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: Asset Classification

2013-07-23 Thread Kiran Patil
Hi Kathy,

I think Ethernet can be mapped to the BMC.CORE:BMC_LAN class, as it is a
subcomponent of network connectivity and related to LAN terminology, and the CI
type can be Collection / Connectivity Collection in Asset Management.
The relationship can be defined as BMC_HostedAccessPoint, or you may
create a new relationship class as a CDM change.

In my experience, Ethernet can be categorized as Network - LAN -
Ethernet in the product catalog.

Hope this will help you to define your class model.

Regards
Kiran


On Tue, Jul 23, 2013 at 10:49 PM, Kathy Morris kathymorris...@aol.com wrote:

 Hi,

 I have HP Virtual Connect Flex-10 GB Ethernet Modules for BladeSystems.

 Does anyone have these type of assets in your environment?

 Trying to figure out if these should be classified as NICs, or if we
 should create a separate class for them.

 These HP Ethernet modules need to be related to a server and the chassis.

 Any recommendations on how to classify these assets to build the correct
 relationships?
  _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: Adding New change reason value in 8.1

2013-07-22 Thread Kiran Patil
Hi Lokesh,

The OOB workflow works for the OOB change reason field. You need to make sure
the OOB workflow also works for the new attribute you have created for the change
reason field.
Enable the server-side log and find out which workflow uses the change reason field to
initiate the approval process, and make sure that workflow understands the new
attribute.

You may need to add the same attribute to the approval and change join form to
carry the value further; check the server-side log for more details.

Regards
Kiran



On Mon, Jul 22, 2013 at 12:33 PM, Lokesh Jayaraman 
lokeshjayara...@gmail.com wrote:



 I am having issues when I add a new change reason value in the infrastructure
 change related forms. When I select the new change reason, the CRQ should
 move forward to Scheduled status from Draft.

 I have created a new process flow where the CRQ moves forward from Draft
 to Scheduled for No Impact changes, and associated the new process flow with
 the Approval Process Config form, where I have defined a record.

 In the change template form, when I have an old value in the Change Reason field,
 say Maintenance, and try to create a CRQ, it works as expected. However,
 when I have the new value in the Change Reason field and create a CRQ,
 the CR does not move from Draft to Scheduled.

 Please, any suggestions...

 Regards,
 Lokesh Jayaraman
 9566066338

  _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: Adding New change reason value in 8.1

2013-07-22 Thread Kiran Patil
Hi Lokesh,

The error below indicates that a selection field is being set a value that is not in the list
configured for it. You need to add your new attribute to the Approval-Change
join form as well.

Check the transaction in the filter+SQL log at the same timestamp mentioned in the approval
log, and you will find it.

APPR [Fri Jul 19 05:20:52.140] [Thread 1] ERROR - 306 Value does not
fall within the limits specified for the field
APPR [Fri Jul 19 05:20:52.140] [Thread 1] ERROR - 4502 Operation
cancelled due to error
APPR [Fri Jul 19 05:20:52.140] [Thread 1] ERROR - New-Details -
CHG:Infrastructure Change - CRQ1120


Regards

Kiran



On Mon, Jul 22, 2013 at 2:31 PM, Lokesh Jayaraman lokeshjayara...@gmail.com
 wrote:

 Hi Kiran,

 I have compared the workflow between the OOB change reason and the newly added
 change reason. Everything looks the same, but the approval log shows the error Value
 does not fall within the limit, and I am not sure on which form I am receiving the
 error.


 On Mon, Jul 22, 2013 at 2:14 PM, Kiran Patil kiranpatil@gmail.com wrote:

 Hi Lokesh,

 The OOB workflow works for the OOB change reason field. You need to make sure
 the OOB workflow also works for the new attribute you have created for the change
 reason field.
 Enable the server-side log and find out which workflow uses the change reason field to
 initiate the approval process, and make sure that workflow understands the new
 attribute.

 You may need to add the same attribute to the approval and change join form to
 carry the value further; check the server-side log for more details.

 Regards
  Kiran



 On Mon, Jul 22, 2013 at 12:33 PM, Lokesh Jayaraman 
 lokeshjayara...@gmail.com wrote:

  snip

  _ARSlist: Where the Answers Are and have been for 20 years_




 --
 Regards
 Kiran Patil
 Cognizant Technology Solutions
 Pune, India
 Mob No: +91 989 037 7125
  _ARSlist: Where the Answers Are and have been for 20 years_




 --
 Lokesh Jayaraman
 9566066338
 “What God told man is the Gita,
 what man told God is the Thiruvasagam,
 what man told man is the Thirukkural.”
  _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: How to generate work infos and associate to incidents from incoming emails

2013-07-22 Thread Kiran Patil
Hi,

You can use one fixed user in Remedy for all email coming from unregistered
email IDs, with the requester's first name, last name, and location the same for
all requests of the same kind, differing only in the email ID. Carry that email ID
(the address the original email came from) onto the incident form (through the
interface form) in the customer email ID field.
That way, even though you have a fixed user for unregistered email IDs, the system
will send all customer notifications to the requester's email ID.

Regards
Kiran




On Tue, 23 Jul 2013 00:23:59 +0550, arslist@ARSLIST.ORG wrote:
 Thanks Douglas. I can't figure out how to get the email to be picked up by 
 the email engine to begin with, without submitting as a template, i.e. 
 including #AR-Message-Begin and the server/user/pw etc. I've got filters on 
 the Messages form to fire on submit where direction = Inbound, and to parse 
 the INC out of the subject, and a staging form and lookup workflow 
 between the AR System Email Messages form and HPD:IncidentInterface forms, 
 but without the email being picked up to begin with, it's running pretty dry 
 :).
 
 Any thoughts?
 
 
 ** 
 It can be pretty complicated.
 
 You will need to have workflow that watches the AR System Email messages 
 form, and when an email comes in, you can, in the simplest case, use the 
 HPD:IncidentInterface_Create form to push a new incident into Help Desk.  You 
 could then attach the email as a work info item, using the 
 HPD:IncidentInterface and the Incident created previously.
 
 To get fancier, you could grab the user information from the from field, and 
 use that to set the requester information on the incident during the initial 
 create.
 
 It's not hard, but it's not just 'enable this active link' either.
 
 On Mon, Jul 22, 2013 at 2:30 PM, Jim Hetfield jim.hetfi...@gmail.com wrote:
 Anyone know how BMC's support website takes incoming emails and associates 
 them to incidents? I'm looking into how to do that, creating work info 
 entries from incoming emails. We're on 7.5 ARS/ITSM and going to 8.X isn't an 
 option at this time.
 
 TIA
 Jim
 
 ___
 UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
 Where the Answers Are, and have been for 20 years
 
 _ARSlist: Where the Answers Are and have been for 20 years_ 
 
 ___
 UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
 Where the Answers Are, and have been for 20 years

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: How to send message to parent that child window has finished loading in parent view field

2013-07-21 Thread Kiran Patil
Hi,

Please refer to the steps below from BMC Help; hope this will help you.
You can check BMC help for details.

 Using Commit Changes with a dialog box

In workflow, you can use a display-only form as a dialog box to capture
user input. To do this, you use the Commit Changes action to transfer the
data back to the parent form, in combination with the Open Window and Close
Window actions, as follows:

- In an Open Window action with the Dialog window type, map the data to
   be written in the field mapping for the On Dialog Close Action. See the Open
   Window action in the help.
- Use the Commit Changes action before the Close Window action to write
   the data entered in a dialog box to the parent form.
- Use the Close Window action to close the dialog box. See the Close
   Window action in the help.


Regards
Kiran



On Sun, Jul 21, 2013 at 3:10 PM, Angus Comber arsl...@iteloffice.com wrote:

 Just to be sure I understand what you are saying: you mean that I create
 an active link with the execution option Display selected, and then an If action with
 Commit Changes? So I commit some variables in fields from my child form to
 the parent form.


 ___
 UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
 Where the Answers Are, and have been for 20 years




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: How to send message to parent that child window has finished loading in parent view field

2013-07-20 Thread Kiran Patil
Hi,

Use the Commit Changes action in your active link on the display-only (child) form.

Regards
Kiran 


On Sat, 20 Jul 2013 20:41:17 +0550, arslist@ARSLIST.ORG wrote:
 I am displaying a child window in a parent form view field.  I want to send a 
 message to the parent window when the child window is loaded.
 
 so I created a run process active link on the child form with execution 
 options: 'Window Loaded'
 
 The run process command was:
 
 PERFORM-ACTION-SEND-EVENT @ ChildFormReady
 
 On the parent form I created an active link with execution options: Event 
 and Run if $EVENTTYPE$ = ChildFormReady  and load a message to user (this 
 is just testing feature)
 
 But the message never gets raised.
 
 If I right click on child window and refresh page, only then do I see message.
 
 What am I doing wrong?
 
 This is thin client, Internet Explorer 10 (in compatibility mode) on remedy 
 7.6
 
 ___
 UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
 Where the Answers Are, and have been for 20 years

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: Extending 8.1 CMDB with a new Class / AST:Attributes records not being created

2013-07-19 Thread Kiran Patil
Hi Tim,

We have an ARS 8.1 environment with the same Atrium version. I created a new
class under the ComputerSystem class. The class was created successfully along
with the Asset join form, and I am able to create and search CIs in the newly created
class.

Please check the things below if synchronization has not happened correctly.

1. Open the newly created class in edit mode in Class Manager - More Information
tab - check whether the AR System form name is there or not.
2. Make sure you have completed the Asset CMDB sync through cmdbdriver.
3. Go to SHR:SchemaNames - search for the record with the new class schema name - set
Has Asset UI = Yes (Yes/No).
4. Form Lookup tab - set the Form Code, e.g. ALP.
5. Check the Form Lookup check box.

Clear the browser cache and temporary files. It is working for me.

Good luck !!

Regards
Kiran




On Thu, Jul 18, 2013 at 11:46 PM, Hulmes, Timothy CTR MDA/ICTO 
timothy.hulmes@mda.mil wrote:


 I know there have been many discussions about how the CMDB functionality
 has changed in the 8.1 system.

 I discovered a new issue that I don't think has been discussed yet,
 Extending the CMDB with a new class.

 We have a requirement that has caused us to extend the CMDB with an
 additional class.

 The normal process for a CMDB extension has been completed. 

 The problem: When users create a record in the new Asset class form
 nothing is pushed to the AST:attributes form.  (This causes the user to not
 see the record.)

 We have done some research and have identified at least 2 filters that
 possibly need to be modified. These filters push data to BMC forms and to
 the attributes form.  They are both set to trigger on the AST forms. Our
 new AST form is not in the Associated forms list. We have added our new
 form to these two filters (ASI:SHR:All_600_PushToBMCForm and
 AST:SHR:All_PushToAssetAttribute).  We are still not getting a record
 created for this new class in the AST:Attributes form.

 The question: Has anyone been able to successfully extend the CMDB with a
 new class and have the attributes form created so users can see the
 records? 

 Tim

  _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: Workflow Concern

2013-07-19 Thread Kiran Patil
Hi,



It is not recommended to write custom code to perform normalization.
Atrium has OOB capability to perform normalization; you may need
to debug it and find the root cause of the failure.

Check the Atrium log in the Atrium folder, where it should be easy to identify the root
cause; search the log by Instance ID or CI name for the CI where normalization
failed.

Also check the CI type mentioned on the ProductAliasMapping form and on the
Product Catalog foundation form. We have faced this issue because of a CI
type mismatch.



What Atrium version are you running on?



Regards

Kiran





On Thu, Jul 18, 2013 at 8:38 PM, Kathy Morris kathymorris...@aol.com wrote:

 Hi,

 We tried to set a normalization rule for a class of CIs and it was not
 working properly.

 If we used a filter to set a field attribute in class BMC_OperatingSystem
 – would the filter be more of a performance hit?

 Not sure of how the workflow is queried/modified thru the normalization
 rule.  Is it about the same performance hit as a filter?
  _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years


Re: Archive Incident Management Change Management 7.1

2013-07-15 Thread Kiran Patil
Hi Mahmoud,

You can achieve Remedy archiving using external tools like
rrchive, Informatica, or BMC DSO.

1. First find out the business timeframe and conditions for data archiving.
2. Identify the main forms and related forms to archive.

Use any of the above tools, or a similar tool, to migrate the data.

The challenge here is data integrity, which should not be impacted.

Good Luck !!

Regards
Kiran






On Mon, Jul 15, 2013 at 2:22 PM, mahmoud mahdy mahmoud_ma...@live.com wrote:
 Dears,

 Kindly help, as we have a 500 GB database file which is affecting our
 performance and causing system downtime.
 Is there a way to truncate the history data from the system or the DB?

 Thanks,
 Best Regards,

  _ARSlist: Where the Answers Are and have been for 20 years_




-- 
Regards
Kiran Patil
Cognizant Technology Solutions
Pune, India
Mob No: +91 989 037 7125

___
UNSUBSCRIBE or access ARSlist Archives at www.arslist.org
Where the Answers Are, and have been for 20 years

