Regards

Gerard
---
Gerard Kilburn
Enovia Config SME
Tel: 8 726 6970
Ext: +44 (0) 7620 6970
Jaguar Land Rover Limited
Registered Office: Abbey Road, Whitley, Coventry CV3 4LF
Registered in England No: 1672070


On 16 April 2013 14:57, <[email protected]> wrote:

> Send Linux-HA mailing list submissions to
>         [email protected]
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.linux-ha.org/mailman/listinfo/linux-ha
> or, via email, send a message with subject or body 'help' to
>         [email protected]
>
> You can reach the person managing the list at
>         [email protected]
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Linux-HA digest..."
>
> Today's Topics:
>
>    1. Re: How to (Andrew Beekhof)
>    2. Re: Migrating from heartbeat Fedora 17 to Fedora 18 pacemaker
>       (Andrew Beekhof)
>    3. Antw: Re: Q: limiting parallel execution of resource actions
>       (Ulrich Windl)
>    4. Re: Antw: Re: Q: limiting parallel execution of resource
>       actions (Lars Marowsky-Bree)
>    5. Re: Behaviour of fence/stonith device fence_imm (Andreas Mock)
>    6. Resource move not moving (Marcus Bointon)
>    7. Re: Resource move not moving (RaSca)
>    8. Re: Resource move not moving (fabian.herschel)
>
>
> ---------- Forwarded message ----------
> From: Andrew Beekhof <[email protected]>
> To: General Linux-HA mailing list <[email protected]>
> Cc:
> Date: Tue, 16 Apr 2013 12:47:46 +1000
> Subject: Re: [Linux-HA] How to
>
> On 16/04/2013, at 1:11 AM, Moullé Alain <[email protected]> wrote:
>
> > Hi,
> >
> > I wonder if there is documentation somewhere to know how to exploit such
> > file for example : /var/lib/pengine/pe-input-890 from the original
> > zipped file :
> > /var/lib/pengine/pe-input-890.bz2
>
> Pass it to crm_simulate (-x), you'll be able to turn up the output logging
> and see what the cluster did and why.
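The replay described above could be sketched as follows (a sketch only; flags are per the crm_simulate man page, and the file name is the one from the question):

```shell
# Replay a saved policy-engine input (file name from the question):
cd /var/lib/pengine
bunzip2 -k pe-input-890.bz2          # keep the original .bz2 around

# Simulate the transition with verbose logging to see what was decided:
crm_simulate -x pe-input-890 -S -VV

# Optionally dump the transition graph for Graphviz, as the second link
# below describes:
crm_simulate -x pe-input-890 -S -D pe-input-890.dot
dot -Tpng pe-input-890.dot -o pe-input-890.png
```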
>
> Both
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html/Pacemaker_Explained/s-config-testing-changes.html
> and
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-crmsh/html/Pacemaker_Explained/_interpreting_the_graphviz_output.html
> have some good info on interpreting the results.
>
> >
> > I mean, it seems to be quite like a cib.xml, or a mix of information,
> > but what interesting information can I get from all these files under
> > /var/lib/pengine ?
> >
> > (same thing for pe-warn-xxx files)
> >
> > Thanks
> > Alain
> > _______________________________________________
> > Linux-HA mailing list
> > [email protected]
> > http://lists.linux-ha.org/mailman/listinfo/linux-ha
> > See also: http://linux-ha.org/ReportingProblems
>
>
>
>
> ---------- Forwarded message ----------
> From: Andrew Beekhof <[email protected]>
> To: General Linux-HA mailing list <[email protected]>
> Cc:
> Date: Tue, 16 Apr 2013 12:51:30 +1000
> Subject: Re: [Linux-HA] Migrating from heartbeat Fedora 17 to Fedora 18
> pacemaker
>
> On 16/04/2013, at 12:05 AM, Guilsson Guilsson <[email protected]> wrote:
>
> > Is there a way to continue using my cfg files in Fedora 18 pacemaker ?
>
> No. Sorry.
>
> > If not, is there a straightforward (and simple) conversion from my cfg
> > files to the new format ?
>
> There was one, but it's honestly much simpler to start from scratch.
> Happily we have a step-by-step worked example:
>
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/index.html
>
> > Although it seems pacemaker has more features, has it become more
> > complicated ?
>
> On the surface it seems like it, but CLIs like pcs and crmsh make it
> reasonably accessible.
> I'd highly recommend this:
>
> http://blog.clusterlabs.org/blog/2009/configuring-heartbeat-v1-was-so-simple/
>
> >
> >
> > On Mon, Apr 15, 2013 at 9:58 AM, Digimer <[email protected]> wrote:
> >
> >> Heartbeat has been deprecated for a while. I suspect whoever maintains
> >> heartbeat finally pointed it at pacemaker, which is its replacement.
> >> Even if you can make heartbeat work on F18, it's better to put your
> >> time into making the move away.
> >>
> >> Cheers
> >>
> >>
> >> On 04/15/2013 08:53 AM, Guilsson Guilsson wrote:
> >>
> >>> Dear all
> >>>
> >>> I've been using HA for several years. It's quite a simple setup to
> >>> create and use. I'm using HA between 2 firewall machines.
> >>> It's almost the same configuration for years: ha.cf / haresources /
> >>> authkeys.
> >>>
> >>> In previous installations (Fedora 10/11/12/13/14/15/16/17) I simply
> >>> issue on both firewalls:
> >>>
> >>> # yum -y install heartbeat (so all dependencies are installed)
> >>> # cp /root/backup/{ha.cf,haresources,authkeys} /etc/ha.d/
> >>> # chkconfig heartbeat on (or systemctl enable heartbeat.service)
> >>> # service heartbeat start (or systemctl start heartbeat.service)
> >>> # /usr/share/heartbeat/hb_standby all|foreign/failback (sometimes to
> >>> manage nodes)
> >>>
> >>> and everything has been working fine (for years).
> >>> Current working scenario in Fedora 17:
> >>>  # rpm -q heartbeat resource-agents
> >>> heartbeat-3.0.4-1.fc17.2.i686
> >>> resource-agents-3.9.2-2.fc17.1.i686
> >>>
> >>>
> >>> My problems started when I re-installed the firewalls using Fedora 18.
> >>> At first, "yum -y install heartbeat" installs "pacemaker" instead.
> >>> After copying my files and trying to start the services, nothing
> >>> happened.
> >>> In fact, it seems EVERYTHING CHANGED A LOT: PCS, Corosync, Pacemaker,
> >>> cib, etc.
> >>>
> >>> I prefer to stick on Fedora 18, so:
> >>>
> >>> Is there a way to continue using my cfg files in Fedora 18 pacemaker ?
> >>> If not, is there a straightforward (and simple) conversion from my cfg
> >>> files to the new format ?
> >>> Although it seems pacemaker has more features, has it become more
> >>> complicated ?
> >>>
> >>>
> >>> My cfg files:
> >>>
> >>> ha.cf
> >>> ----------------------------------------------------------------
> >>> debugfile /var/log/ha-debug
> >>> logfile /var/log/ha-log
> >>> logfacility local0
> >>> keepalive 2
> >>> deadtime 15
> >>> warntime 5
> >>> initdead 45
> >>> bcast eth1
> >>> auto_failback on
> >>> node gw1.gs.local
> >>> node gw2.gs.local
> >>> ping_group topsites 200.160.0.10 200.189.40.10 200.192.232.10
> >>> 200.219.154.10 200.229.248.10 200.219.159.10
> >>> respawn hacluster /usr/lib/heartbeat/ipfail
> >>> ----------------------------------------------------------------
> >>>
> >>> haresources
> >>> ----------------------------------------------------------------
> >>> gw1.gs.local 10.10.10.10/24/eth1
> >>> gw2.gs.local 10.10.10.20/24/eth1
> >>> ----------------------------------------------------------------
> >>>
> >>> authkeys
> >>> ----------------------------------------------------------------
> >>> auth 1
> >>> 1 md5 Super!@#$#%SecreT
> >>> ----------------------------------------------------------------
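For comparison, a rough Pacemaker equivalent of the two haresources lines above might look like this in crm shell syntax (a sketch only; resource and constraint names such as ip_gw1/loc_ip_gw1 are illustrative, not taken from the original setup):

```shell
# Rough crm shell equivalent of the haresources entries
# (names like ip_gw1 are illustrative placeholders):
crm configure primitive ip_gw1 ocf:heartbeat:IPaddr2 \
    params ip=10.10.10.10 cidr_netmask=24 nic=eth1 \
    op monitor interval=10s
crm configure primitive ip_gw2 ocf:heartbeat:IPaddr2 \
    params ip=10.10.10.20 cidr_netmask=24 nic=eth1 \
    op monitor interval=10s
# Prefer each address on its "home" node, as haresources did:
crm configure location loc_ip_gw1 ip_gw1 100: gw1.gs.local
crm configure location loc_ip_gw2 ip_gw2 100: gw2.gs.local
```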
> >>>
> >>> Thanks in advance,
> >>> -Guilsson
> >>>
> >>>
> >>
> >> --
> >> Digimer
> >> Papers and Projects: https://alteeve.ca/w/
> >> What if the cure for cancer is trapped in the mind of a person without
> >> access to education?
> >>
>
>
>
>
> ---------- Forwarded message ----------
> From: "Ulrich Windl" <[email protected]>
> To: "General Linux-HA mailing list" <[email protected]>
> Cc:
> Date: Tue, 16 Apr 2013 09:01:30 +0200
> Subject: [Linux-HA] Antw: Re: Q: limiting parallel execution of resource
> actions
>>> David Vossel <[email protected]> wrote on 16.04.2013 at 00:28 in message
> <[email protected]>:
>
> [...]
> >
> > hey,
> >
> > 'batch-limit' cluster option might help.
>
> Yes, but that's for every resource then. There are "independent
> lightweight resources" (like adding IP addresses), and "interacting
> heavyweight resources" (like Xen VMs live-migrating over one network
> channel to sync gigabytes of RAM). What makes sense for one doesn't
> necessarily make sense for the other...
>
> Regards,
> Ulrich
>
> >
> >
> > http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_available_cluster_options
> >
> > -- Vossel
>
>
>
>
>
> ---------- Forwarded message ----------
> From: Lars Marowsky-Bree <[email protected]>
> To: General Linux-HA mailing list <[email protected]>
> Cc:
> Date: Tue, 16 Apr 2013 09:14:14 +0200
> Subject: Re: [Linux-HA] Antw: Re: Q: limiting parallel execution of
> resource actions
> On 2013-04-16T09:01:30, Ulrich Windl <[email protected]>
> wrote:
>
> > Yes, but that's for every resource then. There are "independent
> > lightweight resources" (like adding IP addresses), and "interacting
> > heavyweight resources" (like Xen VMs live-migrating over one network
> > channel to sync gigabytes of RAM).
>
> "migration-limit"
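Setting the two knobs discussed here could look roughly like this (a sketch; the values are illustrative, the property names are the batch-limit and migration-limit options from the thread):

```shell
# Illustrative values only: batch-limit caps all parallel actions
# cluster-wide, migration-limit caps concurrent live migrations per node.
pcs property set batch-limit=10
pcs property set migration-limit=1

# crmsh equivalent:
crm configure property migration-limit=1
```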
>
>
> Regards,
>     Lars
>
> --
> Architect Storage/HA
> SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix
> Imendörffer, HRB 21284 (AG Nürnberg)
> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
>
>
>
>
> ---------- Forwarded message ----------
> From: "Andreas Mock" <[email protected]>
> To: "'General Linux-HA mailing list'" <[email protected]>,
> "'Andrew Beekhof'" <[email protected]>
> Cc:
> Date: Tue, 16 Apr 2013 11:47:05 +0200
> Subject: Re: [Linux-HA] Behaviour of fence/stonith device fence_imm
> Hi Marek, hi all,
>
> we just investigated this problem a little further while
> looking at the sources of fence_imm.
>
> It seems that the IMM device does a soft shutdown despite being
> documented differently. I can reproduce this with the
> ipmitool directly and also via ssh access.
>
> The only thing which seems to work in the expected rigorous
> way is the IPMI command 'power reset'. But with this
> command I can't shut down the server.
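The manual probing Andreas describes might look like this with ipmitool (a sketch; the host and credentials are placeholders):

```shell
# Host and credentials are placeholders, not values from the thread.
IMM=imm-host; U=USERID; P=PASSW0RD
ipmitool -I lanplus -H "$IMM" -U "$U" -P "$P" chassis power status
# The hard reset reported to act rigorously:
ipmitool -I lanplus -H "$IMM" -U "$U" -P "$P" chassis power reset
# The off / check status / on sequence the fence agent normally performs:
ipmitool -I lanplus -H "$IMM" -U "$U" -P "$P" chassis power off
ipmitool -I lanplus -H "$IMM" -U "$U" -P "$P" chassis power on
```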
>
> I'll offer more information when I get feedback on this
> behaviour.
>
> Best regards
> Andreas
>
>
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Marek Grac
> Sent: Monday, 15 April 2013 11:02
> To: Andrew Beekhof
> Cc: General Linux-HA mailing list
> Subject: Re: [Linux-HA] Behaviour of fence/stonith device fence_imm
>
> Hi,
>
> On 04/15/2013 04:17 AM, Andrew Beekhof wrote:
> > On 13/04/2013, at 12:21 AM, Andreas Mock <[email protected]> wrote:
> >
> >> Hi all,
> >>
> >> just played with the fence/stonith device fence_imm.
> >> (as part of pacemaker on RHEL6.x and clones)
> >>
> >> It is configured to use the action 'reboot'.
> >> This action seems to cause a graceful reboot of the node.
> >>
> >> My question. Is this graceful reboot feasible when the node
> >> gets unreliable or would it be better to power cycle the
> >> machine (off/on)?
> Yes, it will. For fence_imm the standard IPMILAN fence agent is used
> without additional options. It uses the method you described: power
> off / check status / power on; it looks like there are some changes
> in IMM we are not aware of. Please file a bugzilla for this issue; if you
> can do a proper non-graceful power off using ipmitool, please add it too.
>
> >> How can I achieve that the fence_imm is making a power cycle
> >> (off/on) instead of a "soft" reboot?
> >>
> Yes, you can use -M (method in STDIN/cluster configuration) with values
> 'onoff' (default) or 'cycle' (use reboot command on IPMI)
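A minimal sketch of passing method=cycle to the agent over STDIN, assuming the usual fence-agent key=value interface (host and credentials are placeholders):

```shell
# Placeholders for host and credentials; keys follow the common
# fence-agent STDIN key=value convention.
fence_imm <<'EOF'
ipaddr=imm-host
login=USERID
passwd=PASSW0RD
action=reboot
method=cycle
EOF
```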
>
> m,
>
>
>
>
> ---------- Forwarded message ----------
> From: Marcus Bointon <[email protected]>
> To: General Linux-HA mailing list <[email protected]>
> Cc:
> Date: Tue, 16 Apr 2013 15:50:07 +0200
> Subject: [Linux-HA] Resource move not moving
> I'm running crm using heartbeat 3.0.5 pacemaker 1.1.6 on Ubuntu Lucid 64.
>
> I have a small resource group containing an IP, ARP and email notifier on
> a cluster containing two nodes called proxy1 and proxy2. I asked it to move
> nodes, and it seems to say that was ok, but it hasn't actually moved, and
> crm_mon still shows it on the original node.
>
> # crm resource move proxyfloat3
> WARNING: Creating rsc_location constraint 'cli-standby-proxyfloat3' with a
> score of -INFINITY for resource proxyfloat3 on proxy1.
>         This will prevent proxyfloat3 from running on proxy1 until the
> constraint is removed using the 'crm_resource -U' command or manually with
> cibadmin
>         This will be the case even if proxy1 is the last node in the
> cluster
>         This message can be disabled with -Q
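One way to follow up on the warning above is to name the destination explicitly and then clear the constraint once the group has moved (a sketch using the resource and node names from the thread):

```shell
# Move the group to an explicit destination instead of "away from here":
crm resource move proxyfloat3 proxy2
crm_mon -1            # check where the group is running now
# Once it has settled, drop the cli-standby-proxyfloat3 constraint:
crm resource unmove proxyfloat3
# equivalently, as the warning text suggests:
crm_resource --resource proxyfloat3 -U
```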
>
> This was in syslog:
>
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib_process_request: Operation
> complete: op cib_delete for section constraints
> (origin=local/crm_resource/3, version=0.57.2): ok (rc=0)
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: - <cib admin_epoch="0"
> epoch="57" num_updates="2" />
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: + <cib
> validate-with="pacemaker-1.0" crm_feature_set="3.0.5" have-quorum="1"
> admin_epoch="0" epoch="58" num_updates="1" cib-last-written="Tue Apr 16
> 08:52:01 2013" dc-uuid="68890308-615b-4b28-bb8b-5aa00bdbf65c" >
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +   <configuration >
> Apr 16 13:32:35 proxy1 crmd: [2952]: info: abort_transition_graph:
> te_update_diff:124 - Triggered transition abort (complete=1, tag=diff,
> id=(null), magic=NA, cib=0.58.1) : Non-status change
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +     <constraints >
> Apr 16 13:32:35 proxy1 crmd: [2952]: info: do_state_transition: State
> transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL
> origin=abort_transition_graph ]
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +       <rsc_location
> id="cli-standby-proxyfloat3" rsc="proxyfloat3" >
> Apr 16 13:32:35 proxy1 crmd: [2952]: info: do_state_transition: All 2
> cluster nodes are eligible to run resources.
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +         <rule
> id="cli-standby-rule-proxyfloat3" score="-INFINITY" boolean-op="and" >
> Apr 16 13:32:35 proxy1 crmd: [2952]: info: do_pe_invoke: Query 150:
> Requesting the current CIB: S_POLICY_ENGINE
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +
> <expression id="cli-standby-expr-proxyfloat3" attribute="#uname"
> operation="eq" value="proxy1" type="string" __crm_diff_marker__="added:top"
> />
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +         </rule>
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +       </rsc_location>
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +     </constraints>
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: +   </configuration>
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib:diff: + </cib>
> Apr 16 13:32:35 proxy1 cib: [2948]: info: cib_process_request: Operation
> complete: op cib_modify for section constraints
> (origin=local/crm_resource/4, version=0.58.1): ok (rc=0)
>
> Yet crm status still shows:
>
>  Resource Group: proxyfloat3
>      ip3        (ocf::heartbeat:IPaddr2):       Started proxy1
>      ip3arp     (ocf::heartbeat:SendArp):       Started proxy1
>      ip3email   (ocf::heartbeat:MailTo):        Started proxy1
>
> So if all that's true, why is that resource group still on the original
> node? Is there something else I need to do?
>
> Marcus
> --
> Marcus Bointon
> Synchromedia Limited: Creators of http://www.smartmessages.net/
> UK info@hand CRM solutions
> [email protected] | http://www.synchromedia.co.uk/
>
>
>
>
> ---------- Forwarded message ----------
> From: RaSca <[email protected]>
> To: General Linux-HA mailing list <[email protected]>
> Cc:
> Date: Tue, 16 Apr 2013 15:56:11 +0200
> Subject: Re: [Linux-HA] Resource move not moving
> On Tue 16 Apr 2013 15:50:07 CEST, Marcus Bointon wrote:
> > I'm running crm using heartbeat 3.0.5 pacemaker 1.1.6 on Ubuntu Lucid 64.
> [...]
> > So if all that's true, why is that resource group still on the original
> node? Is there something else I need to do?
> > Marcus
>
> Try using crm_resource with -f, for forcing.
>
> --
> RaSca
> Mia Mamma Usa Linux: Nothing is impossible to understand, if you explain it well!
> [email protected]
> http://www.miamammausalinux.org
>
>
>
> ---------- Forwarded message ----------
> From: "fabian.herschel" <[email protected]>
> To: [email protected]
> Cc:
> Date: Tue, 16 Apr 2013 15:57:52 +0200
> Subject: Re: [Linux-HA] Resource move not moving
>
> Just a trial ... could you check the failcounts on both nodes?
> Maybe we also need more from your messages, as we only see the status
> changing to policy engine and not the next state. In your few lines the
> cluster is still not in status idle again, so there could still be pending
> actions or something like that.
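Checking the failcounts as suggested could look like this (a sketch; flags per the crm_mon and crm_failcount man pages, resource and node names from the thread):

```shell
# One-shot status including fail counts:
crm_mon -1 -f
# Query a specific resource's failcount per node:
crm_failcount -G -r proxyfloat3 -N proxy1
crm_failcount -G -r proxyfloat3 -N proxy2
# Clear a stale failure that may be pinning the group:
crm resource cleanup proxyfloat3
```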
>
>
>
> Sent from Samsung tablet
>
> Marcus Bointon <[email protected]> wrote:
> > I'm running crm using heartbeat 3.0.5 pacemaker 1.1.6 on Ubuntu Lucid 64.
> [...]
> > So if all that's true, why is that resource group still on the original
> > node? Is there something else I need to do?
>
>
