Here is the way to load an existing xml file:
http://gemfire.docs.pivotal.io/latest/configuring/cluster_config/gfsh_load_from_shared_dir.html
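
In short (a minimal sketch from memory, so option names, paths, and the
"locator1" directory are just assumptions; check the page above for the
exact steps): drop the existing files into the locator's shared config
directory and start the locator with the load-from-dir option.

    # copy the existing configuration into the locator's shared config area
    mkdir -p locator1/cluster_config/cluster
    cp /path/to/cache.xml locator1/cluster_config/cluster/cluster.xml
    cp /path/to/gemfire.properties locator1/cluster_config/cluster/cluster.properties

    # start the locator and have it pick the files up
    gfsh> start locator --name=locator1 --enable-cluster-configuration=true --load-cluster-configuration-from-dir=true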

-Anil.
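
P.S. For the behavior discussed below (the cluster_config directory only
appearing once gfsh actually changes the configuration), a quick way to see
it, and to pull the generated files out with the export command, is roughly
the following. This is only a sketch; the region name and zip file name are
made-up examples:

    gfsh> connect --locator=localhost[10334]
    gfsh> create region --name=exampleRegion --type=REPLICATE
    gfsh> export cluster-configuration --zip-file-name=myClusterConfig.zip

After the create region, the cluster_config/cluster directory under the
locator's working directory should contain a cluster.xml with that region.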


On Fri, Sep 25, 2015 at 2:18 PM, Barry Oglesby <[email protected]> wrote:

> Wes,
>
> I'm not sure about the "other commands", but the rest of what you said is
> right. The cluster config files are not created based on xml configuration,
> but only when gfsh is used.
>
> Barry Oglesby
> GemFire Advanced Customer Engineering (ACE)
> For immediate support please contact Pivotal Support at
> http://support.pivotal.io/
>
>
> On Fri, Sep 25, 2015 at 2:14 PM, Real Wes Williams <[email protected]>
> wrote:
>
> > Hi Barry,
> >
> > Neither I nor my associates are creating regions via gfsh but rather
> > letting them be created via cache.xml.  If I understand your inference,
> > the User’s Guide may not be clear that the cluster_config directory is
> > not created when regions are defined via cache.xml or servers are started
> > via gfsh, but only when regions are created via gfsh (and/or perhaps
> > other commands).
> >
> > > On Sep 25, 2015, at 5:09 PM, Barry Oglesby <[email protected]> wrote:
> > >
> > > Wes,
> > >
> > > The directory and files don't get created unless they need to be. Are
> > > you creating regions or anything using gfsh? The directory and files
> > > will get created at that time.
> > >
> > > Barry Oglesby
> > > GemFire Advanced Customer Engineering (ACE)
> > > For immediate support please contact Pivotal Support at
> > > http://support.pivotal.io/
> > >
> > >
> > > On Fri, Sep 25, 2015 at 1:55 PM, Anilkumar Gingade <[email protected]>
> > > wrote:
> > >
> > >> Hi Wes,
> > >>
> > >> I could see cluster config files getting created under:
> > >> ~/gem/gemfire82_dev/build/product/locator1/cluster_config/cluster
> > >>
> > >> I am trying this with my gemfire 8.2 checkout; it should be the same
> > >> with Geode...
> > >>
> > >> The recommended way to generate/get the config files is by using the
> > >> export command:
> > >>
> > >> http://gemfire.docs.pivotal.io/latest/configuring/cluster_config/persisting_configurations.html
> > >>
> > >> http://gemfire.docs.pivotal.io/latest/configuring/cluster_config/gfsh_config_troubleshooting.html
> > >>
> > >> Sorry, I am referring to the GemFire docs...
> > >>
> > >> -Anil.
> > >>
> > >> On Fri, Sep 25, 2015 at 1:07 PM, Real Wes Williams <[email protected]>
> > >> wrote:
> > >>
> > >>> I’m finding behavior with the cluster configuration service that
> > >>> seems different than what is in print. Can someone please clarify?
> > >>>
> > >>> Page 163 of http://gemfire.docs.pivotal.io/pdf/pivotal-gemfire-ug.pdf
> > >>> states:
> > >>>
> > >>> "For configurations that apply to all members of a cluster, the
> locator
> > >>> creates a cluster subdirectory within the cluster_config directory….
> > >>> This directory contains:
> > >>> • cluster.xml -- A cache.xml file containing configuration common to
> > all
> > >>> members
> > >>> • cluster.properties -- a gemfire.properties file containing
> properties
> > >>> common to all members
> > >>> • Jar files that are intended for deployment to all members”
> > >>>
> > >>> However, the directory does not get created. I just spun up a locator
> > >>> and server with cache.xml and saw that the cluster-config directory
> > >>> was not created. I stopped the cluster and it was still not created.
> > >>> The locator config had:
> > >>>   -Dgemfire.enable-cluster-configuration=true
> > >>>
> > >>> The locator’s log had the message:
> > >>>   [info 2015/09/25 11:28:51.356 EDT loc1-d9635 <Pooled Message
> > >>>   Processor 1> tid=0x56] Cluster configuration service start up
> > >>>   completed successfully and is now running ….
> > >>>
> > >>> I’m finding that others also have this behavior.  Specifically,
> > >>> under what circumstances is the cluster-config directory created?
> > >>>
> > >>> Thanks,
> > >>> Wes
> > >>
> >
> >
>
