Hey Mark,

Does the address `mybox` resolve on your system? Gluster requires a
resolvable hostname for bricks. If it doesn't resolve, add an entry
to your /etc/hosts, e.g. '127.0.0.1 mybox'. Once the name resolves,
Gluster will let you create the volume. You will then be able to
create your single-brick volume with
`gluster volume create <volume-name> mybox:/<brick-path>`

If you later want to expand your cluster, make sure the name `mybox`
is resolvable from all the other nodes and points to this first
system.
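
A minimal sketch of the steps above, reusing the hostname `mybox` and
the brick path `/gfs/mybox/brick1` from your earlier output (adjust
both to your environment):

```shell
# 1. Make sure `mybox` resolves. For a single-node setup, map it to
#    the loopback address if it doesn't already resolve:
echo '127.0.0.1 mybox' | sudo tee -a /etc/hosts

# Verify the name now resolves:
getent hosts mybox

# 2. Create the single-brick volume using the resolvable hostname
#    (glusterd rejects `localhost` and loopback literals here):
gluster volume create myVol1 mybox:/gfs/mybox/brick1

# 3. Start the volume and check its state:
gluster volume start myVol1
gluster volume info myVol1
```

These commands need a running glusterd on the box, so they are a
sketch rather than something you can run in isolation.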

~kaushal

On Mon, Aug 17, 2015 at 3:37 PM, Mark s2c <[email protected]> wrote:
> Hello
> Thanks for the suggestions, but they don’t work:
>
> [root@mybox ~]# gluster volume create myVol1 /gfs/mybox/brick1
> Wrong brick type: /gfs/mybox/brick1, use <HOSTNAME>:<export-dir-abs-path>
> Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>
> [arbiter <COUNT>]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy
> <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK>?<vg_name>... [force]
>
> [root@mybox ~]# gluster volume create myvol localhost:/gfs/mybox/brick1
> Please provide a valid hostname/ip other than localhost, 127.0.0.1 or
> loopback address (0.0.0.0 to 0.255.255.255).
>
> And as a recap, when I follow this form (which I did at first, hence the
> post):
> [root@mybox ~]# gluster volume create myvol1 mybox:/gfs/mybox/brick1
> volume create: myvol1: failed: Host mybox is not in 'Peer in Cluster' state
>
> Any other alternative suggestions would be useful.
> Thanks
>
>
> From: Jordan Willis <[email protected]>
> Date: Saturday, 15 August 2015 11:02
> To: Mark Lewis <[email protected]>
> Cc: Atin Mukherjee <[email protected]>, "[email protected]"
> <[email protected]>
>
> Subject: Re: [Gluster-users] One volume gluster vol
>
> If you are creating a volume that your brick is already mounted on, I’m not
> even sure you have to give it a hostname.
>
> gluster volume create myVol1 /gfs/mybox/brick1
>
>
> or
>
> gluster volume create myVol1 localhost:/gfs/mybox/brick1
>
>
>
>
> On Aug 15, 2015, at 2:37 AM, Mark s2c <[email protected]> wrote:
>
> [root@mybox ~]# df -h
> Filesystem               Size  Used Avail Use% Mounted on
> /dev/mapper/centos-root   50G  1.3G   49G   3% /
> devtmpfs                 7.8G     0  7.8G   0% /dev
> tmpfs                    7.8G     0  7.8G   0% /dev/shm
> tmpfs                    7.8G  8.7M  7.8G   1% /run
> tmpfs                    7.8G     0  7.8G   0% /sys/fs/cgroup
> /dev/sda1                497M  164M  334M  33% /boot
> /dev/mapper/centos-home  166G   33M  166G   1% /home
> /dev/mapper/brick1        17T   34M   17T   1% /gfs/mybox/brick1
> [root@mybox ~]# !531
> gluster volume create myVol1 mybox:/gfs/mybox/brick1
> volume create: myVol1: failed: Host mybox is not in 'Peer in Cluster' state
>
> Can you give me the command cos I can only find one instance of it on the
> net and it’s this one.
>
> Much obliged.
>
> From: Atin Mukherjee <[email protected]>
> Date: Friday, 14 August 2015 15:17
> To: Mark Lewis <[email protected]>
> Cc: "[email protected]" <[email protected]>
> Subject: Re: [Gluster-users] One volume gluster vol
>
> Can you please detail the exact issue. I don't see any issue in setting a
> single node cluster apart from sacrificing high availability.
> -Atin
> Sent from one plus one
> On Aug 14, 2015 1:06 PM, "Mark s2c" <[email protected]> wrote:
>>
>>
>>
>> Hello can I create a one volume gfs vol?
>> As I have no peer, I appear to only have local host and even if I
>> reference it with its host name as the error says, it doesn't work. Is it
>> even possible or do I need a second server?
>>
>> We've just bought a big box so a second would be a big outlay. Could I use
>> a VM with disproportionate bricks just to get the peerage set up and then
>> remove one of the members?
>> _______________________________________________
>> Gluster-users mailing list
>> [email protected]
>> http://www.gluster.org/mailman/listinfo/gluster-users
>
>
>
>
