2018-04-27 7:13 GMT-06:00 Serkan Çoban:
>> but the doubt is whether I can use glusterfs with a SAN connected by FC?
> Yes, just format the volumes with XFS and you are ready to go.
>
Ok, excellent.
>
> For a replica in different DC, be careful about latency. What is the
> connection between DCs?
> It can be doable if latency is low.
> On Apr 26, 2018, at 9:00 PM, Krutika Dhananjay wrote:
> But anyway, why is copying data into a new unsharded volume disruptive for you?
The copy itself isn't; blowing away the existing volume and recreating it is.
That is for the usual reasons: storage on the cluster machines is not
infinite,
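A minimal sketch of what recreating the volume involves, assuming a hypothetical
volume gv0 with brick path /bricks/brick1/gv0 on three hosts (none of these
names are from the thread):

    # Stop and delete the volume definition; data on the bricks remains.
    gluster volume stop gv0
    gluster volume delete gv0

    # The old brick directories must be wiped (including the hidden
    # .glusterfs directory) before the paths can be reused.
    rm -rf /bricks/brick1/gv0 && mkdir -p /bricks/brick1/gv0   # on each host

    # Recreate the volume without sharding and copy the data back in
    # through a client mount.
    gluster volume create gv0 replica 3 \
        host1:/bricks/brick1/gv0 host2:/bricks/brick1/gv0 host3:/bricks/brick1/gv0
    gluster volume start gv0

This is why the step is disruptive: the volume is offline from the stop until
the copy back completes.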
Kaushal M wrote:
On Thu, Apr 26, 2018 at 9:06 PM Mark Staudinger wrote:
Hi Folks,
I'm trying to debug an issue that I've found while attempting to qualify
GlusterFS for potential distributed storage projects on the FreeBSD-11.1
server platform, using the existing package of GlusterFS v3.11.1.
Hi Hemant
I am not sure if Gluster is supported for RMAN; please check on that.
But I did find a couple of articles online related to the ORA-19510 errors
that point to either no space left or an improper RMAN configuration causing
this. They may be of help. Check them out.
https://oradbps.wordpr
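A quick way to rule out the no-space cause, assuming the backups land on a
Gluster mount at /mnt/glusterfs/rman (path hypothetical, not from the thread):

    # Check free space and inodes on the mount RMAN writes to.
    df -h /mnt/glusterfs/rman
    df -i /mnt/glusterfs/rman

    # Point RMAN explicitly at that mount; %U generates unique file names.
    rman target / <<'EOF'
    CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/mnt/glusterfs/rman/%U';
    SHOW ALL;
    EOF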
Sending again.
Can somebody please take a look and let me know if this is doable?
Folks,
We have Gluster clusters on version 3.8.13 and we are using them for RMAN backups.
We get errors/warnings,
RMAN-03009: failure of backup command on C1 channel at 03/28/2018 16:55:43
ORA-19510: failed to set si
On Fri, Apr 27, 2018 at 07:22:29PM +1200, Thing wrote:
> I have 4 nodes, so a quorum would be 3 of 4.
Nope, gluster doesn't work quite the way you're looking at it.
(Incidentally, I started off with the same expectation that you have.)
When you create a 4-brick replica 2 volume, you don't get a s
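For reference, the quorum behaviour being discussed is tunable per volume; a
sketch with a hypothetical volume name gv0:

    # Client-side quorum: 'auto' requires more than half of each replica
    # set to be up (in replica 2, a tie counts only if it includes the
    # first brick of the pair).
    gluster volume set gv0 cluster.quorum-type auto

    # Server-side quorum: glusterd stops bricks when the trusted pool
    # loses its majority of peers; the ratio is a pool-wide setting.
    gluster volume set gv0 cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51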
> but the doubt is whether I can use glusterfs with a SAN connected by FC?
Yes, just format the volumes with XFS and you are ready to go.
For a replica in different DC, be careful about latency. What is the
connection between DCs?
It can be doable if latency is low.
On Fri, Apr 27, 2018 at 4:02 PM, Ricky Gutie
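A minimal sketch of that preparation, assuming the FC LUN appears as /dev/sdb
and the brick lives under /bricks/brick1 (device and paths hypothetical):

    # Format the LUN with XFS (512-byte inodes leave room for gluster xattrs).
    mkfs.xfs -i size=512 /dev/sdb
    mkdir -p /bricks/brick1
    mount /dev/sdb /bricks/brick1
    mkdir -p /bricks/brick1/gv0

    # The directory can then be used as a brick, e.g.:
    # gluster volume create gv0 replica 2 node1:/bricks/brick1/gv0 node2:/bricks/brick1/gv0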
Hi, any advice?
On Wed., Apr 25, 2018, 19:56, Ricky Gutierrez wrote:
> Hi list, I need a little help. I currently have a cluster with VMware
> and 3 nodes, plus a storage array (Dell PowerVault) connected by FC in
> redundancy, and I'm thinking of migrating it to Proxmox since the
> maintenance co
For me, the process of copying out the drive file from oVirt is a tedious, very
manual process. Each VM has a single drive file with tens of thousands of
shards. Typical VM size is 100G for me, and it's all mostly sparse. So,
yes, a copy out from the gluster share is best.
Did the outstan
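One way to keep such images sparse while copying them off the gluster mount;
both commands below preserve holes (paths hypothetical):

    # cp can detect runs of zeros and recreate them as holes.
    cp --sparse=always /mnt/gluster/images/vm1.img /backup/vm1.img

    # qemu-img convert also skips zero regions when writing raw output.
    qemu-img convert -O raw /mnt/gluster/images/vm1.img /backup/vm1.raw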
Hi Jose,
Why are all the bricks visible in volume info if the pre-validation
for add-brick failed? I suspect that the remove-brick wasn't done
properly.
Could you provide cmd_history.log to verify this? It would also be
good to get the other log messages.
Also I need to know what are the bricks that were act
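A sketch of how to pull that history on the node where the commands were run,
assuming default log locations:

    # Every gluster CLI command and its result is recorded here.
    grep -E 'add-brick|remove-brick' /var/log/glusterfs/cmd_history.log

    # glusterd's own log usually shows why a pre-validation failed
    # (the file may be named etc-glusterfs-glusterd.vol.log on some installs).
    grep -i 'brick' /var/log/glusterfs/glusterd.log | tail -n 50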
Hi,
>>gluster volume create gv0 replica 2 glusterp1:/bricks/brick1/gv0
glusterp2:/bricks/brick1/gv0 glusterp3:/bricks/brick1/gv0
glusterp4:/bricks/brick1/gv0
This command will create a distributed-replicate volume (yes, you have to
enter 'y' at the warning message to get it created). We will have two
replica sub-volumes.
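To make the pairing explicit: with replica 2, bricks are grouped in the order
they are listed, so the command above builds two mirrored pairs that files are
distributed across (the "raid10"-like layout):

    # Replica set 0: glusterp1 + glusterp2
    # Replica set 1: glusterp3 + glusterp4
    gluster volume create gv0 replica 2 \
        glusterp1:/bricks/brick1/gv0 glusterp2:/bricks/brick1/gv0 \
        glusterp3:/bricks/brick1/gv0 glusterp4:/bricks/brick1/gv0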
Hi,
I have 4 nodes, so a quorum would be 3 of 4. The question, I suppose, is why
does the documentation give this command as an example without qualifying it?
So am I running the wrong command? I want a "raid10".
On 27 April 2018 at 18:05, Karthik Subrahmanya wrote:
> Hi,
>
> With replica 2 volumes o