On 12/8/2011 7:52 AM, Shashank wrote:
> I want to know whether _datavolume can be used for non-database
> filesystems. Oracle support is recommending the use of _datavolume
> along with _netdev
The datavolume option was a way to tell older Oracle database versions that
the filesystem supports direct I/O.
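The distinction can be made concrete with fstab entries; a hedged sketch, assuming example device paths and mount points (not taken from the original mails):

```shell
# Database volume on an older OCFS2/Oracle setup: datavolume forces
# direct I/O for the files, _netdev defers mounting until networking
# (and thus o2cb) is up. Paths and devices are illustrative.
/dev/sdb1  /u02/oradata  ocfs2  _netdev,datavolume,nointr  0 0
# Ordinary (non-database) filesystem: _netdev only, no datavolume.
/dev/sdc1  /shared       ocfs2  _netdev                    0 0
```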
I want to know whether _datavolume can be used for non-database
filesystems. Oracle support is recommending the use of _datavolume
along with _netdev
In reading the documentation, I have found the following which
suggests _datavolume should never be used for anything other than the
CRS/OCR/DATA/RE
Which distribution is this? The distros it is working on right now include
sles, opensuse, fedora, ubuntu and debian. We are not shipping all
the bits for rhel yet.
On 02/11/2011 06:22 AM, Alain.Moulle wrote:
Hi,
I've tried to configure ocfs2.pcmk again with ocfs2 rpms 1.6.3.1
and rpms pacemake
In fact, when I trace in the o2cb OCF script, I get:
ocfs2_controld[9265]: 2011/02/11_15:40:15 info: get_cluster_type:
Cluster type is: 'openais'.
ocfs2_controld[9265]: 2011/02/11_15:40:15 info:
init_ais_connection_classic: Creating connection to our Corosync plugin
ocfs2_controld[9265]: 2011/0
Hi,
I've tried to configure ocfs2.pcmk again with ocfs2 rpms 1.6.3.1
and rpms pacemaker-1.1.2-7.el6 & corosync-1.2.3-21.el6 :
I've configured the clone-dlm and the o2cb-dlm as I did
with 1.4.3-3, and with the export in /etc/init.d/corosync:
export
COROSYNC_DEFAULT_CONFIG_IFACE="openaisserv
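For reference, the pacemaker side this setup is aiming at is usually expressed along the following lines; a sketch assuming the crm shell and the ocf:pacemaker:controld and ocf:ocfs2:o2cb resource agents, not the poster's actual configuration:

```shell
# crm configure sketch: dlm and o2cb as a cloned group on every node.
primitive dlm ocf:pacemaker:controld
primitive o2cb ocf:ocfs2:o2cb
group base-group dlm o2cb
clone base-clone base-group meta interleave="true"
```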
Hi Sunil,
yes I know about ocfs2.pcmk, but it was absolutely not working
in previous releases, so :
- is this ocfs2 stack for pacemaker maintained and supported?
- is it really used on some customers' clusters?
Thanks
Regards
Alain
Sunil Mushran wrote:
On 02/08/2011 01:32 AM, Alain.Moulle wr
On 02/08/2011 01:32 AM, Alain.Moulle wrote:
> OK but what I wonder now is :
> is OCFS2 really capable of fencing an adjacent node ?
> or is it only capable of "node self-fencing" ?
> I thought that ocfs2 was only capable of "node self-fencing" because
> there is no configuration of any fencing devi
Hi,
sorry, but I don't understand. Do you mean that even if the link
problem is only
on node1's side (i.e. if its Ethernet board has a breakdown), this will
lead to a self-fence of node1, whereas it is node 0 which has the real
problem?
Thanks for this clarification
PS : and no, I can't
-----Original Message-----
From: ocfs2-users-boun...@oss.oracle.com
[mailto:ocfs2-users-boun...@oss.oracle.com] On Behalf Of Alain.Moulle
Sent: Tuesday, February 08, 2011 3:06 PM
To: ocfs2-users@oss.oracle.com
Subject: Re: [Ocfs2-users] Question about eth communication for ocfs2
Hi,
I have given it
Hi,
I have given it a try:
point-to-point (PtP) link between node1 and node2,
IO traffic on OCFS2 on both sides,
unplug the PtP cable from both nodes:
=> both IO streams on OCFS2 stall
=> after a short while, node2 self-fences,
=> and shortly after node2's self-fence, the IO traffic
on node1 s
And a related question:
if there is a network breakdown on the only communication link
for ocfs2
in a two-node cluster, what is the behavior:
- will both nodes decide to suicide?
- will both nodes remain alive, provided the exchange of
information on disk keeps working?
On 02/07/2011 06:14 AM, Alain.Moulle wrote:
> Hi,
>
> I wonder if there is a way to configure two ip_addr in the
> /etc/ocfs2/cluster.conf
> so that it works on two physically distinct networks, to get the
> redundancy of
> the link for ocfs2 ?
> (I know there should be the possibility to use the
Hi,
> Hi,
>
> I wonder if there is a way to configure two ip_addr in the
> /etc/ocfs2/cluster.conf
> so that it works on two physically distinct networks, to get the
> redundancy of
> the link for ocfs2 ?
No.
> (I know there should be the possibility to use the "bonding"
> functionality on
Hi,
I wonder if there is a way to configure two ip_addr in the
/etc/ocfs2/cluster.conf
so that it works on two physically distinct networks, to get the
redundancy of
the link for ocfs2 ?
(I know there should be the possibility to use the "bonding"
functionality on
Ethernet, but I would like n
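For context, a minimal /etc/ocfs2/cluster.conf looks like the sketch below (names and addresses are examples). Each node stanza accepts exactly one ip_address, which is why interconnect redundancy has to come from below the filesystem, e.g. via bonding:

```
cluster:
        node_count = 2
        name = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.1.1
        number = 0
        name = node0
        cluster = mycluster

node:
        ip_port = 7777
        ip_address = 192.168.1.2
        number = 1
        name = node1
        cluster = mycluster
```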
ocfs2 is a shared disk clustered file system. As the disk is accessible
by all nodes, there is no need to keep the data in sync. Instead the fs
needs to ensure nodes coordinate access such that multiple nodes reading
and writing concurrently do not corrupt the fs.
You may want to read the ocfs2 1.
OCFS2 does not do any of that; it's just a filesystem. What is your
underlying disk architecture? FC-SAN?
Ed Stafford wrote:
> I'm considering using OCFS2 on some image servers (~25M image files)
> to keep them in sync across all the nodes (currently 7). The
> questions I couldn't find answers
I'm considering using OCFS2 on some image servers (~25M image files)
to keep them in sync across all the nodes (currently 7). The
questions I couldn't find answers to are:
1) How many nodes can I lose before I can no longer access data?
1a) Is there a way to increase the number?
2) Is the data m
OCFS2, namely ocfs2-1.2.7.
Linux is Red Hat release 4, 2.6.9.22 SMP.
I have an SR open (7045509.994) but was wondering if you knew about any
issues like the one below.
I have an existing database on OCFS which I want to move to new disk
areas.
I formatted the new areas with OCFS2, 4K block size and 256K cluster size.
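The formatting step described would look roughly like this; a sketch with an example device path and label, not the poster's actual command:

```shell
# 4K block size, 256K cluster size, 4 node slots, example label/device.
mkfs.ocfs2 -b 4K -C 256K -N 4 -L newdata /dev/sdb1
```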
[EMAIL PROTECTED] wrote:
> In mainline, that issue was resolved in 2.6.21. We have patches for
> 2.6.20 but not older than that.
>
Thanks Sunil,
I guess we need to upgrade the kernel.
Cheers,
Markus
In mainline, that issue was resolved in 2.6.21. We have patches for
2.6.20 but not older than that.
Sunil
Hi all,
I just started with OCFS2 and set up a 2-node cluster where one node is
writing and both read from the clustered volume. Currently I'm moving
data to the volume via tar and since I started this more and more memory
is used until nothing is left and the box reboots. This takes about two
Tim Lank wrote:
No problem. :-)
The main points of having a RAC cluster as I understand it are
availability and scalability on low-cost systems. Shouldn't ocfs2 have
the ability to perform online expansion like this? I know that Red Hat's
GFS can add journals to accommodate new nodes while the
No problem. :-)
The main points of having a RAC cluster as I understand it are
availability and scalability on low-cost systems. Shouldn't ocfs2 have
the ability to perform online expansion like this? I know that Red Hat's
GFS can add journals to accommodate new nodes while the filesystems are
o
You are absolutely right. Sorry about that. I think I didn't have enough
caffeine this morning ;-).
That procedure applies when you have available slots and just want to add a
node to the cluster.
Any change to the superblock needs to have the partition offline.
You don't need to umount to upgrade dat
So since tunefs.ocfs2 is the tool that actually does all of the work to
add more slots and the tunefs.ocfs2 man page states:
DESCRIPTION
tunefs.ocfs2 is used to adjust OCFS2 file system parameters on disk.
In order to prevent data loss, tunefs.ocfs2 will not perform any
a
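The offline procedure being discussed can be sketched as follows, with an example device and mount point; tunefs.ocfs2's -N option sets the new number of node slots:

```shell
# Unmount the volume on every node in the cluster first.
umount /u02/oradata
# Then, from one node, raise the slot count (example: to 6).
tunefs.ocfs2 -N 6 /dev/sdb1
```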
The console does that (You can use the console to add a new node).
tunefs.ocfs2 is actually the tool that will change the superblock to add
more slots (see the man pages) and it is called by the console (a more
user friendly interface) to perform the action.
o2cb_ctl only defines the new node(s) o
So does the o2cb_ctl command touch the ocfs2 filesystem superblock and
increase the node slot value in this example?
>> From http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html
>
> 19 - How do I add a new node to an online cluster?
> You can use the console to add a new node.
From http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_faq.html
19 - How do I add a new node to an online cluster?
You can use the console to add a new node. However, you will need to
explicitly add the new node on all the online nodes. That is, adding on
one node and propagating to t
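The per-node step from the FAQ can be sketched with o2cb_ctl, using example values for the node name, number, address and cluster name:

```shell
# Run on every online cluster node so each one learns about the new node.
o2cb_ctl -C -i -n node2 -t node -a number=2 -a ip_address=192.168.1.3 \
         -a ip_port=7777 -a cluster=mycluster
```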
We have a test 10gR2 RAC cluster using ocfs2 filesystems for the
Clusterware files and the Database files.
We need to increase the node slots to accommodate new RAC nodes. Is it
true that we will need to umount these filesystems for the upgrade (i.e.
Database and Clusterware also)?
We are plannin
On Tue, Jan 30, 2007 at 11:08:20PM -0800, Brandon Lamb wrote:
> So I shutdown one, then two. But how does the quorum(sp?) work? If I
> am down to only 2 servers up and running will they fence? I read
> something about the lowest number node having more votes or something
> like that?
If you do a r
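The tie-break being half-remembered here can be modeled as below. This is an illustrative sketch of the rule discussed on the list (majority wins, and on an exact even split the half holding the lowest node number survives), not the actual o2cb quorum code:

```python
def keeps_quorum(my_group, all_nodes):
    """Decide whether this half of a split cluster keeps quorum.

    Model of the rule discussed above (an assumption, not real o2cb
    code): a strict majority wins; on an exact 50/50 split, the group
    containing the lowest node number survives; the other half fences.
    """
    others = set(all_nodes) - set(my_group)
    if len(my_group) * 2 > len(all_nodes):
        return True                      # strict majority survives
    if len(my_group) * 2 < len(all_nodes):
        return False                     # strict minority fences
    return min(my_group) < min(others)   # tie: lowest node number wins

# 4-node cluster split 2/2: the half holding node 0 stays up.
print(keeps_quorum({0, 1}, {0, 1, 2, 3}))  # True
print(keeps_quorum({2, 3}, {0, 1, 2, 3}))  # False
```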
Can someone correct me on my understanding of the order to take down a cluster?
Let's say I have a server exporting a drive via iscsi, and 4 nodes
connecting to it.
Now I want to shutdown all my machines to do maintenance or what have you.
So I shutdown one, then two. But how does the quorum(sp?
Just create a one node cluster.
However, if you were to mount two mirrored volumes on the same node,
you will have problems as detailed in this thread:
http://oss.oracle.com/pipermail/ocfs2-users/2006-July/000630.html
Thanks to Andre, the next drop of ocfs2-tools will have a fix for this
(abilit
Hi everybody,
I am new to the ocfs2 technology.
I have a cluster installed, and I have EMC clones for backup purposes.
Now I want to mount the disk cloned by EMC on another machine (not in the
cluster). Do I need to make another cluster for this purpose? What are the
best practices to mount the d