I understand that my test is a bit particular because the client was
also one of the servers.
Usually clients don't have direct access to the storage, but it still
made me think about how things are supposed to work.
For example, I did another test with three dd's, one on each server.
All the servers were writing to all the LUNs; in other words, each LUN
was accessed in parallel by three servers.
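Roughly what I ran, one instance per node (the mount point and file
names here are just examples):

    # on server1
    dd if=/dev/zero of=/gpfs/test/file_server1 bs=1M count=10240
    # on server2
    dd if=/dev/zero of=/gpfs/test/file_server2 bs=1M count=10240
    # on server3
    dd if=/dev/zero of=/gpfs/test/file_server3 bs=1M count=10240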
Is that a problem, or does GPFS manage the concurrency properly and
avoid data corruption?
I'm asking because I was not expecting a server to write to an NSD it
doesn't own, even if it is locally available.
I thought the general availability was for failover, not for parallel
access.
Regards,
Salvatore
On 05/11/14 10:22, Vic Cornell wrote:
Hi Salvatore,
If you are doing the IO on the NSD server itself and it can see all of
the NSDs, it will use its "local" access to write to the LUNs.
You need some GPFS clients to see the workload spread across all of
the NSD servers.
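If I remember right, you can check which path each node is using with
something like:

    mmlsdisk <filesystem> -m

which reports, per disk, on which node the IO is performed (locally or
via an NSD server).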
Vic
On 5 Nov 2014, at 10:15, Salvatore Di Nardo <[email protected]> wrote:
Hello again,
to understand GPFS better, I recently built a test GPFS cluster using
some old hardware that was going to be retired. The storage was SAN
devices, so instead of using native RAID I went for old-school GPFS.
The configuration is basically:
3x servers
3x SAN storages
2x SAN switches
I did no zoning, so all the servers can see all the LUNs, but on NSD
creation I gave each LUN a primary, secondary, and tertiary server,
with the following rule:
STORAGE     primary    secondary    tertiary
storage1    server1    server2      server3
storage2    server2    server3      server1
storage3    server3    server1      server2
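For reference, the NSD stanzas I fed to mmcrnsd looked more or less
like this (the NSD names and device paths are made up for the example):

    %nsd: nsd=nsd_st1_l1
      device=/dev/mapper/st1_lun1
      servers=server1,server2,server3
      usage=dataAndMetadata
      failureGroup=1
    %nsd: nsd=nsd_st2_l1
      device=/dev/mapper/st2_lun1
      servers=server2,server3,server1
      usage=dataAndMetadata
      failureGroup=2
    %nsd: nsd=nsd_st3_l1
      device=/dev/mapper/st3_lun1
      servers=server3,server1,server2
      usage=dataAndMetadata
      failureGroup=3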
Looking at mmcrnsd, it was my understanding that the primary server is
the one that writes to the NSD, and if it fails the next server in the
list takes ownership of the LUN.
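The server order can be checked with mmlsnsd; on my cluster the output
was roughly like this (filesystem and NSD names are examples):

    # mmlsnsd
     File system   Disk name    NSD servers
    --------------------------------------------------------------
     gpfs0         nsd_st1_l1   server1,server2,server3
     gpfs0         nsd_st2_l1   server2,server3,server1
     gpfs0         nsd_st3_l1   server3,server1,server2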
Now comes the question:
When I ran a dd from server1, I was surprised to discover that server1
was writing to all the LUNs; the other two servers were doing nothing.
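(I was judging this from the per-device statistics on each node, with
something like:

    iostat -xN 2

run on all three servers while the dd was going; only server1 showed
IO on the LUN devices.)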
This behaviour surprised me because on GSS only the recovery group (RG)
owner can write, so one server "asks" another server to write to its
own RGs; in fact on GSS you can see a lot of Ethernet traffic and IO/s
on each server. While I understand the situation is different, I'm
puzzled that all the servers seem able to write to all the LUNs.
SAN devices usually should be connected to one server only, as parallel
access could cause data corruption. In environments where you connect a
SAN to multiple servers (for example a VMware cloud), it is the
software's task to prevent the servers from overwriting each other (and
corrupting data).
Honestly, what I was expecting was server1 writing to its own LUNs, and
data traffic (Ethernet) to the other two servers, basically asking
*them* to write to the other LUNs. I don't know whether this behaviour
is normal or not. I tried to find documentation about it, but could not
find any.
Could somebody tell me whether this "every server writes to all the
LUNs" behaviour is intended or not?
Thanks in advance,
Salvatore
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss