Re: How to use the logger in the BackupFilter?

2016-07-24 Thread Jason
Hi Val,

Another question about point 3:
Even with user attributes, when we deploy by adding new machines to the cluster,
the rollout proceeds one machine group at a time. This means that during the
deployment window, not all nodes see the same user attributes (machine group
info), and if the Rendezvous hashing returns a new machine list for a partition
with a new machine in the top N (N = backup count + 1), the backup filters on
different nodes may make different decisions.

E.g. for partition p of cache c, suppose its node list after Rendezvous hashing
is A1, X1, B2, C3, D2, ..., where the letter is the machine and the digit is its
group. X is a newly added machine: nodes that have already been rolled know its
group is 1, but to the others it's unknown. Suppose backups = 2, and say we've
rolled out group 1 and are now rolling out group 2. On machine A the filter
selects (A1, B2, C3), but on machine C it may select (A1, X, B2), because C
doesn't yet know which group X belongs to. This causes inconsistency.
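
To make the divergence concrete, here is a minimal sketch of a group-aware check
whose answer depends on which attributes are visible locally (the "GROUP"
attribute name is hypothetical; it's the same check as the GroupBackupFilter
sketch in my earlier mail below):

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.lang.IgniteBiPredicate;

public class GroupFilterDivergence {
    /** Accept the candidate unless both groups are known and equal. */
    static final IgniteBiPredicate<ClusterNode, ClusterNode> FILTER = (primary, backup) -> {
        Object pGrp = primary.attribute("GROUP"); // group of the primary, if known locally
        Object bGrp = backup.attribute("GROUP");  // group of the candidate, if known locally
        return pGrp == null || bGrp == null || !pGrp.equals(bGrp);
    };
    // On machine A, X's attribute is visible ("1") and equals A1's group -> X rejected -> (A1, B2, C3).
    // On machine C, X's attribute isn't visible yet -> X accepted -> (A1, X, B2).
}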

And we must ensure there is no data loss, because the data is kept in memory only.

BTW, do you have any successful cases for this usage scenario? How do you
resolve this problem?

Thanks,
-Jason






Re: How to use the logger in the BackupFilter?

2016-07-21 Thread Jason
Thanks Alexey.

1. Tried injecting the Ignite instance with the code below, but it seems it
cannot be injected automatically either. Is extra work needed, e.g. some config
or an interface to implement, to make the @IgniteInstanceResource annotation
work?

/** Ignite instance. */
@IgniteInstanceResource
private Ignite ignite;
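
For reference, my understanding is that Ignite injects resources only into
objects it manages itself, so presumably the filter has to travel in the cache
configuration, roughly like the sketch below (the cache name and backup count
are hypothetical; our real deployment hard-codes the filter instead):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WiringSketch {
    public static void main(String[] args) {
        RendezvousAffinityFunction aff = new RendezvousAffinityFunction();
        aff.setBackupFilter(new ScaleUnitBackFilter()); // filter is part of the managed config

        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setAffinity(aff);
        ccfg.setBackups(2);

        Ignite ignite = Ignition.start(new IgniteConfiguration().setCacheConfiguration(ccfg));
    }
}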



2. When is the AffinityFunction synced between all the nodes? Only on node join?
If the BackupFilter's result depends on something node-specific, e.g. a local
file that changes very rarely, that will cause Ignite not to work correctly,
right?

3. Actually, our usage scenario is just as below:
i)   group all the machines in the cluster into different groups;
ii)  ensure that not all replicas of one partition are assigned to the same group;
iii) during deployment, restart the groups one by one, so that the data in
memory is never lost.

This should be a common scenario for production use, right? Does the Ignite team
have a recommended way to do this? (A sketch of what we have in mind follows
below.)
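
For concreteness, a minimal sketch of the kind of filter we mean, assuming every
node publishes its group as a user attribute via
IgniteConfiguration.setUserAttributes() before start (the "GROUP" attribute name
is hypothetical), so the value travels with discovery data instead of living in
a local file:

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.lang.IgniteBiPredicate;

public class GroupBackupFilter implements IgniteBiPredicate<ClusterNode, ClusterNode> {
    private static final long serialVersionUID = 1L;

    /** Name of the user attribute carrying the machine group (hypothetical). */
    private static final String GROUP_ATTR = "GROUP";

    /** Reject a candidate that sits in the same group as the primary. */
    @Override public boolean apply(ClusterNode primary, ClusterNode backup) {
        Object pGrp = primary.attribute(GROUP_ATTR);
        Object bGrp = backup.attribute(GROUP_ATTR);

        // If either group is unknown, accept the candidate rather than
        // starve the partition of backups.
        if (pGrp == null || bGrp == null)
            return true;

        return !pGrp.equals(bGrp);
    }
}

Each node would then set the attribute before start, e.g.
cfg.setUserAttributes(Collections.singletonMap("GROUP", "1")).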


Thanks,
-Jason 





How to use the logger in the BackupFilter?

2016-07-21 Thread Jason
Hi Ignite team,

I want to implement a customized BackupFilter, called from the
RendezvousAffinityFunction, for a feature of my own cluster, and I ran into a
problem: how do I get the logger in Ignite?

I've tried the ways below, but none of them works (NullPointerException).

1. Use the same approach as RendezvousAffinityFunction itself (below), but it
seems the logger cannot be injected automatically into my code, even though it
works in RendezvousAffinityFunction. Is any extra work needed for this?
/** Logger instance. */
@LoggerResource
private transient IgniteLogger log;

2. JavaLogger log = new JavaLogger();

3. U.warn(null, msg)

4. Pass the GridKernalContext to my class, then use IgniteLogger log =
ctx.log(myClass). It works when my class is created, but after a
marshal/unmarshal round trip it becomes null again.
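
As a workaround I'm considering re-acquiring the logger lazily after
unmarshalling; a sketch, assuming Ignition.localIgnite() is usable in the filter
(it only works on Ignite-managed threads, hence the fallback to plain JDK
logging):

import java.io.Serializable;

import org.apache.ignite.IgniteLogger;
import org.apache.ignite.Ignition;
import org.apache.ignite.logger.java.JavaLogger;

public abstract class LazyLoggerHolder implements Serializable {
    private static final long serialVersionUID = 1L;

    /** Transient, so it is simply re-acquired after unmarshalling. */
    private transient volatile IgniteLogger log;

    protected IgniteLogger logger() {
        IgniteLogger l = log;
        if (l == null) {
            try {
                l = Ignition.localIgnite().log(); // only works on an Ignite-managed thread
            }
            catch (Exception ignored) {
                l = new JavaLogger(); // fall back to JDK logging
            }
            log = l;
        }
        return l;
    }
}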

BTW, I hard-code my BackupFilter into the RendezvousAffinityFunction as the
default rather than setting it via config, because I use the .NET version and
wiring it through config is a little complicated right now.

Any suggestions on this?

My detailed class is as follows:
import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;

import org.apache.ignite.IgniteLogger;
import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.internal.util.typedef.internal.A;
import org.apache.ignite.internal.util.typedef.internal.LT;
import org.apache.ignite.internal.util.typedef.internal.U;
import org.apache.ignite.lang.IgniteBiPredicate;
import org.apache.ignite.resources.LoggerResource;

public class ScaleUnitBackFilter implements IgniteBiPredicate<ClusterNode, ClusterNode> {
    /** It's used by the JdkMarshaller. */
    private static final long serialVersionUID = -5036727407264096908L;

    private static final long ReloadCheckIntervalInMilliSecond = 30;

    /** Delay the loading to the first read. */
    private long lastLoadTime = 0;

    private static final String ScaleUnitFilePath = "d:/data/machineinfo.csv";

    /** Short machine name -> scale unit info, as read from the CSV. */
    private HashMap<String, MachineInfo> scaleUnitMap;

    /** Logger instance. */
    @LoggerResource
    private transient IgniteLogger log;

    public ScaleUnitBackFilter() {
        scaleUnitMap = new HashMap<>();
    }

    @Override
    public boolean apply(ClusterNode primaryNode, ClusterNode backupNodeCandidate) {
        long curTime = U.currentTimeMillis();
        if (curTime - lastLoadTime >= ReloadCheckIntervalInMilliSecond) {
            loadScaleUnitMap();
        }

        A.ensure(primaryNode.hostNames().size() >= 1, "Primary Node must have hostname.");
        A.ensure(backupNodeCandidate.hostNames().size() >= 1, "Backup Node must have hostname.");

        // Remove the domain from the full hostname.
        String pn = primaryNode.hostNames().toArray(new String[0])[0].split("\\.")[0];
        String bnc = backupNodeCandidate.hostNames().toArray(new String[0])[0].split("\\.")[0];
        LT.info(log, "PN: " + pn + ", BNC: " + bnc, false);

        if (scaleUnitMap == null || scaleUnitMap.isEmpty()) {
            LT.warn(log, null, "The machineinfo.csv file may be empty. !!!PAY MORE ATTENTION!!!", false);
            return true;
        }

        if (!scaleUnitMap.containsKey(pn) || !scaleUnitMap.containsKey(bnc)) {
            LT.warn(log, null, "One machine isn't in the machineinfo.csv. !!!PAY MORE ATTENTION!!!", false);
            return true;
        }

        MachineInfo pnInfo = scaleUnitMap.get(pn);
        LT.info(log, printMachineInfo(pn, pnInfo), false);
        MachineInfo bncInfo = scaleUnitMap.get(bnc);
        LT.info(log, printMachineInfo(bnc, bncInfo), false);

        // If in the same scale unit, or the backup node isn't in 'H' status,
        // don't select it as the backup node.
        if (pnInfo.scaleUnit.equals(bncInfo.scaleUnit) || !"H".equals(bncInfo.status)) {
            LT.info(log, "Backup Node Candidate is filtered!", false);
            return false;
        }

        LT.info(log, "PN: " + pn + ", BN: " + bnc + " is selected!", false);

        return true;
    }

    private String printMachineInfo(String machine, MachineInfo machineInfo) {
        return machine + "[" + machineInfo.scaleUnit + ", " + machineInfo.status + "]";
    }

    private synchronized void loadScaleUnitMap() {
        // Double check: another thread may have reloaded while we waited for the lock.
        long curTime = U.currentTimeMillis();
        if (curTime - lastLoadTime < ReloadCheckIntervalInMilliSecond) {
            return;
        }

        String line = null;
        String csvSplitBy = ",";
        BufferedReader br = null;

        try {
            br = new BufferedReader(new FileReader(ScaleUnitFilePath));
            while ((line = br.readLine()) != null) {
                String[] fields = line.split(csvSplitBy);