[infinispan-dev] Infinispan 10 "Chupacabra"

2019-10-28 Thread Tristan Tarrant
Infinispan 10 is here! New server, new marshalling, better and faster 
REST API, new CLI, new container image, new operator!!!

https://infinispan.org/blog/2019/10/28/infinispan-10-final/


___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] BoundedConcurrentHashMap

2019-07-10 Thread Tristan Tarrant
The bounding part was never part of Doug's code, but was written by us
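For anyone curious what the "bounding" adds on top of a plain concurrent map: eviction once a size bound is exceeded (Infinispan's class supported LRU and LIRS policies per segment). Below is a deliberately simplified, JDK-only sketch of the idea — illustrative only, not the actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Simplified sketch of a size-bounded map with LRU eviction. Infinispan's
 * BoundedConcurrentHashMap did this per segment for concurrency; here a
 * single synchronized LinkedHashMap illustrates the bounding logic only.
 */
public class BoundedLruMap<K, V> {
    private final Map<K, V> map;

    public BoundedLruMap(final int maxEntries) {
        // accessOrder=true: iteration order is least-recently-used first;
        // removeEldestEntry evicts once the bound is exceeded
        this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public synchronized V put(K key, V value) { return map.put(key, value); }
    public synchronized V get(K key) { return map.get(key); }
    public synchronized int size() { return map.size(); }

    public static void main(String[] args) {
        BoundedLruMap<String, Integer> m = new BoundedLruMap<>(2);
        m.put("a", 1);
        m.put("b", 2);
        m.get("a");    // touch "a" so "b" becomes the eviction candidate
        m.put("c", 3); // exceeds the bound: "b" is evicted
        System.out.println(m.size() + " entries, b=" + m.get("b")); // 2 entries, b=null
    }
}
```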

On Wed, 10 Jul 2019, 20:25 Sanne Grinovero,  wrote:

> Hi all,
>
> does anyone remember where BoundedConcurrentHashMap was copied from?
>
> we have a copy in Hibernate ORM; the comments state:
>  - copied from Infinispan
>  - original author Doug Lea
>
> but I don't see any similar implementation in JSR166, nor any
> reference to this classname on their archives:
>  -
> http://jsr166-concurrency.10961.n7.nabble.com/template/NamlServlet.jtp?macro=search_page=2=BoundedConcurrentHashMap
>
> The comments look suspiciously like this was originally a copy
> of ConcurrentHashMap... I'm wondering which fixes we're missing out,
> and if I should plan to get rid of this liability since Infinispan
> also seems to have removed it.
>
> Thanks,
> Sanne
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>

Re: [infinispan-dev] Obtaining cache via CDI on WildFly

2018-12-05 Thread Tristan Tarrant
On 12/5/18 9:44 AM, Gunnar Morling wrote:
> Hey,
> 
> I was trying to configure and inject an Infinispan cache through CDI,
> running on WildFly 14, using the Infinispan modules provided by the
> server.
> 
> While I'm not sure whether that's something supported or recommended,
> I found this preferable over adding Infinispan another time as part of
> the deployment. I couldn't find any recent info on doing this (would
> love any pointers, though), so here's my findings, in case it's
> interesting for others:

You should not be using the Infinispan subsystem that comes with WildFly, 
as its configuration capabilities are a bit limited; use instead the 
modules we supply:

http://infinispan.org/docs/stable/user_guide/user_guide.html#infinispan_modules_for_wildfly_eap

> Btw. I also couldn't find an example for configuring a cache through
> jboss-cli.sh, perhaps that's something to consider, too?

Yes, that should be added.
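For reference, the basic injection pattern with the Infinispan CDI extension looks roughly like the sketch below (treat it as illustrative; the extension must be active, package names have moved between versions, and named caches additionally need a qualifier plus a @ConfigureCache producer):

```java
import javax.inject.Inject;
import org.infinispan.Cache;

public class GreetingService {
    // With the Infinispan CDI extension on the classpath, this injects
    // the default cache from the running cache manager
    @Inject
    private Cache<String, String> cache;

    public void remember(String key, String value) {
        cache.put(key, value);
    }
}
```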

Tristan


[infinispan-dev] Branches

2018-10-08 Thread Tristan Tarrant
Hi all,

master will stay on 9.4.x for a couple of weeks: we'll probably have 1 
or 2 bug fix micros by then.
We will then branch 9.4.x and master will become open for 10.0.x work.

Tristan


[infinispan-dev] Infinispan 9.4.0.Final

2018-10-08 Thread Tristan Tarrant
Infinispan 9.4.0.Final has been released

Come and read all about it:

https://blog.infinispan.org/2018/10/infinispan-940final.html

Thanks to the whole core team and community for the contributions. You 
are awesome as usual!

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


[infinispan-dev] ServerNG simple design doc

2018-10-02 Thread Tristan Tarrant
Guys, I have created

https://github.com/infinispan/infinispan-designs/pull/10

which outlines what the new server looks/behaves like.

Please comment on the PR.

Tristan



[infinispan-dev] 9.4.0.CR3 and code name vote

2018-09-17 Thread Tristan Tarrant
Hi everybody,

we've had to amend our release schedule, so today, instead of Final, I'm 
announcing 9.4.0.CR3.

Please read all about it, and vote for its codename, on:

https://blog.infinispan.org/2018/09/infinispan-940cr3-933-and-codename-vote.html


Tristan


[infinispan-dev] 9.4.0.Final: one extra sprint

2018-09-13 Thread Tristan Tarrant
Hi all,

in view of the current status of the master branch and the fact that 
some essential items are still not ready to be merged, we are going to 
add an extra sprint.

We will therefore release 9.4.0.CR3 tomorrow Friday 14th and cut Final 
on Friday October 5th.

Tristan


[infinispan-dev] Infinispan chat moves to Zulip

2018-05-02 Thread Tristan Tarrant
For over 9 years Infinispan has used IRC for real-time interaction 
between the development team, contributors and users. While IRC has 
served us well over the years, we decided that the time has come to 
start using something better. After trying out a few "candidates" we 
have settled on Zulip.

Zulip gives us many improvements over IRC and over many of the other 
alternatives out there. In particular:

* multiple conversation streams
* further filtering by topic
* organization management for grouping users
* it's open source

So, if you want to chat with us, join us on the Infinispan Zulip 
Organization [1]

Tristan

[1] https://infinispan.zulipchat.com


[infinispan-dev] Infinispan 9.2.2.Final and 9.3.0.Alpha1 are out

2018-05-02 Thread Tristan Tarrant
We have two releases to announce:

first of all is 9.2.2.Final which introduces a second-level cache 
provider for the upcoming Hibernate ORM 5.3 as well as numerous 
bugfixes. [1]

Next is 9.3.0.Alpha1 which is the first iteration of our next release. [2]
The main item here, aside from bugfixes and preparation work for 
upcoming features, is the upgrade of our server component to WildFly 12.

Go and get them on our download page [3]

Tristan


[1] 
https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12310799&version=12337245
[2] 
https://issues.jboss.org/secure/ReleaseNote.jspa?projectId=12310799&version=12337078
[3] https://infinispan.org/download/


[infinispan-dev] Weekly IRC Meeting logs 2018-04-16

2018-04-16 Thread Tristan Tarrant
Get them here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-04-16-14.00.log.html

Tristan


Re: [infinispan-dev] Protobuf metadata cache and x-site

2018-04-12 Thread Tristan Tarrant
We also need: backup priority for internal caches as well as conflict 
resolution for backups to avoid broken data replicating in the wrong 
direction.
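For illustration, the kind of per-cache backup configuration that currently cannot be applied to the internal schema cache looks roughly like this in Infinispan's XML (the site name is a placeholder; the whole point under discussion is that the internal cache's configuration cannot be overridden this way today):

```xml
<replicated-cache name="___protobuf_metadata">
   <backups>
      <backup site="NYC" strategy="SYNC"/>
   </backups>
</replicated-cache>
```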

Tristan

On 4/12/18 10:13 PM, Tristan Tarrant wrote:
> I think we can certainly make it additive, especially now that we have 
> configuration templates in place: the user supplies a base template, and 
> the internal cache logic overrides it with what is needed so that broken 
> configs are less probable (but still possible). Alternatively, instead 
> of overriding, we just check that it matches the requirements.
> 
> Tristan
> 
> On 4/12/18 10:10 PM, Adrian Nistor wrote:
>> Backing up caches with protobuf payload to a remote site will not work 
>> if they are indexed, unless the remote site already has the schema for 
>> the types in question, or else indexing will fail. If the cache is not 
>> indexed it matters less.
>>
>> So the replication of protobuf metadata cache has to be arranged 
>> somehow before any other data is replicated. Manual replication is 
>> indeed PITA.
>>
>> I remember in the very early version of remote query the protobuf 
>> metadata cache configuration was created programmatically on startup 
>> unless a manually defined configuration with that name was found, 
>> already provided by the user. In that case the user's config was used. 
>> This approach had the benefit of allowing the user to gain control if 
>> needed. But can also lead to gloom and doom. Was that too bad to do it 
>> again :)))?
>>
>> Adrian
>>
>> On 04/12/2018 10:27 PM, Tristan Tarrant wrote:
>>> It is definitely an internal cache. Because of this, automatically
>>> backing it up to a remote site might not be such a good idea.
>>>
>>> Backups are enabled per-cache, and therefore just blindly replicating
>>> the schema cache to the other site is not a good idea.
>>>
>>> I think that we need a cache-manager-level backup setting that does the
>>> right thing.
>>>
>>> Tristan
>>>
>>> On 4/12/18 7:01 PM, Pedro Ruivo wrote:
>>>> Wouldn't it be better to assume the protobuf cache doesn't fit the 
>>>> internal
>>>> cache use case? :)
>>>>
>>>> On 12-04-2018 17:21, Galder Zamarreno wrote:
>>>>> Ok, we do need to find a better way to deal with this.
>>>>>
>>>>> JIRA: https://issues.jboss.org/browse/ISPN-9074
>>>>>
>>>>> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo <pe...@infinispan.org
>>>>> <mailto:pe...@infinispan.org>> wrote:
>>>>>
>>>>>
>>>>>
>>>>>   On 12-04-2018 15:49, Galder Zamarreno wrote:
>>>>>    > Hi,
>>>>>    >
>>>>>    > We have an issue with protobuf metadata cache.
>>>>>    >
>>>>>    > If you run in a multi-site scenario, protobuf metadata
>>>>>   information does
>>>>>    > not travel across sites by default.
>>>>>    >
>>>>>    > Being an internal cache, is it possible to somehow
>>>>>   override/reconfigure
>>>>>    > it so that cross-site configuration can be added in 
>>>>> standalone.xml?
>>>>>
>>>>>   No :( since it is an internal cache, its configuration can't 
>>>>> be changed.
>>>>>
>>>>>    >
>>>>>    > We're currently running a periodic job that checks if the
>>>>>   metadata is
>>>>>    > present and if not present add it. So, we have a 
>>>>> workaround for
>>>>>   it, but
>>>>>    > it'd be not very user friendly for end users.
>>>>>    >
>>>>>    > Thoughts?
>>>>>
>>>>>   Unfortunately none... it is the first time an internal cache 
>>>>> needs
>>>>>   to do
>>>>>   some x-site.
>>>>>
>>>>>    >
>>>>>    > Cheers,
>>>>>    > Galder
>>>>>    >
>>>>>    >
>>>>>    > ___
>>>>>    > infinispan-dev mailing list
>>>>>    > infinispan-dev@lists.jboss.org
>>>>>   <mailto:infinispan-dev@lists.jboss.org>
>>>>>    > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>>    >
>>>>>   ___
>>>>>   infinispan-dev mailing list
>>>>>   infinispan-dev@lists.jboss.org 
>>>>> <mailto:infinispan-dev@lists.jboss.org>
>>>>>   https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>>
>>>>>
>>>>>
>>>>> ___
>>>>> infinispan-dev mailing list
>>>>> infinispan-dev@lists.jboss.org
>>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>>
>>>> ___
>>>> infinispan-dev mailing list
>>>> infinispan-dev@lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>
> 

-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat

Re: [infinispan-dev] Protobuf metadata cache and x-site

2018-04-12 Thread Tristan Tarrant
I think we can certainly make it additive, especially now that we have 
configuration templates in place: the user supplies a base template, and 
the internal cache logic overrides it with what is needed so that broken 
configs are less probable (but still possible). Alternatively, instead 
of overriding, we just check that it matches the requirements.

Tristan

On 4/12/18 10:10 PM, Adrian Nistor wrote:
> Backing up caches with protobuf payload to a remote site will not work 
> if they are indexed, unless the remote site already has the schema for 
> the types in question, or else indexing will fail. If the cache is not 
> indexed it matters less.
> 
> So the replication of protobuf metadata cache has to be arranged somehow 
> before any other data is replicated. Manual replication is indeed PITA.
> 
> I remember in the very early version of remote query the protobuf 
> metadata cache configuration was created programmatically on startup 
> unless a manually defined configuration with that name was found, 
> already provided by the user. In that case the user's config was used. 
> This approach had the benefit of allowing the user to gain control if 
> needed. But can also lead to gloom and doom. Was that too bad to do it 
> again :)))?
> 
> Adrian
> 
> On 04/12/2018 10:27 PM, Tristan Tarrant wrote:
>> It is definitely an internal cache. Because of this, automatically
>> backing it up to a remote site might not be such a good idea.
>>
>> Backups are enabled per-cache, and therefore just blindly replicating
>> the schema cache to the other site is not a good idea.
>>
>> I think that we need a cache-manager-level backup setting that does the
>> right thing.
>>
>> Tristan
>>
>> On 4/12/18 7:01 PM, Pedro Ruivo wrote:
>>> Wouldn't it be better to assume the protobuf cache doesn't fit the internal
>>> cache use case? :)
>>>
>>> On 12-04-2018 17:21, Galder Zamarreno wrote:
>>>> Ok, we do need to find a better way to deal with this.
>>>>
>>>> JIRA: https://issues.jboss.org/browse/ISPN-9074
>>>>
>>>> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo <pe...@infinispan.org
>>>> <mailto:pe...@infinispan.org>> wrote:
>>>>
>>>>
>>>>
>>>>   On 12-04-2018 15:49, Galder Zamarreno wrote:
>>>>    > Hi,
>>>>    >
>>>>    > We have an issue with protobuf metadata cache.
>>>>    >
>>>>    > If you run in a multi-site scenario, protobuf metadata
>>>>   information does
>>>>    > not travel across sites by default.
>>>>    >
>>>>    > Being an internal cache, is it possible to somehow
>>>>   override/reconfigure
>>>>    > it so that cross-site configuration can be added in 
>>>> standalone.xml?
>>>>
>>>>   No :( since it is an internal cache, its configuration can't 
>>>> be changed.
>>>>
>>>>    >
>>>>    > We're currently running a periodic job that checks if the
>>>>   metadata is
>>>>    > present and if not present add it. So, we have a workaround 
>>>> for
>>>>   it, but
>>>>    > it'd be not very user friendly for end users.
>>>>    >
>>>>    > Thoughts?
>>>>
>>>>   Unfortunately none... it is the first time an internal cache 
>>>> needs
>>>>   to do
>>>>   some x-site.
>>>>
>>>>    >
>>>>    > Cheers,
>>>>    > Galder
>>>>    >
>>>>    >
>>>>    > ___
>>>>    > infinispan-dev mailing list
>>>>    > infinispan-dev@lists.jboss.org
>>>>   <mailto:infinispan-dev@lists.jboss.org>
>>>>    > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>    >
>>>>   ___
>>>>   infinispan-dev mailing list
>>>>   infinispan-dev@lists.jboss.org 
>>>> <mailto:infinispan-dev@lists.jboss.org>
>>>>   https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>>>
>>>>
>>>> ___
>>>> infinispan-dev mailing list
>>>> infinispan-dev@lists.jboss.org
>>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>>
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
> 

-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat

Re: [infinispan-dev] Protobuf metadata cache and x-site

2018-04-12 Thread Tristan Tarrant
It is definitely an internal cache. Because of this, automatically 
backing it up to a remote site might not be such a good idea.

Backups are enabled per-cache, and therefore just blindly replicating 
the schema cache to the other site is not a good idea.

I think that we need a cache-manager-level backup setting that does the 
right thing.

Tristan

On 4/12/18 7:01 PM, Pedro Ruivo wrote:
> Wouldn't it be better to assume the protobuf cache doesn't fit the internal
> cache use case? :)
> 
> On 12-04-2018 17:21, Galder Zamarreno wrote:
>> Ok, we do need to find a better way to deal with this.
>>
>> JIRA: https://issues.jboss.org/browse/ISPN-9074
>>
>> On Thu, Apr 12, 2018 at 5:56 PM Pedro Ruivo <pe...@infinispan.org
>> <mailto:pe...@infinispan.org>> wrote:
>>
>>
>>
>>  On 12-04-2018 15:49, Galder Zamarreno wrote:
>>   > Hi,
>>   >
>>   > We have an issue with protobuf metadata cache.
>>   >
>>   > If you run in a multi-site scenario, protobuf metadata
>>  information does
>>   > not travel across sites by default.
>>   >
>>   > Being an internal cache, is it possible to somehow
>>  override/reconfigure
>>   > it so that cross-site configuration can be added in standalone.xml?
>>
>>  No :( since it is an internal cache, its configuration can't be changed.
>>
>>   >
>>   > We're currently running a periodic job that checks if the
>>  metadata is
>>   > present and if not present add it. So, we have a workaround for
>>  it, but
>>   > it'd be not very user friendly for end users.
>>   >
>>   > Thoughts?
>>
>>  Unfortunately none... it is the first time an internal cache needs
>>  to do
>>  some x-site.
>>
>>   >
>>   > Cheers,
>>   > Galder
>>   >
>>   >
>>   > ___
>>   > infinispan-dev mailing list
>>   > infinispan-dev@lists.jboss.org
>>  <mailto:infinispan-dev@lists.jboss.org>
>>   > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>   >
>>  ___
>>  infinispan-dev mailing list
>>  infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
>>  https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
>>
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


Re: [infinispan-dev] CLI hangs for huge cache if RocksDB is used

2018-04-02 Thread Tristan Tarrant
I think it makes sense to discuss this here as Will has been busy working
on cache store iteration performance, and I'm sure he's interested in the
RocksDB-specific optimizations.
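For context, the RocksDB property mentioned in the question below can be read through the Java API roughly as follows — a sketch assuming the rocksdbjni library, with the path being a placeholder and error handling elided:

```java
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class EstimateKeys {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/rocksdb-example")) {
            // An O(1) estimate of the key count, as opposed to iterating
            // over every key in the store
            String estimate = db.getProperty("rocksdb.estimate-num-keys");
            System.out.println("Estimated keys: " + estimate);
        }
    }
}
```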

--
Tristan Tarrant
Infinispan Lead & Data Grid Architect
Red Hat

On Mon, 2 Apr 2018, 19:37 Galder Zamarreno, <gal...@redhat.com> wrote:

> Infinispan version? Thread dumps?
>
> Best if you open a user forum post here:
>
> https://developer.jboss.org/en/infinispan/content?filterID=contentstatus%5Bpublished%5D~objecttype~objecttype%5Bthread%5D
>
> Cheers,
>
> On Mon, Apr 2, 2018 at 5:47 AM Sergey Chernolyas <
> sergey.chernol...@gmail.com> wrote:
>
>> Hi!
>>
>> I am using the RocksDB cache store. I have run into a problem where the
>> CLI/Web console hangs for a long time before showing information about all
>> caches. I uploaded 30_000_000 objects to one cache. Recent versions of
>> RocksDB have the property 'rocksdb.estimate-num-keys', which contains the
>> count of keys. I added support for the property in the method
>> RocksDBCacheStore.size, but the CLI/Web performance changed only a little.
>>
>> How can I fix the problem with CLI/Web performance?
>>
>> A lot of thanks!
>>
>> --
>> -
>>
>> With best regards, Sergey Chernolyas
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Testsuite stability

2018-03-05 Thread Tristan Tarrant
On 3/5/18 11:39 AM, Radim Vansa wrote:
> On 03/01/2018 08:54 PM, Tristan Tarrant wrote:
>> Team,
>>
>> we currently have 6 failures happening on master:
>>
>> org.infinispan.test.hibernate.cache.commons.entity.EntityRegionAccessStrategyTest.testUpdate[non-JTA,
>> REPL_SYNC,AccessType[read-write]]
>>
>>  Radim is investigating this one in [1]
> 
> Uh, how is [1] related to 2LC? Both tests start with 'Ent' but that's
> all... I think Galder should be investigating that failure.

Damn, sleep deprivation plays tricks on me.

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


[infinispan-dev] Testsuite stability

2018-03-01 Thread Tristan Tarrant
Team,

we currently have 6 failures happening on master:
 
org.infinispan.test.hibernate.cache.commons.entity.EntityRegionAccessStrategyTest.testUpdate[non-JTA,
 
REPL_SYNC,AccessType[read-write]]

Radim is investigating this one in [1]

org.infinispan.query.blackbox.CompatModeClusteredCacheTest.testMerge

Gustavo/Adrian, any info on this one ?

org.infinispan.query.remote.impl.ProtobufMetadataCachePreserveStateAcrossRestartsTest.testStatePreserved

Adrian ?

org.infinispan.spring.support.embedded.InfinispanDefaultCacheFactoryBeanContextTest.springTestContextPrepareTestInstance
org.infinispan.spring.provider.sample.SampleRemoteCacheTest.springTestContextPrepareTestInstance

These two are fixed by [2]

org.infinispan.topology.ClusterTopologyManagerImplTest.testCoordinatorLostDuringRebalance

Dan ?

I think we can definitely get master green in time for 9.2.1.Final next 
week.


[1] https://github.com/infinispan/infinispan/pull/5746
[2] https://github.com/infinispan/infinispan/pull/5803
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

2018-03-01 Thread Tristan Tarrant
Why not just prestart caches ?

On 3/1/18 5:14 PM, Thomas SEGISMONT wrote:
> 
> 2018-03-01 16:36 GMT+01:00 Tristan Tarrant <ttarr...@redhat.com 
> <mailto:ttarr...@redhat.com>>:
> 
> You need to use the brand new CacheAdmin API:
> 
> 
> http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches
> 
> <http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches>
> 
> 
> I'll look into that for Vert.x 3.6 which will be based on Infinispan 9.2.
> 
> Is there any way to achieve these goals with 9.1.x?
> 
> 
> 
> 
> Tristan
> 
> On 3/1/18 4:30 PM, Thomas SEGISMONT wrote:
>  > Hi,
>  >
>  > This email follows up on my testing of the Infinispan Cluster Manager
>  > for Vert.x on Kubernetes.
>  >
>  > In one of the tests, we want to make sure that, after a rolling
> update
>  > of the application, the data submitted to Vert.x' AsyncMap is still
>  > present. And I found that when the underlying cache is predefined in
>  > infinispan.xml, the data is present, otherwise it's not.
>  >
>  > I pushed a simple reproducer on GitHub:
>  > https://github.com/tsegismont/cachedataloss
> <https://github.com/tsegismont/cachedataloss>
>  >
>  > The code does this:
>  > - a first node is started, and creates data
>  > - new nodes are started, but they don't invoke cacheManager.getCache
>  > - the initial member is killed
>  > - a "testing" member is started, printing out the data in the console
>  >
>  > Here are my findings.
>  >
>  > 1/ Even when caches are declared in infinispan.xml, the data is lost
>  > after the initial member goes away.
>  >
>  > A little digging showed that the caches are really distributed only
>  > after you invoke cacheManager.getCache
>  >
>  > 2/ Checking cluster status "starts" triggers distribution
>  >
>  > I was wondering why the behavior was not the same as with my Vert.x
>  > testing on Openshift. And then realized the only difference was the
>  > cluster readiness check, which reads the cluster health. So I updated
>  > the reproducer code to add such a check (still without invoking
>  > cacheManager.getCache). Then the caches defined in infinispan.xml
> have
>  > their data distributed.
>  >
>  > So,
>  >
>  > 1/ How can I make sure caches are distributed on all nodes, even
> if some
>  > nodes never try to get a reference with cacheManager.getCache, or
> don't
>  > check cluster health?
>  > 2/ Are we doing something wrong with our way to declare the default
>  > configuration for caches [1][2]?
>  >
>  > Thanks,
>  > Thomas
>  >
>  > [1]
>  >
> 
> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10
> 
> <https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10>
>  > [2]
>  >
> 
> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22
> 
> <https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22>
>  >
>  > ___
>  > infinispan-dev mailing list
>  > infinispan-dev@lists.jboss.org
> <mailto:infinispan-dev@lists.jboss.org>
>  > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> <https://lists.jboss.org/mailman/listinfo/infinispan-dev>
>  >
> 
> --
> Tristan Tarrant
> Infinispan Lead and Data Grid Architect
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> <https://lists.jboss.org/mailman/listinfo/infinispan-dev>
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


Re: [infinispan-dev] Embedded mode: how-to get all caches started on all nodes?

2018-03-01 Thread Tristan Tarrant
You need to use the brand new CacheAdmin API:

http://infinispan.org/docs/stable/user_guide/user_guide.html#obtaining_caches
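An untested sketch of what that looks like in code, per the 9.2 API (cache name and configuration are placeholders):

```java
import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class AdminApiExample {
    public static void main(String[] args) {
        EmbeddedCacheManager cacheManager = new DefaultCacheManager();
        Configuration config = new ConfigurationBuilder()
              .clustering().cacheMode(CacheMode.DIST_SYNC)
              .build();
        // Creates the cache across the cluster, or returns the existing
        // one if it is already defined
        Cache<String, String> cache =
              cacheManager.administration().getOrCreateCache("mycache", config);
        cache.put("k", "v");
    }
}
```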


Tristan

On 3/1/18 4:30 PM, Thomas SEGISMONT wrote:
> Hi,
> 
> This email follows up on my testing of the Infinispan Cluster Manager 
> for Vert.x on Kubernetes.
> 
> In one of the tests, we want to make sure that, after a rolling update 
> of the application, the data submitted to Vert.x' AsyncMap is still 
> present. And I found that when the underlying cache is predefined in 
> infinispan.xml, the data is present, otherwise it's not.
> 
> I pushed a simple reproducer on GitHub: 
> https://github.com/tsegismont/cachedataloss
> 
> The code does this:
> - a first node is started, and creates data
> - new nodes are started, but they don't invoke cacheManager.getCache
> - the initial member is killed
> - a "testing" member is started, printing out the data in the console
> 
> Here are my findings.
> 
> 1/ Even when caches are declared in infinispan.xml, the data is lost 
> after the initial member goes away.
> 
> A little digging showed that the caches are really distributed only 
> after you invoke cacheManager.getCache
> 
> 2/ Checking cluster status "starts" triggers distribution
> 
> I was wondering why the behavior was not the same as with my Vert.x 
> testing on Openshift. And then realized the only difference was the 
> cluster readiness check, which reads the cluster health. So I updated 
> the reproducer code to add such a check (still without invoking 
> cacheManager.getCache). Then the caches defined in infinispan.xml have 
> their data distributed.
> 
> So,
> 
> 1/ How can I make sure caches are distributed on all nodes, even if some 
> nodes never try to get a reference with cacheManager.getCache, or don't 
> check cluster health?
> 2/ Are we doing something wrong with our way to declare the default 
> configuration for caches [1][2]?
> 
> Thanks,
> Thomas
> 
> [1] 
> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L10
> [2] 
> https://github.com/tsegismont/cachedataloss/blob/master/src/main/resources/infinispan.xml#L22
> 
> _______
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


[infinispan-dev] ci.infinispan.org

2018-03-01 Thread Tristan Tarrant
Hi all,

just a few notes on ci.infinispan.org:

- Added a permanent redirect rule from http to https
- Refreshed JDKs (9.0.4, 1.8.0_161, 1.8.0_sr5fp10)
- Updated Maven to 3.5.2 and Ant to 1.10.2
- Installed git 2.9.3 from the Software Collections to resolve the issue 
of shallow clones not working correctly

Additionally, the envinject plugin for Jenkins is preventing the 
inherited environment variables from leaking into the agent build. While 
this creates more reliable builds, it also caused failures in the 
WildFly integration tests because they could not resolve env.JAVA_HOME.
I have therefore added a line in Jenkinsfile for master that selects the 
JDK tool() to use for the build.
Unfortunately there is no way for declarative pipelines to parameterize 
this for other JDKs, so we will probably have to adopt a different 
strategy in order to build with different JDKs.
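For the record, the declarative-pipeline construct in question looks roughly like this (the tool name is illustrative and must match a JDK configured globally in Jenkins):

```groovy
pipeline {
    agent any
    tools {
        // Must match a JDK tool name configured in Jenkins; declarative
        // pipelines cannot parameterize this per build, hence the limitation
        jdk 'JDK 1.8'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B clean install'
            }
        }
    }
}
```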

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


[infinispan-dev] 9.3 branch in a week

2018-03-01 Thread Tristan Tarrant
Hi all,

we will branch for 9.3 on March 7th.

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


[infinispan-dev] Infinispan 9.2.0.Final

2018-02-28 Thread Tristan Tarrant
We have finally released Infinispan 9.2.0.Final.

Come and read all about it:

http://blog.infinispan.org/2018/02/infinispan-920final.html

Thanks to the whole core team and community for the contributions. You 
are awesome !

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


[infinispan-dev] Weekly IRC Meeting Logs 2018-02-26

2018-02-26 Thread Tristan Tarrant
Hi all,

the weekly meeting logs are available:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-02-26-15.02.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


[infinispan-dev] Infinispan 9.2.0.CR3

2018-02-21 Thread Tristan Tarrant
Dear all,

we have released Infinispan 9.2.0.CR3. Read all about it here:

http://blog.infinispan.org/2018/02/infinispan-920cr3.html

Enjoy !

Tristan
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat


Re: [infinispan-dev] ISPN-8798 ByteString places too strict a constraint on cache name length

2018-02-13 Thread Tristan Tarrant
We can cut 9.1.6.Final today.
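For background, the 127-byte ceiling quoted below is what you get when a name's length is stored in a single signed byte, while reading the same byte as unsigned allows up to 255. A hypothetical, JDK-only illustration — this is not Infinispan's actual ByteString code:

```java
import java.nio.charset.StandardCharsets;

// Illustrative sketch of why a single length byte caps name sizes: a signed
// byte tops out at 127, while an unsigned read allows up to 255.
public class LengthPrefixedName {
    static byte[] encode(String name) {
        byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
        if (utf8.length > 255) {
            throw new IllegalArgumentException("name longer than 255 bytes");
        }
        byte[] out = new byte[utf8.length + 1];
        out[0] = (byte) utf8.length;          // negative if interpreted as signed
        System.arraycopy(utf8, 0, out, 1, utf8.length);
        return out;
    }

    static String decode(byte[] encoded) {
        int len = encoded[0] & 0xFF;          // unsigned read: 0..255
        return new String(encoded, 1, len, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) sb.append('x');
        String longName = sb.toString();
        byte[] enc = encode(longName);
        // 200 does not fit in a signed byte, which is where a naive
        // 127-byte limit comes from
        System.out.println("raw length byte: " + enc[0]);
        System.out.println("round-trips: " + decode(enc).equals(longName));
    }
}
```

Reading the length back with `& 0xFF` is what extends the workable range to 255.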

--
Tristan Tarrant
Infinispan Lead & Data Grid Architect
Red Hat

On 13 Feb 2018 14:21, "Paul Ferraro" <paul.ferr...@redhat.com> wrote:

> Can one of the devs please review this patch?
> https://github.com/infinispan/infinispan/pull/5750
>
> The limit of cache names sizes to 127 bytes is too limiting for
> hibernate/JPA 2nd level cache deployments, which generate cache names
> using fully qualified class names of entity classes, which are user
> generated thus can easily exceed 128 bytes (but are far less likely to
> exceed 255).  This is exacerbated by the JPA integration, which
> additionally appends the deployment name.  We have a long term
> solution for this, but in the meantime, the above patch is sufficient
> to pass the TCK.
>
> We'll also need a 9.1.6.Final release ASAP, lest we revert back to
> Infinispan 8.2.x for WF12, the feature freeze for which is tomorrow
> (they are considering this upgrade a feature, given the scope of its
> impact).
>
> Thanks,
>
> Paul
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>
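
The signed-vs-unsigned length byte at the heart of this limit can be 
illustrated with a small sketch. This is a hypothetical length-prefixed 
encoding, not the actual ByteString implementation; the class and method 
names are made up for illustration:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: a single signed byte caps a length check at 127,
// while reading the same byte back as unsigned (b & 0xFF) allows names
// up to 255 bytes -- the relaxation discussed in the patch above.
public class NameLengthSketch {
    static byte[] encode(String name) {
        byte[] utf8 = name.getBytes(StandardCharsets.UTF_8);
        if (utf8.length > 255) {
            throw new IllegalArgumentException("name longer than 255 bytes");
        }
        byte[] out = new byte[utf8.length + 1];
        out[0] = (byte) utf8.length; // appears negative when length > 127
        System.arraycopy(utf8, 0, out, 1, utf8.length);
        return out;
    }

    static String decode(byte[] encoded) {
        int len = encoded[0] & 0xFF; // unsigned read recovers 128..255
        return new String(encoded, 1, len, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // A JPA-style generated name that would fail a 127-byte check
        String name = "com.example.deployment.war/com.example.model.VeryLongEntityName"
                .repeat(3);
        System.out.println(name.length() > 127);                // true
        System.out.println(decode(encode(name)).equals(name));  // true
    }
}
```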
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] 9.2.0 endgame plan

2018-02-11 Thread Tristan Tarrant
I had originally planned for a release for Wed 14th, but there are a 
number of things I'd like to see landing before Final and looking at the 
list I recommend doing a CR3.

In particular:
- Radim's Hot Rod changes
- Performance regressions as reported by Will
- Ensure that Sanne is happy with the WF modules
- Documentation and quickstarts/simple tutorials for new features
- Quickstarts/simple tutorials work flawlessly

Tristan

P.S.
I'll be on PTO Mon/Tue 12/13 February.
-- 
Tristan Tarrant
Infinispan Lead and Data Grid Architect
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Hot Rod secured by default

2018-02-05 Thread Tristan Tarrant
Sorry for reviving this thread, but I want to make sure we all agree on 
the following points.

DEFAULT CONFIGURATIONS
- The endpoints MUST be secure by default (authentication MUST be 
enabled and required) in all of the supplied default configurations.
- We can ship non-secure configurations, but these need to be clearly 
marked as such in the configuration filename (e.g. 
standalone-unsecured.xml).
- Memcached MUST NOT be enabled by default as we do not implement the 
binary protocol which is the only one that can do authn/encryption
- The default configurations (standalone.xml, domain.xml, cloud.xml) 
MUST enable only non-plaintext mechs (e.g. digest et al)

SERVER CHANGES
- Warn if a plain text mech is enabled on an unencrypted endpoint

API
- We MUST NOT add a "trust all certs" switch to the client config as 
that would thwart the whole purpose of encryption.

OPENSHIFT
- In the context of OpenShift, all pods MUST trust the master CA. This 
means that the CA must be injected into the trusted CAs for the pods AND 
into the JDK cacerts file. This MUST be done by the OpenShift JDK image 
automatically. (Debian does this on startup: [1])

Tristan

[1] 
https://git.mikael.io/mikaelhg/ca-certificates-java/blob/debian/20170531/src/main/java/org/debian/security/UpdateCertificates.java
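
The proposed warning under SERVER CHANGES reduces to a simple predicate. 
A minimal sketch, assuming a hypothetical check class and an illustrative 
list of plaintext mechanisms (not the actual server implementation):

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Minimal sketch of the proposed check: warn when a plaintext SASL
// mechanism is enabled on an endpoint without encryption.
public class PlaintextMechCheck {
    static final Set<String> PLAINTEXT_MECHS = Set.of("PLAIN", "BASIC");

    static List<String> warnings(boolean encrypted, List<String> enabledMechs) {
        if (encrypted) {
            return List.of(); // TLS protects credentials on the wire
        }
        return enabledMechs.stream()
                .filter(PLAINTEXT_MECHS::contains)
                .map(m -> "WARN: plaintext mech " + m + " on unencrypted endpoint")
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(warnings(false, List.of("PLAIN", "DIGEST-MD5")));
        // -> [WARN: plaintext mech PLAIN on unencrypted endpoint]
        System.out.println(warnings(true, List.of("PLAIN")));
        // -> []
    }
}
```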

On 9/14/17 5:45 PM, Galder Zamarreño wrote:
> Gustavo's reply was the agreement reached. Secured by default and an easy way 
> to use it unsecured is the best middle ground IMO.
> 
> So, we've done the securing part partially, which needs to be completed by 
> [2] (currently assigned to Tristan).
> 
> More importantly, we also need to complete [3] so that we have ship the 
> unsecured configuration, and then show people how to use that (docus, 
> examples...etc).
> 
> If you want to help, taking ownership of [3] would be best.
> 
> Cheers,
> 
> [2] https://issues.jboss.org/browse/ISPN-7815
> [3] https://issues.jboss.org/browse/ISPN-7818
> 
>> On 6 Sep 2017, at 11:03, Katia Aresti <kare...@redhat.com> wrote:
>>
>> @Emmanuel, sure it's not a big deal, but starting fast and smooth without 
>> any trouble helps adoption. Concerning the ticket, there is already one that 
>> was acted. I can work on that, even if is assigned to Galder now.
>>
>> @Gustavo
>> Yes, as I read - better - now on the security part, it is said for the 
>> console that we need those. My head skipped that paragraph or I read that 
>> badly, and I was wondering if it was more something related to "roles" 
>> rather than a user. My bad, because I read too fast sometimes and skip 
>> things ! Maybe the paragraph of the security in the console should be moved 
>> down to the console part, which is small to read now ?  When I read there 
>> "see the security part bellow" I was like : ok, the security is done !! :)
>>
>> Thank you for your replies !
>>
>> Katia
>>
>>
>> On Wed, Sep 6, 2017 at 10:52 AM, Gustavo Fernandes <gust...@infinispan.org> 
>> wrote:
>> Comments inlined
>>
>> On Tue, Sep 5, 2017 at 5:03 PM, Katia Aresti <kare...@redhat.com> wrote:
>> And then I want to go to the console, requires me to put again the 
>> user/password. And it does not work. And I don't see how to disable 
>> security. And I don't know what to do. And I'm like : why do I need security 
>> at all here ?
>>
>>
>> The console credentials are specified with MGMT_USER/MGMT_PASS env 
>> variables, did you try those? It will not work for APP_USER/APP_PASS.
>>
>>   
>> I wonder if you want to reconsider the "secured by default" point after my 
>> experience.
>>
>>
>> The outcome of the discussion is that the clustered.xml will be secured by 
>> default, but you should be able to launch a container without any security 
>> by simply passing an alternate xml in the startup, and we'll ship this XML 
>> with the server.
>>
>>
>> Gustavo
>>   
>>
>> My 2 cents,
>>
>> Katia
>>
>> On Tue, May 9, 2017 at 2:24 PM, Galder Zamarreño <gal...@redhat.com> wrote:
>> Hi all,
>>
>> Tristan and I had chat yesterday and I've distilled the contents of the 
>> discussion and the feedback here into a JIRA [1]. The JIRA contains several 
>> subtasks to handle these aspects:
>>
>> 1. Remove auth check in server's CacheDecodeContext.
>> 2. Default server configuration should require authentication in all entry 
>> points.
>> 3. Provide an unauthenticated configuration that users can easily switch to.
>> 4. Remove default username+passwords in docker image and instead show an 
>> info/warn message when these are 

[infinispan-dev] Weekly Infinispan IRC logs 2018-01-29

2018-01-29 Thread Tristan Tarrant
Hi all,

the weekly Infinispan logs are here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-29-15.01.log.html

Tristan
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Weekly Infinispan IRC meeting logs 2018-01-08

2018-01-09 Thread Tristan Tarrant
The weekly meeting logs are here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2018/infinispan.2018-01-08-15.01.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead and JBoss Data Grid Chief Architect
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Weekly IRC Meeting logs 2017-12-04

2017-12-04 Thread Tristan Tarrant
Hi all,

we had our weekly meeting on #infinispan. The logs:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-12-04-15.00.log.html

Enjoy

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] PR labels

2017-12-03 Thread Tristan Tarrant
Following the suggestions, I have removed:

[Ready for review] - Use [Preview] when it ain't ready
[Wait CI  Results] - Always wait for CI results
[Check CI Failures!] - See above
[On Ice] - Just close it and reopen it
[Next release] - Just close it and reopen it

I have left [Backport] in there to mean "already fully reviewed for 
another branch, only minimal effort required here".

Tristan

On 12/3/17 6:08 PM, Tristan Tarrant wrote:
> Ok, good point.
> 
> Tristan
> 
> On 12/1/17 10:07 AM, Radim Vansa wrote:
>> On 12/01/2017 10:04 AM, Radim Vansa wrote:
>>> On 12/01/2017 09:26 AM, Tristan Tarrant wrote:
>>>> Hello people,
>>>>
>>>> I'd like to rationalize the PR labels because I believe some of them 
>>>> are
>>>> useless:
>>>>
>>>> [Ready for review] - Any PR without the [Preview] label must fall under
>>>> this category
>>>> [Backport] - The burden should be on the PR owner to create relevant
>>>> backport PRs, not on the reviewer
>>>
>>> I think that [Backport] means that this is already in upstream, and
>>> therefore review should be mostly formal (not breaking APIs but not
>>> "this could be done 1% better").
>>
>> Hit send too fast... The complexity of a review indicates time spent
>> with the review; I'd expect a backport review to be a 15 minute job, not
>> 2 hour one, so when looking for an appetizer before lunch these are
>> on-sight good candidates.
>>
>>> Also it is a second warning for reviewer that this shouldn't be
>>> cherry-picked on master (when merging from cmdline).
>>
>>>
>>>> [Wait CI Results] - PRs should only be integrated after a successful CI
>>>> run (or when failures can be proven to be pre-existing)
>>>> [Check CI Failures!] - The CI runs already add failure/success to 
>>>> the PR
>>>> status. Checking CI failures should apply to ALL PRs.
>>>> [On Ice] PR should be closed and reopened when relevant again.
>>>>
>>>> Comments/suggestions ?
>>>>
>>>> Tristan
>>>
>>>
>>
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Exposure location of Minimum Node Required

2017-12-03 Thread Tristan Tarrant
On 12/1/17 4:18 PM, William Burns wrote:
> Recently I have been working on [1]. Calculating this value is not a 
> hard task. However as usual the hardest thing is where does this live 
> and what is its name.
> 
> In regards to its location I have a few places I was thinking:
> 
> 1. CacheImpl/CacheManagerImpl - it would just be an exposed ManagedAttribute
> This is my least favorite.
> 
> 2. CacheMgmtInterceptor/Stats/ClusterStats
> This is available at both cache and cluster levels, but this isn't 
> really a stat. However this one is the only option I found that actually 
> aggregates all nodes data to be presented, which would give a better 
> estimate (since each node can have a varying amount of data they could 
> each have a different idea of required node count).

The "Stats" name is a misnomer already, as we return cache information 
which is not a statistic (e.g. cache size).
In any case this is the right place.
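
For illustration, the calculation itself reduces to a ceiling division 
over the replicated data size. A hypothetical sketch, not Will's actual 
implementation (which aggregates per-node data as described above):

```java
// Hypothetical sketch of a "minimum nodes required" estimate: total data
// size multiplied by the number of owners, divided (ceiling) by the
// capacity each node can hold.
public class MinNodesSketch {
    static int minimumNodesRequired(long totalDataBytes, int numOwners,
                                    long capacityPerNodeBytes) {
        long replicated = totalDataBytes * numOwners;
        // ceiling division without floating point
        return (int) ((replicated + capacityPerNodeBytes - 1) / capacityPerNodeBytes);
    }

    public static void main(String[] args) {
        // 10 GB of data with 2 owners and 4 GB usable per node:
        // ceil(20 / 4) = 5 nodes
        System.out.println(minimumNodesRequired(10L << 30, 2, 4L << 30)); // 5
    }
}
```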

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] PR labels

2017-12-03 Thread Tristan Tarrant
Ok, good point.

Tristan

On 12/1/17 10:07 AM, Radim Vansa wrote:
> On 12/01/2017 10:04 AM, Radim Vansa wrote:
>> On 12/01/2017 09:26 AM, Tristan Tarrant wrote:
>>> Hello people,
>>>
>>> I'd like to rationalize the PR labels because I believe some of them are
>>> useless:
>>>
>>> [Ready for review] - Any PR without the [Preview] label must fall under
>>> this category
>>> [Backport] - The burden should be on the PR owner to create relevant
>>> backport PRs, not on the reviewer
>>
>> I think that [Backport] means that this is already in upstream, and
>> therefore review should be mostly formal (not breaking APIs but not
>> "this could be done 1% better").
> 
> Hit send too fast... The complexity of a review indicates time spent
> with the review; I'd expect a backport review to be a 15 minute job, not
> 2 hour one, so when looking for an appetizer before lunch these are
> on-sight good candidates.
> 
>> Also it is a second warning for reviewer that this shouldn't be
>> cherry-picked on master (when merging from cmdline).
> 
>>
>>> [Wait CI Results] - PRs should only be integrated after a successful CI
>>> run (or when failures can be proven to be pre-existing)
>>> [Check CI Failures!] - The CI runs already add failure/success to the PR
>>> status. Checking CI failures should apply to ALL PRs.
>>> [On Ice] PR should be closed and reopened when relevant again.
>>>
>>> Comments/suggestions ?
>>>
>>> Tristan
>>
>>
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] PR labels

2017-12-03 Thread Tristan Tarrant
You basically just said +1 to all the ones I want to remove :)

Tristan

On 12/1/17 3:13 PM, Sebastian Laskawiec wrote:
> Hey Tristan,
> 
> Comments inlined.
> 
> Thanks,
> Sebastian
> 
> On Fri, Dec 1, 2017 at 9:28 AM Tristan Tarrant <ttarr...@redhat.com 
> <mailto:ttarr...@redhat.com>> wrote:
> 
> Hello people,
> 
> I'd like to rationalize the PR labels because I believe some of them are
> useless:
> 
> [Ready for review] - Any PR without the [Preview] label must fall under
> this category
> 
> 
> If a PR doesn't fall into Preview category, it must be Ready for Review. 
> In my opinion "Ready for Review" is redundant.
> 
> [Backport] - The burden should be on the PR owner to create relevant
> backport PRs, not on the reviewer
> 
> 
> +1
> 
> [Wait CI Results] - PRs should only be integrated after a successful CI
> run (or when failures can be proven to be pre-existing)
> 
> 
> All PRs should be evaluated by Jenkins. The CI check has 3 icons on 
> Github Pull Request page - green tick, red cross and yellow dot. Yellow 
> dot means that the PR is being built right now (or waiting in the 
> queue). I believe "Wait CI Results" and that yellow dot are identical 
> and "Wait CI Result" is redundant.
> 
> [Check CI Failures!] - The CI runs already add failure/success to the PR
> status. Checking CI failures should apply to ALL PRs.
> [On Ice] PR should be closed and reopened when relevant again.
> 
> 
> Let just close such PRs! Redundant...
> 
> 
> Comments/suggestions ?
> 
> Tristan
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] PR labels

2017-12-01 Thread Tristan Tarrant
Sure: all the labels I haven't mentioned can stay.

Tristan

On 12/1/17 1:03 PM, Pedro Ruivo wrote:
> Hi,
> 
> can we keep the "changes suggested/required"?
> This would be exclusive with "ready for preview" and means someone
> reviewed the PR and requires a reply from the author.
> 
> Cheers,
> Pedro
> 
> On 01-12-2017 08:26, Tristan Tarrant wrote:
>> Hello people,
>>
>> I'd like to rationalize the PR labels because I believe some of them are
>> useless:
>>
>> [Ready for review] - Any PR without the [Preview] label must fall under
>> this category
>> [Backport] - The burden should be on the PR owner to create relevant
>> backport PRs, not on the reviewer
>> [Wait CI Results] - PRs should only be integrated after a successful CI
>> run (or when failures can be proven to be pre-existing)
>> [Check CI Failures!] - The CI runs already add failure/success to the PR
>> status. Checking CI failures should apply to ALL PRs.
>> [On Ice] PR should be closed and reopened when relevant again.
>>
>> Comments/suggestions ?
>>
>> Tristan
>>
> ___________
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] PR labels

2017-12-01 Thread Tristan Tarrant
Hello people,

I'd like to rationalize the PR labels because I believe some of them are 
useless:

[Ready for review] - Any PR without the [Preview] label must fall under 
this category
[Backport] - The burden should be on the PR owner to create relevant 
backport PRs, not on the reviewer
[Wait CI Results] - PRs should only be integrated after a successful CI 
run (or when failures can be proven to be pre-existing)
[Check CI Failures!] - The CI runs already add failure/success to the PR 
status. Checking CI failures should apply to ALL PRs.
[On Ice] PR should be closed and reopened when relevant again.

Comments/suggestions ?

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Weekly IRC Meeting logs 2017-11-06

2017-11-06 Thread Tristan Tarrant
If jbott is down there is not much we can do about it.

Tristan

On 11/6/17 7:11 PM, Sanne Grinovero wrote:
> On 6 November 2017 at 16:07, Dan Berindei <dan.berin...@gmail.com> wrote:
>> Hi everyone
>>
>> JBott wasn't available, so the meeting logs are available here:
>>
>> https://gist.github.com/danberindei/6d4d7e742eba41b0fb1bcba0ee735a8e
> 
> Not a particularly critical detail, but it's quite hard to follow who
> said what in this log format ;)
> 
> Thanks,
> Sanne
> 
>>
>> Cheers
>> Dan
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Prepending internal cache names with org.infinispan instead of triple underscore

2017-11-06 Thread Tristan Tarrant
To add to Adrian's history lesson:

ClusterRegistry (a single, replicated, non-persistent, scoped cache) was 
replaced with the InternalCacheRegistry which provides a common way for 
subsystems to register internal caches with the "traits" they want but 
configured to take into account some global settings. This means setting 
up proper security roles, persistent paths, etc.

We do however have a proliferation of caches and in my ISPN-7776 PR I've 
reintroduced a scoped config/state cache which can be shared by 
interested parties.

I do like the org.infinispan prefix for internal caches (and I've 
amended my PR to use that). I'm not that concerned about the additional 
payload, since most of the internal caches we have at the moment change 
infrequently (schema, script, topology, etc), but we should probably 
come up with a proper way to identify caches with a common short ID.
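
As a sketch of the naming convention under discussion (illustrative 
only — the helper names, the hash-based short ID, and the example cache 
name are all hypothetical, not Infinispan API):

```java
// Hypothetical sketch of the "org.infinispan." prefix convention for
// internal cache names, replacing the triple-underscore trick, plus one
// possible "common short ID" derived from the full name.
public class InternalCacheNaming {
    static final String INTERNAL_PREFIX = "org.infinispan.";

    static String internalName(String shortName) {
        return INTERNAL_PREFIX + shortName;
    }

    static boolean isInternal(String cacheName) {
        return cacheName.startsWith(INTERNAL_PREFIX);
    }

    // One option for a short stable ID: a fixed-width hash of the name,
    // so the full name need not travel on the wire with every message.
    static String shortId(String cacheName) {
        return String.format("%08x", cacheName.hashCode());
    }

    public static void main(String[] args) {
        String name = internalName("PROTOBUF_METADATA");
        System.out.println(name);                    // org.infinispan.PROTOBUF_METADATA
        System.out.println(isInternal(name));        // true
        System.out.println(isInternal("___script_cache")); // false
    }
}
```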

Tristan

On 11/6/17 10:46 AM, Adrian Nistor wrote:
> Different internal caches have different needs regarding consistency,
> tx, persistence, etc...
> The first incarnation of ClusterRegistry was using a single cache and
> was implemented exactly as you suggested, but had major shortcomings
> satisfying the needs of several unrelated users, so we decided to split.
> 
> On 11/03/2017 10:42 AM, Radim Vansa wrote:
>> Because you would have to duplicate entire Map on each update, unless
>> you used not-100%-so-far functional commands. We've used the ScopedKey
>> that would make this Cache<ScopedKey<PURPOSE, Object>, Object>. This
>> approach was abandoned with ISPN-5932 [1], Adrian and Tristan can
>> elaborate why.
>>
>> Radim
>>
>> [1] https://issues.jboss.org/browse/ISPN-5932
>>
>> On 11/03/2017 09:05 AM, Sebastian Laskawiec wrote:
>>> I'm pretty sure it's a silly question, but I need to ask it :)
>>>
>>> Why can't we store all our internal information in a single,
>>> replicated cache (of a type <PURPOSE, Map<Object, Object>). PURPOSE
>>> could be an enum or a string identifying whether it's scripting cache,
>>> transaction cache or anything else. The value (Map<Object, Object>)
>>> would store whatever you need.
>>>
>>> On Fri, Nov 3, 2017 at 2:24 AM Sanne Grinovero <sa...@infinispan.org
>>> <mailto:sa...@infinispan.org>> wrote:
>>>
>>>   On 2 November 2017 at 22:20, Adrian Nistor <anis...@redhat.com
>>>   <mailto:anis...@redhat.com>> wrote:
>>>   > I like this proposal.
>>>
>>>   +1
>>>
>>>   > On 11/02/2017 03:18 PM, Galder Zamarreño wrote:
>>>   >> Hi all,
>>>   >>
>>>   >> I'm currently going through the JCache 1.1 proposed changes,
>>>   and one that made me think is [1]. In particular:
>>>   >>
>>>   >>> Caches do not use forward slashes (/) or colons (:) as part of
>>>   their names. Additionally it is
>>>   >>> recommended that cache names starting with java. or
>>>   javax.should not be used.
>>>   >> I'm wondering whether in the future we should move away from
>>>   the triple underscore trick we use for internal cache names, and
>>>   instead just prepend them with `org.infinispan`, which is our
>>>   group id. I think it'd be cleaner.
>>>   >>
>>>   >> Thoughts?
>>>   >>
>>>   >> [1] https://github.com/jsr107/jsr107spec/issues/350
>>>   >> --
>>>   >> Galder Zamarreño
>>>   >> Infinispan, Red Hat
>>>   >>
>>>   >>
>>>   >> ___
>>>   >> infinispan-dev mailing list
>>>   >> infinispan-dev@lists.jboss.org
>>>   <mailto:infinispan-dev@lists.jboss.org>
>>>   >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>   >
>>>   >
>>>   > ___
>>>   > infinispan-dev mailing list
>>>   > infinispan-dev@lists.jboss.org
>>>   <mailto:infinispan-dev@lists.jboss.org>
>>>   > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>>>   ___
>>>   infinispan-dev mailing list
>>>   infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
>>>   https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>>
>>>
>>>
>>> ___
>>> infinispan-dev mailing list
>>> infinispan-dev@lists.jboss.org
>>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Beta1 this week

2017-11-06 Thread Tristan Tarrant
Hey all,

we will be releasing Beta1 this week, so please dedicate most of your 
time to reviewing and merging PRs (and implementing requested changes to 
your own PRs).

Adrian is release wrangler.

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Initial Infinispan driver for JNoSQL

2017-10-19 Thread Tristan Tarrant
Hi all,

I have just submitted a pull request for an initial driver for JNoSQL

https://github.com/eclipse/jnosql-diana-driver/pull/49

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Replacing IRC

2017-10-18 Thread Tristan Tarrant
Yes, I'm currently more tempted to wait for Stride (I have signed up for 
early access).

Tristan

On 10/18/17 1:09 PM, Vladimir Blagojevic wrote:
> Some updates on memory issues we talked about 
> https://slack.engineering/reducing-slacks-memory-footprint-4480fec7e8eb
> 
> I looked at Stride briefly and it also looks very promising with its 
> actions and decisions focus and deep integrations with Atlassian stack 
> we use anyway!
> 
> On 2017-10-16 8:45 AM, Katia Aresti wrote:
>> Hi all,
>>
>> I'm a strong adopter of slack and I really like it. I've used it since 
>> it came out with Duchess (we started with an internal use for the 
>> team, and we have a community slack now with more than 150 people 
>> there that is little by little replacing our google group). I've used 
>> it in different communities, startups and big company client's teams. 
>> As far as I know, other open-source projects have already adopted it 
>> successfully.
>> What is cool about it is that joining any team and switching from one 
>> team to another is very easy. If people want to join the community, 
>> and they are already using slack, is very easy.
>>
>> I agree with Sanne, being able to use it from the smartphone is 
>> important and could be considered as a high requirement. I use slack 
>> on my phone, and it helps quite a lot. Most of the time I use it from 
>> my laptop, but there are cases where mobile has really helped us 
>> (specially in Duchess France slack). I use the native slack client on 
>> my mac and works very well.
>>
>> I haven't tested Stride or Gitter.
>>
>> Katia
>>
>>
>>
>>
>>
>> On Mon, Oct 16, 2017 at 11:29 AM, Tristan Tarrant <ttarr...@redhat.com 
>> <mailto:ttarr...@redhat.com>> wrote:
>>
>> HipChat is being replaced by Stride.
>> I have dimissed HipChat in the past because it did not allow for a
>> single account to be shared across multiple groups. I believe this
>> will
>> be solved by Stride.
>>
>> Tristan
>>
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Replacing IRC

2017-10-16 Thread Tristan Tarrant
HipChat is being replaced by Stride.
I have dismissed HipChat in the past because it did not allow for a 
single account to be shared across multiple groups. I believe this will 
be solved by Stride.

Tristan

On 10/16/17 11:16 AM, Wolf Fink wrote:
> Why not use Hipchat as EAP/Wildfly to prevent from too many different 
> platforms ?
> 
> On Mon, Oct 16, 2017 at 10:27 AM, Tristan Tarrant <ttarr...@redhat.com 
> <mailto:ttarr...@redhat.com>> wrote:
> 
> Dear all,
> 
> last week we discussed the possibility of abandoning IRC in favour of a
> more modern alternative.
> 
> Hard requirements:
> - free (as in beer)
> - hosted (we don't want to maintain it ourselves)
> - multi-platform client: native (Linux, MacOS, Windows), browser
> - persistent logs
> - distinction between channel operators and normal users
> - guest access (without the need for registration)
> - integration with Jira for issue lookup
> - integration with GitHub for PR lookup
> - IRC bridge (so that users can connect with an IRC client)
> - ability to export data in case we want to move somewhere else
> - on-the-fly room creation for mini-teams
> 
> Optionals:
> - Free (as in freedom)
> - offline notifications (i.e. see if I was notified while away)
> - mobile client: Android and iOS
> - proper native client (as most Electron clients are quite fat)
> - chat logs accessible without a client (it is acceptable if this is
> achieved via a bot)
> - integration with Jenkins for CI status
> - XMPP bridge (so that users can connect with an XMPP client)
> 
> Not needed:
> - file sharing, audio/video
> 
> Here is a list of candidates:
> - IRC (i.e. no change)
> - Slack
> - Stride (Atlassian's upcoming replacement for HipChat)
> - Matrix (Matrix.org, unfortunately with funding issues)
> - Gitter
> - Discord
> - Rocket.chat (unfortunately hosting is paid)
> 
> If you have any other suggestions/recommendations, they are more than
> welcome.
> 
> Tristan
> 
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> <https://lists.jboss.org/mailman/listinfo/infinispan-dev>
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Replacing IRC

2017-10-16 Thread Tristan Tarrant
Dear all,

last week we discussed the possibility of abandoning IRC in favour of a 
more modern alternative.

Hard requirements:
- free (as in beer)
- hosted (we don't want to maintain it ourselves)
- multi-platform client: native (Linux, MacOS, Windows), browser
- persistent logs
- distinction between channel operators and normal users
- guest access (without the need for registration)
- integration with Jira for issue lookup
- integration with GitHub for PR lookup
- IRC bridge (so that users can connect with an IRC client)
- ability to export data in case we want to move somewhere else
- on-the-fly room creation for mini-teams

Optionals:
- Free (as in freedom)
- offline notifications (i.e. see if I was notified while away)
- mobile client: Android and iOS
- proper native client (as most Electron clients are quite fat)
- chat logs accessible without a client (it is acceptable if this is 
achieved via a bot)
- integration with Jenkins for CI status
- XMPP bridge (so that users can connect with an XMPP client)

Not needed:
- file sharing, audio/video

Here is a list of candidates:
- IRC (i.e. no change)
- Slack
- Stride (Atlassian's upcoming replacement for HipChat)
- Matrix (Matrix.org, unfortunately with funding issues)
- Gitter
- Discord
- Rocket.chat (unfortunately hosting is paid)

If you have any other suggestions/recommendations, they are more than 
welcome.

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] CFV: Close Infinispan user forum in favour of StackOverflow

2017-10-16 Thread Tristan Tarrant
Dear all,

last week we discussed the possibility of closing our Infinispan user 
forum [1] and redirecting users to StackOverflow instead. This would 
have a number of advantages:

- wider audience
- encourages non-team members to provide answers
- better visibility in search engines


We have also discussed the possibility of a user-oriented "mailing list" 
on Google Groups for more articulate discussions. We can even consider 
migrating this list over there.

Comments are welcome and encouraged

Tristan

[1] https://developer.jboss.org/en/infinispan/content

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Infinispan Weekly IRC Meeting Logs 2017-10-02

2017-10-02 Thread Tristan Tarrant
Hi all,

here are the logs for this week's meeting:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-10-02-14.00.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Weekly Infinispan IRC Meeting Logs 2017-09-25

2017-09-25 Thread Tristan Tarrant
Howdy,

the weekly Infinispan meeting happened on IRC, as it does every Monday, 
and the logs are here for your perusal:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-09-25-14.06.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Code examples in multiple languages

2017-09-20 Thread Tristan Tarrant
One thing that I wish we had is the ability, when possible, to give code 
examples for our API in all of our implementations (embedded, hotrod 
java, c++, c#, node.js and REST).

Currently each one handles documentation differently and we are not very 
consistent with structure, content and examples.

I've been looking at Slate [1] which uses Markdown and is quite nice, 
but has the big disadvantage that it would create something which is 
separate from our current documentation...

An alternative approach would be to implement an asciidoctor plugin 
which provides some kind of tabbed code block.
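To make the tabbed-block idea concrete, here is a hypothetical sketch of 
what such a block could look like in AsciiDoc (the [tabs] syntax and the 
tab labels are invented for illustration, not an existing plugin):

```asciidoc
// Hypothetical tabbed code block; a custom asciidoctor extension would
// render one tab per implementation, all documenting the same operation.
[tabs]
====
Embedded Java::
+
[source,java]
----
cache.put("key", "value");
----
Hot Rod C++::
+
[source,cpp]
----
cache.put("key", "value");
----
====
```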

Any other ideas ?


Tristan

[1] https://lord.github.io/slate/
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Infinispan 9.2 schedule

2017-09-20 Thread Tristan Tarrant
With the release of 9.1.1.Final, and the delay it introduced, I have 
updated the 9.2.x schedule and roadmap.

These are the expected release dates:

9.2.0.Alpha1    Oct 4th
9.2.0.Alpha2    Oct 18th
9.2.0.Beta1     Nov 1st
9.2.0.Beta2     Nov 15th (feature freeze)
9.2.0.CR1       Nov 29th (component upgrade freeze)
9.2.0.Final     Dec 13th

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Infinispan 9.1.1.Final is out

2017-09-20 Thread Tristan Tarrant
We have just released Infinispan 9.1.1.Final. Read about it here:

http://blog.infinispan.org/2017/09/infinispan-911final-is-out.html

Enjoy !

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Why do we need separate Infinispan OpenShift template repo?

2017-09-19 Thread Tristan Tarrant
On 9/19/17 9:42 AM, Galder Zamarreño wrote:
> Hi,
> 
> I was looking at the Infinispan OpenShift template repo [1], and I started 
> questioning why this repo contains Infinispan configurations for the cloud 
> [2]. Shouldn't these be part of the Infinispan Server distribution? Otherwise 
> this repo is going to somehow versioned depending on the Infinispan version...
> 
> Which lead me to think, should repo [1] exist at all? Why aren't all its 
> contents part of infinispan/infinispan? The only reason that I could think 
> for keeping a different repo is maybe if you want to version it according to 
> different OpenShift versions, but that could easily be achieved in 
> infinispan/infinispan with different folders.

It was created separately because its release cycle can be much faster. 
Once things settle we can bring it in.

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

[infinispan-dev] Weekly Infinispan IRC Meeting Logs 2017-09-11

2017-09-12 Thread Tristan Tarrant
Dear all,

the logs for yesterday's IRC meeting are here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-09-11-14.00.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] How about moving Infinispan forums to StackOverflow?

2017-09-08 Thread Tristan Tarrant
Yes, I think it would be a good idea. I've seen a number of users post 
in both places, but SO is definitely more discoverable by the wider 
community and has a lower barrier to entry.

Tristan

On 9/8/17 9:04 AM, Sebastian Laskawiec wrote:
> Hey guys,
> 
> I'm pretty sure you have seen: https://developer.jboss.org/thread/275956
> 
> How about moving Infinispan questions too?
> 
> Thanks,
> Sebastian
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Current 'master' status

2017-08-31 Thread Tristan Tarrant
Dear all,

it's now been a while since we embarked on a mission to kill all 
those pesky unreliable tests and, while we've made considerable 
progress, we're still not in an ideal and reliable situation.

I think we are quite close with the latest round of fixes, but there is 
still one outstanding offender:

ISPN-6827 ReplTotalOrderVersionedStateTransferTest.testStateTransfer 
random failures

Any other failures you wish to single out ?

I propose we move this to the unstable group, merge some of the pending 
PRs, release 9.1.1 and branch for 9.2. At this point I'll rework the 
9.x roadmap and schedule.

At this point we can reopen all of the PRs that were closed: make sure 
you reopen them before rebasing, otherwise GitHub will complain.



Tristan






-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Infinispan weekly IRC meeting log 2017-08-29

2017-08-29 Thread Tristan Tarrant
Hey all,

because the bot was down, I am pasting them as a Gist:

https://gist.github.com/tristantarrant/ee46a7c392001e53e086fec430c929e2

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat




Re: [infinispan-dev] A tool for adjusting configuration

2017-08-28 Thread Tristan Tarrant
For the server, the tool already exists: the server CLI can work in offline 
mode and manipulate a configuration using DMR ops.

Tristan

On 8/28/17 1:41 PM, Sebastian Laskawiec wrote:
> Hey,
> 
> Our cloud integration bits require a tool for adjusting the 
> configuration for certain use cases. A common example would be - take 
> this `cloud.xml` file, remove all caches, add a new, replicated cache as 
> default one.
> 
> The tool should take either configuration or a file name as input (e.g. 
> `config-tool --add-default-cache -f cloud.xml` or `cat cloud.xml | 
> config-tool --add-default-cache > cloud-new.xml`) and print out 
> configuration either to System Out or to a file.
> 
> Do you have any ideas what could I use to write such a tool? Those 
> technologies come into my mind:
> 
>   * Perl
>   * Python
>   * Java (probably with some XPath library)
> 
> Thoughts? Ideas? Recommendations?
> 
> Thanks,
> Sebastian
> -- 
> 
> SEBASTIANŁASKAWIEC
> 
> INFINISPAN DEVELOPER
> 
> Red HatEMEA <https://www.redhat.com/>
> 
> <https://red.ht/sig>
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
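For the Python option from the list above, a minimal sketch of the 
"--add-default-cache" operation using only the standard library. Element 
names such as `cache-container` and `replicated-cache` are simplified 
stand-ins; the real Infinispan schema is namespaced and much richer.

```python
import xml.etree.ElementTree as ET

def add_default_cache(xml_text):
    """Strip every cache from the container and add a replicated default."""
    root = ET.fromstring(xml_text)
    container = root.find("cache-container")
    for cache in list(container):          # drop all existing cache elements
        container.remove(cache)
    ET.SubElement(container, "replicated-cache", {"name": "default"})
    container.set("default-cache", "default")
    return ET.tostring(root, encoding="unicode")

sample = ("<infinispan><cache-container>"
          "<local-cache name='sessions'/><local-cache name='users'/>"
          "</cache-container></infinispan>")
print(add_default_cache(sample))
```

Wrapping this as `cat cloud.xml | config-tool --add-default-cache` would 
just mean feeding `sys.stdin.read()` to the same function.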

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: [infinispan-dev] Rest server storage nuances

2017-08-11 Thread Tristan Tarrant
On 10/08/17 17:43, Gustavo Fernandes wrote:

> Proposal:
> 
> Remove this behavior completely. For a certain cache, all entries will 
> be homogeneous, just like
> Hot Rod, Memcached and embedded. The user can optionally configure the 
> MimeType at cache level.

Yes, this is my preference and I believe that was the consensus last 
time we talked about this.


Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] URGENT: Master test failures

2017-08-02 Thread Tristan Tarrant
Since we have many open PRs and CI is rebuilding each one on every 
master commit, I'm going to close all feature PRs temporarily so that we 
can move faster.

Tristan


-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] URGENT: Master test failures

2017-07-31 Thread Tristan Tarrant
Hi all,

these are some of the failures I have seen recently in master. Some of 
these are already being ignored. If you know of more, please add them.
It seems like there are recurring failures in some of the rehashing 
tests. We REALLY REALLY need to bring this list down to 0 ASAP!
Let us stop every other activity until we get there. Please feel free to 
comment, disable, or add any missing known failures.

Tristan

OptimisticPrimaryOwnerCrashDuringPrepareTest
--------------------------------------------

Tracked by ISPN-8139.

http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.distribution.rehash/OptimisticPrimaryOwnerCrashDuringPrepareTest/testPrimaryOwnerCrash/

JCacheTwoCachesBasicOpsTest
---------------------------

I hadn't seen this in a while. Tracked by ISPN-6952

http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.jcache/JCacheTwoCachesBasicOpsTest/testRemovedListener_remote_/

DistributedStreamIteratorWithPassivationTest
--------------------------------------------

http://ci.infinispan.org/job/Infinispan/job/master/60/testReport/junit/org.infinispan.stream/DistributedStreamIteratorWithPassivationTest/testConcurrentActivationWithFilter_DIST_SYNC__tx_false_/


HotRodCustomMarshallerIteratorIT
--------------------------------

Marked as ignored. Tracked by ISPN-8001. This fails because of a race 
condition in the deployment of the marshaller.

http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.server.test.client.hotrod/HotRodCustomMarshallerIteratorIT(localmode-udp)/testIteration/


EmbeddedHotRodCacheListenerTest
-------------------------------

http://ci.infinispan.org/job/Infinispan/job/master/60/testReport/junit/org.infinispan.it.compatibility/EmbeddedHotRodCacheListenerTest/setup/

ScatteredCrashInSequenceTest
----------------------------

Marked as ignored. Tracked by ISPN-8097

http://ci.infinispan.org/job/Infinispan/job/master/59/testReport/junit/org.infinispan.partitionhandling/ScatteredCrashInSequenceTest/testSplit2_SCATTERED_SYNC_/

RehashWithL1Test
----------------

Tracked by ISPN-7801.

http://ci.infinispan.org/job/Infinispan/job/master/58/testReport/junit/org.infinispan.distribution.rehash/RehashWithL1Test/testPutWithRehashAndCacheClear/

NonTxPutIfAbsentDuringLeaveStressTest
-------------------------------------

Tracked by ISPN-6451.

http://ci.infinispan.org/job/Infinispan/job/master/57/testReport/junit/org.infinispan.distribution.rehash/NonTxPutIfAbsentDuringLeaveStressTest/testNodeLeavingDuringPutIfAbsent_DIST_SYNC_/

ReplTotalOrderVersionedStateTransferTest
----------------------------------------

Tracked by ISPN-6827.

http://ci.infinispan.org/job/Infinispan/job/master/57/testReport/junit/org.infinispan.tx.totalorder.statetransfer/ReplTotalOrderVersionedStateTransferTest/testStateTransfer/

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Transactional consistency of query

2017-07-31 Thread Tristan Tarrant
 to index it right away. The thread that executes the
>  > interceptor handler is also dependent on ownership (due to remote
>  > LockCommand execution), so I think that it does not fail the
> local-mode
>  > tests.
>  >
>  > [B] ... and it does so twice as a regression after ISPN-7840 but
> that's
>  > easy to fix.
>  >
>  > [C] Indexing in prepare command was OK before ISPN-7840 with
> pessimistic
>  > locking which does not send the CommitCommand, but now that the
> QI has
>  > been moved below EWI it means that we're indexing before storing the
>  > actual values. Optimistic locking was not correct, though.
>  >
>  > [1]
>  >
> 
> https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546
> 
> <https://github.com/rvansa/infinispan/commit/1d62c9b84888c7ac21a9811213b5657aa44ff546>
>  >
>  >
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> <https://lists.jboss.org/mailman/listinfo/infinispan-dev>
> 
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] 9.1.1, 9.2 and 9.1.x branch

2017-07-21 Thread Tristan Tarrant
Hey all,

I just wanted to clarify the situation with master and the releases.
I would like to tag 9.1.1 as soon as possible with zero testsuite failures 
(other fixes are also acceptable).
As soon as that is done we can branch for 9.2.

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Infinispan 9.1.0.Final

2017-07-14 Thread Tristan Tarrant
Dear all,

it is with great pleasure that we are announcing the release of 
Infinispan 9.1.

This release contains a number of great features:

- conflict resolution
- scattered caches
- clustered counters
- HTTP/2 support for the REST endpoint
- batching support for cache stores
- locked streams
- cache creation/removal over Hot Rod
- endpoint admin through the console
- ... and much more

So please check out the full announcement:

http://blog.infinispan.org/2017/07/infinispan-91-bastille.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Weekly IRC Meeting Logs 2017-07-03

2017-07-03 Thread Tristan Tarrant
Hi all,

the weekly meeting logs are here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-07-03-14.01.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Feedback for PR 5233 needed

2017-07-03 Thread Tristan Tarrant
I like it a lot.
To follow up on my comment on the PR, and to give it wider visibility 
here: we really need to think about how to deal with redeployments and 
resource restarts.
I think restarts are unavoidable: a redeployment means dumping and 
replacing a classloader with all of its classes. There are two 
approaches I can think of:

- "freezing" and "thawing" a cache via some form of persistence (which 
could also mean adding a temporary cache store)
- separate the WildFly service lifecycle from the cache lifecycle, 
detaching/reattaching a cache without stopping it when the wrapping 
service is restarted.

Tristan

On 6/29/17 5:20 PM, Adrian Nistor wrote:
> People, don't be shy, the PR is in now, but things can still change
> based on you feedback. We still have two weeks until we release the Final.
> 
> On 06/29/2017 03:45 PM, Adrian Nistor wrote:
>> This PR [1] adds a new approach for defining the compat marshaller class
>> and the indexed entity classes (in server), and the same approach could
>> be used in the future for deployment of encoders, Lucene analyzers and
>> possibly other code bits that a user would want to add to a server in order
>> to implement an extension point that we support.
>>
>> Your feedback is welcome!
>>
>> [1] https://github.com/infinispan/infinispan/pull/5233
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] 9.1.0 endgame

2017-06-21 Thread Tristan Tarrant
The release wrangler for 9.1.0.CR1 will be Will.
Will will (haha!) perform the release on the 29th. This is a hard date.

Tristan

On 6/19/17 12:22 PM, Tristan Tarrant wrote:
> Hi all,
> 
> I have updated Jira with the next milestones for 9.1.0
> 
> 9.1.0.CR1 - 30th June
> 9.1.0.Final - 14th July
> 
> This extends the traditional minor-release timebox by two weeks, 
> essential because the features we wanted for Beta were a bit late.
> 
>  From now until the end we should only consider features whose PR have 
> been open for a while (e.g. scattered cache). Bug fixes and minor 
> enhancements to existing features are obviously allowed.
> 
> Anything else will need to be shunted to 9.2.0.
> 
> Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] 9.1.0 endgame

2017-06-19 Thread Tristan Tarrant
Awesome name  :)

Tristan

On 6/19/17 1:34 PM, Emmanuel Bernard wrote:
> 
>> On 19 Jun 2017, at 12:22, Tristan Tarrant <ttarr...@redhat.com> wrote:
>>
>> 9.1.0.Final - 14th July
> 
> Better be named Bastille one way or another.
> https://www.ratebeer.com/beer/revolution-bastille/179085/
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] 9.1.0 endgame

2017-06-19 Thread Tristan Tarrant
Hi all,

I have updated Jira with the next milestones for 9.1.0

9.1.0.CR1 - 30th June
9.1.0.Final - 14th July

This extends the traditional minor-release timebox by two weeks, which 
was necessary because the features we wanted for Beta were a bit late.

From now until the end we should only consider features whose PRs have 
been open for a while (e.g. scattered cache). Bug fixes and minor
enhancements to existing features are obviously allowed.

Anything else will need to be shunted to 9.2.0.

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-09 Thread Tristan Tarrant
On 6/9/17 9:29 AM, Radim Vansa wrote:
> An umbrella module that would contain this 'discovery API' would need
> all the dependencies, so that would be a perfect replacement for the
> embedded maven artifact. Shouldn't be that much of a work to hack this
> together - how do you think that should be called? infinispan-api (but
> it would be nicer to reserve this if we ever manage to create the
> 'public API' module, with interfaces only), infinispan-facade,
> infinispan-surface? We could even use infinispan-embedded, but that
> would cause some confusion if we distributed infinispan-embedded uberjar
> and infinispan-embedded umbrella artifact.
org.infinispan:infinispan

Nothing else.
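Declared in a consumer's pom.xml, that would look like this (hypothetical 
umbrella artifact as proposed above; the version is a placeholder):

```xml
<!-- Hypothetical umbrella dependency pulling in the whole embedded API. -->
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan</artifactId>
  <version>9.1.0.Final</version>
</dependency>
```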

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-09 Thread Tristan Tarrant
They would be shipped in the zip distributions.

Tristan

On 6/9/17 8:37 AM, Sebastian Laskawiec wrote:
> I agree with Alan here. Maven Central is a free "download area", so I 
> wouldn't give it up for free. BTW, what is the point of creating and not 
> shipping them?
> 
> I would lean towards to removing them completely or limiting the number 
> of use cases to the minimum e.g. we shouldn't support using 
> infinispan-embedded and jcache; if jcache is essential it should be 
> inside infinispan-embedded; the same for Spring integration modules - 
> either we should put them in uber jars or say that you can use Spring 
> integration with small jars.
> 
> On Fri, Jun 9, 2017 at 5:05 AM Alan Field <afi...@redhat.com 
> <mailto:afi...@redhat.com>> wrote:
> 
> Wasn't the ability to add a single dependency to a project to start
> using Infinispan the whole purpose for the uber jars? I'm not trying
> to make an argument for keeping them, because I know they have
> caused many issues. I just think that if we are going to remove them
> from Maven, then there should be a way to achieve the same easy
> developer on boarding that uber jars were supposed to provide.
> Whether this is Maven project templates, or something else doesn't
> matter.
> 
> Thanks,
> Alan
> 
> - Original Message -
>  > From: "Tristan Tarrant" <ttarr...@redhat.com
> <mailto:ttarr...@redhat.com>>
>  > To: infinispan-dev@lists.jboss.org
> <mailto:infinispan-dev@lists.jboss.org>
>  > Sent: Thursday, June 8, 2017 4:05:08 AM
>  > Subject: Re: [infinispan-dev] Why JCache embedded has core as
> provided dependency
>  >
>  > I think we should turn off maven deployment for uber jars.
>  >
>  > Tristan
>  >
>  > On 6/7/17 5:10 PM, Gustavo Fernandes wrote:
>  > > On Wed, Jun 7, 2017 at 11:02 AM, Galder Zamarreño
> <gal...@redhat.com <mailto:gal...@redhat.com>
>  > > <mailto:gal...@redhat.com <mailto:gal...@redhat.com>>> wrote:
>  > >
>  > > As far as I see it:
>  > >
>  > > * infinispan-embedded should never be a dependency in a
> Maven project.
>  > >
>  > > * No uber jars should really be used as Maven dependencies
> because
>  > > all the exclusion that fine grained dependencies allow you
> to do
>  > > goes out of the window when all classes are inside a jar.
> This is
>  > > not just theory, I've personally had such issues.
>  > >
>  > > * Uber jars are designed for Ant or other build tool users that
>  > > don't have a dependency resolution engine in place.
>  > >
>  > > Cheers,
>  > >
>  > > p.s. I thought we had already discussed this before?
>  > >
>  > >
>  > >
>  > > I totally agree. In addition, uberjars should not be an osgi
> bundle or a
>  > > jboss module, for similar reasons.
>  > >
>  > > P.S: Even Ant has a dependency mgmt available, which is Ivy.
>  > >
>  > > Cheers,
>  > > Gustavo
>  > >
>  > > --
>  > > Galder Zamarreño
>  > > Infinispan, Red Hat
>  > >
>  > >  > On 7 Jun 2017, at 11:50, Sebastian Laskawiec
> <slask...@redhat.com <mailto:slask...@redhat.com>
>  > > <mailto:slask...@redhat.com <mailto:slask...@redhat.com>>>
> wrote:
>  > >  >
>  > >  > Hey,
>  > >  >
>  > >  > The change was introduced by this commit [1] and relates
> to this
>  > > JIRAs [2][3]. The root cause is in [3].
>  > >  >
>  > >  > Imagine a scenario where you add JCache module to your
> together
>  > > infinispan-embedded. If your classpath was constructed in
> such a way
>  > > that infinispan-embedded was before infinispan-core
> (classpath is
>  > > scanned from left to right in standalone apps), we could get a
>  > > relocated (uber jars move some classes into other packages)
> logger.
>  > > That caused class mismatch errors. It is worth to mention
> that it
>  > > will happen to all relocated classes, logger was just an
> example.
>  > > 

[infinispan-dev] Test. Don't read.

2017-06-09 Thread Tristan Tarrant
I told you not to read it.


Re: [infinispan-dev] Why JCache embedded has core as provided dependency

2017-06-08 Thread Tristan Tarrant
ore details behind this decision?
>  >
>  > Cheers,
>  > --
>  > Galder Zamarreño
>  > Infinispan, Red Hat
>  >
>  > --
>  > SEBASTIAN ŁASKAWIEC
>  > INFINISPAN DEVELOPER
>  > Red Hat EMEA
>  >
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> <https://lists.jboss.org/mailman/listinfo/infinispan-dev>
> 
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

[infinispan-dev] Weekly IRC Meeting logs 2017-06-05

2017-06-05 Thread Tristan Tarrant
Hi all,

the logs for this week's meeting are here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-06-05-14.01.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Netty SSL Context, was [Hot Rod secured by default]

2017-06-05 Thread Tristan Tarrant
Actually, WildFly 11 will allow this.
Additionally, in our restructured server, we can do whatever we want.

Tristan

On 6/5/17 12:29 PM, Sebastian Laskawiec wrote:
> We actually have more alternatives - e.g. we could use OpenSSL via 
> Boring SSL library [1]. The root problem remains the same - we can use 
> only what we obtain from the WF server. And currently we obtain 
> only JSSE SSLContext...
> 
> [1] http://netty.io/wiki/forked-tomcat-native.html
> 
> On Mon, Jun 5, 2017 at 10:34 AM Tristan Tarrant <ttarr...@redhat.com 
> <mailto:ttarr...@redhat.com>> wrote:
> 
> We should use this:
> 
> https://github.com/wildfly/wildfly-openssl
> 
> Tristan
> 
> On 6/1/17 1:17 PM, Gustavo Fernandes wrote:
>  > On Thu, Jun 1, 2017 at 10:51 AM, Sebastian Laskawiec
>  > <slask...@redhat.com <mailto:slask...@redhat.com>
> <mailto:slask...@redhat.com <mailto:slask...@redhat.com>>> wrote:
>  >
>  > I think I've just found the reason why we can not migrate in
> OpenSSL
>  > by default :(
>  >
>  > In server scenario we obtain S*SL*Context (the one from JDK;
> Netty
>  > has similar S*sl*Context) from WildFly. It is already configured
>  > along with sercurity realms, domains etc. We then get into this
>  > branch of code [1].
>  >
>  > In order to do fancy things like SNI we need to remap JDK's
>  > SSLContext into Netty's SslContext and the only
> implementation that
>  > can consume SSLContext we have at hand is JdkSslContext.
>  >
>  > I honestly have no idea how we could refactor this... And
> that's a
>  > shame because OpenSSL is way faster...
>  >
>  >
>  >
>  > I tried migrating the SSL engine to Netty's in [1] and hit the same
>  > wall. What I was told is that the SSLContext in Wildfly is now
> (version
>  > 11?) a capability under 'org.wildfly.security.ssl-context'  and
>  > can be replaced, but I did not try doing that.
>  >
>  >
>  > [1] https://issues.jboss.org/browse/ISPN-6990
>  > <https://issues.jboss.org/browse/ISPN-6990>
>  >
>  > Gustavo
>      >
>  >
>  > ___
>  > infinispan-dev mailing list
>  > infinispan-dev@lists.jboss.org
> <mailto:infinispan-dev@lists.jboss.org>
>  > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>  >
> 
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
>     https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> -- 
> 
> SEBASTIANŁASKAWIEC
> 
> INFINISPAN DEVELOPER
> 
> Red HatEMEA <https://www.redhat.com/>
> 
> <https://red.ht/sig>
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

Re: [infinispan-dev] Netty SSL Context, was [Hot Rod secured by default]

2017-06-05 Thread Tristan Tarrant
We should use this:

https://github.com/wildfly/wildfly-openssl

Tristan

On 6/1/17 1:17 PM, Gustavo Fernandes wrote:
> On Thu, Jun 1, 2017 at 10:51 AM, Sebastian Laskawiec 
> <slask...@redhat.com <mailto:slask...@redhat.com>> wrote:
> 
> I think I've just found the reason why we can not migrate in OpenSSL
> by default :(
> 
> In server scenario we obtain S*SL*Context (the one from JDK; Netty
> has similar S*sl*Context) from WildFly. It is already configured
> along with security realms, domains etc. We then get into this
> branch of code [1].
> 
> In order to do fancy things like SNI we need to remap JDK's
> SSLContext into Netty's SslContext and the only implementation that
> can consume SSLContext we have at hand is JdkSslContext.
> 
> I honestly have no idea how we could refactor this... And that's a
> shame because OpenSSL is way faster...
> 
> 
> 
> I tried migrating the SSL engine to Netty's in [1] and hit the same 
> wall. What I was told is that the SSLContext in Wildfly is now (version 
> 11?) a capability under 'org.wildfly.security.ssl-context'  and
> can be replaced, but I did not try doing that.
> 
> 
> [1] https://issues.jboss.org/browse/ISPN-6990 
> <https://issues.jboss.org/browse/ISPN-6990>
> 
> Gustavo
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Weekly IRC Meeting Logs 2017-05-22

2017-05-23 Thread Tristan Tarrant
Hi all,

the logs for this week's meeting:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-05-22-14.01.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Exposing cluster deployed in the cloud

2017-05-22 Thread Tristan Tarrant
We would need to provide a way to supply the external address at 
runtime, e.g. via JMX.
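The internal-to-external translation such a mapping hook would perform can 
be sketched as follows (a Python illustration of the concept only; the 
addresses and function names are invented and this is not the Hot Rod 
client API):

```python
# The server announces internal topology addresses (172.17.x.x); the client
# must map them to externally reachable endpoints (172.29.x.x) before
# connecting. Unknown addresses pass through unchanged.
def make_address_mapper(mapping):
    """mapping: {(internal_host, port): (external_host, port)}"""
    def remap(host, port):
        return mapping.get((host, port), (host, port))
    return remap

remap = make_address_mapper({
    ("172.17.0.2", 11222): ("172.29.1.10", 31222),
})
print(remap("172.17.0.2", 11222))  # external endpoint the client dials
```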

Tristan

On 5/22/17 2:50 PM, Sebastian Laskawiec wrote:
> Hey Tristan!
> 
> I checked this part and it won't do the trick. The problem is that the 
> server does not know which address is used for exposing its services. 
> Moreover, this address can change with time.
> 
> Thanks,
> Sebastian
> 
> On Tue, May 9, 2017 at 3:28 PM Tristan Tarrant <ttarr...@redhat.com 
> <mailto:ttarr...@redhat.com>> wrote:
> 
> Sebastian,
> are you familiar with Hot Rod's proxyHost/proxyPort [1]. In server it is
> configured using external-host / external-port attributes on the
> topology-state-transfer element [2]
> 
> 
> 
> [1]
> 
> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43
> [2]
> 
> https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203
> 
> 
> On 5/8/17 9:57 AM, Sebastian Laskawiec wrote:
>  > Hey guys!
>  >
>  > A while ago I started working on exposing Infinispan Cluster which is
>  > hosted in Kubernetes to the outside world:
>  >
> [inline image]
>  >
>  > I'm currently struggling to get solution like this into the
> platform [1]
>  > but in the meantime I created a very simple POC and I'm testing it
>  > locally [2].
>  >
>  > There are two main problems with the scenario described above:
>  >
>  >  1. Infinispan server announces internal addresses (172.17.x.x)
> to the
>  > client. The client needs to remap them into external ones
> (172.29.x.x).
>  >  2. A custom Consistent Hash needs to be supplied to the Hot Rod
> client.
>  > When accessing cache, the Hot Rod Client needs to calculate
> server
>  > id for internal address and then map it to the external one.
>  >
>  > If there are no strong opinions regarding this, I plan to
>  > implement this shortly. There will be additional method in Hot Rod
>  > Client configuration (ConfigurationBuilder#addServerMapping(String
>  > mappingClass)) which will be responsible for mapping external
> addresses
>  > to internal and vice-versa.
>  >
>  > Thoughts?
>  >
>  > Thanks,
>  > Sebastian
>  >
>  > [1] https://github.com/kubernetes/community/pull/446
>  > [2] https://github.com/slaskawi/external-ip-proxy
>  >
>  >
>  > ___
>  > infinispan-dev mailing list
>  > infinispan-dev@lists.jboss.org
> <mailto:infinispan-dev@lists.jboss.org>
>  > https://lists.jboss.org/mailman/listinfo/infinispan-dev
>  >
> 
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a division of Red Hat
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
>     https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> -- 
> 
> SEBASTIANŁASKAWIEC
> 
> INFINISPAN DEVELOPER
> 
> Red HatEMEA <https://www.redhat.com/>
> 
> <https://red.ht/sig>
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

[infinispan-dev] Cache migration (rolling upgrades, dump/restore, etc)

2017-05-17 Thread Tristan Tarrant
Hey all,

Infinispan has historically had two ways of performing live migration 
between two clusters, via Hot Rod and via REST. We do not currently 
provide an offline migration, although we do have a cache store 
migration tool.

Gustavo has recently made several changes to the Hot Rod implementation 
which have improved it greatly. The REST implementation is still not 
robust enough, but I think we can abandon it and just focus on the Hot 
Rod one even for servers using REST.

The following is a list of stuff, mostly compiled by Gustavo, that needs 
to be done to make everything smooth and robust:

1) Need a way to automate client redirection to the new cluster. I've 
often referred to this as L4 client intelligence, which can also be used 
for server-assisted cross-site failover.
2) Need a way to "rollback" the process in case of failures during the 
migration: redirecting the clients back to the original cluster without 
data loss. This would use the above L4 strategy.
3) Expose metrics and progress
4) Expose a way to cancel the process
5) Expose a container-wide migration process which can be applied to all 
caches instead of one cache at a time.
6) The migration process should also take care of automatically 
configuring the endpoints / remote cache stores at the beginning of the 
process and removing any changes at the end.
7) Provide a future-proof format for the entries
8) Implement dump and restore capabilities which can export the contents 
of a cluster to a file (compressed, encrypted, etc) or a collection of 
files (one per cache).

Anything else ?

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Weekly meeting logs 2017-05-15

2017-05-15 Thread Tristan Tarrant
Hi all,

the weekly IRC meeting logs are available here:

http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-05-15-14.02.log.html

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Exposing cluster deployed in the cloud

2017-05-09 Thread Tristan Tarrant
Sebastian,
are you familiar with Hot Rod's proxyHost/proxyPort [1]. In server it is 
configured using external-host / external-port attributes on the 
topology-state-transfer element [2]



[1] 
https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/configuration/HotRodServerConfigurationBuilder.java#L43
[2] 
https://github.com/infinispan/infinispan/blob/master/server/integration/endpoint/src/main/resources/schema/jboss-infinispan-endpoint_9_0.xsd#L203
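The internal-to-external address mapping being discussed in this thread (the server announces 172.17.x.x addresses, the client needs the reachable 172.29.x.x ones) can be sketched in plain Java. Everything below — the `ServerAddressMapper` interface, the class names, and the sample addresses — is invented for illustration only; it is not Infinispan API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the address-mapping idea discussed in this thread:
// the Hot Rod client translates the internal addresses announced by the
// server into externally reachable ones, and back. None of these types
// exist in Infinispan; they only illustrate the shape of such a mapper.
interface ServerAddressMapper {
    String toExternal(String internalAddress);
    String toInternal(String externalAddress);
}

class StaticServerAddressMapper implements ServerAddressMapper {
    private final Map<String, String> internalToExternal = new HashMap<>();
    private final Map<String, String> externalToInternal = new HashMap<>();

    void addMapping(String internal, String external) {
        internalToExternal.put(internal, external);
        externalToInternal.put(external, internal);
    }

    @Override
    public String toExternal(String internalAddress) {
        // Fall back to the announced address if no mapping is known
        return internalToExternal.getOrDefault(internalAddress, internalAddress);
    }

    @Override
    public String toInternal(String externalAddress) {
        return externalToInternal.getOrDefault(externalAddress, externalAddress);
    }
}

public class MapperDemo {
    public static void main(String[] args) {
        StaticServerAddressMapper mapper = new StaticServerAddressMapper();
        mapper.addMapping("172.17.0.5:11222", "172.29.1.10:31222");
        System.out.println(mapper.toExternal("172.17.0.5:11222"));
    }
}
```

A real implementation would have to plug into the Hot Rod client's topology handling so that every address the server announces is translated before the client opens a connection.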


On 5/8/17 9:57 AM, Sebastian Laskawiec wrote:
> Hey guys!
> 
> A while ago I started working on exposing Infinispan Cluster which is 
> hosted in Kubernetes to the outside world:
> 
> [inline image "pasted1"]
> 
> I'm currently struggling to get solution like this into the platform [1] 
> but in the meantime I created a very simple POC and I'm testing it 
> locally [2].
> 
> There are two main problems with the scenario described above:
> 
>  1. Infinispan server announces internal addresses (172.17.x.x) to the
> client. The client needs to remap them into external ones (172.29.x.x).
>  2. A custom Consistent Hash needs to be supplied to the Hot Rod client.
> When accessing cache, the Hot Rod Client needs to calculate server
> id for internal address and then map it to the external one.
> 
> If there are no strong opinions regarding this, I plan to 
> implement this shortly. There will be additional method in Hot Rod 
> Client configuration (ConfigurationBuilder#addServerMapping(String 
> mappingClass)) which will be responsible for mapping external addresses 
> to internal and vice-versa.
> 
> Thoughts?
> 
> Thanks,
> Sebastian
> 
> [1] https://github.com/kubernetes/community/pull/446
> [2] https://github.com/slaskawi/external-ip-proxy
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Documentation code snippets

2017-05-05 Thread Tristan Tarrant
>>> Having a good documentation is IMHO crucial for people to like our
>>> technology
>>> and the key point is having code snippets in the documentation up to date
>>> and working. During review of my parts, I found out many and many outdated
>>> code snippets, either non-compilable or using deprecated methods. I would
>>> like to eliminate this issue in the future, so it would make our
>>> documentation better and also remove burden when doing documentation
>>> review.
>>>
>>> I did some research and I found out that Hibernate team (thanks Radim,
>>> Sanne
>>> for the information) does a very cool thing and that is that the code
>>> snippets are taken right from testsuite. This way they know that the code
>>> snippet can always compile and also make sure that it's working properly. I
>>> would definitely love to see the same in Infinispan.
>>>
>>> It works extremely simply that you mark by comment in the test the part,
>>> you
>>> want to include in the documentation, see an example here for the AsciiDoc
>>> part [1] and here for the test part [2]. There are two ways of how to
>>> organize that:
>>> 1) create a separate "documentation testsuite", with as simple as possible
>>> test classes - Hibernate team does it this way. Pros: documentation is
>>> easily separated. Cons: possible duplication.
>>> 2) use existing testsuite, marking the parts in the existing testsuite.
>>> Pros:
>>> no duplication. Cons: documentation snippets are spread all across the
>>> testsuite.
>>>
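The snippet-extraction mechanism described above relies on AsciiDoctor's standard `tag::`/`end::` comment markers inside the test sources. A minimal, self-contained illustration — the class name, tag name, and snippet content are invented for this example:

```java
// Sketch of the snippet-tagging approach discussed in this thread.
// The region between tag::put-and-get[] and end::put-and-get[] is what
// the AsciiDoc documentation would include verbatim.
public class CacheUsageDocTest {
    public static void main(String[] args) {
        // tag::put-and-get[]
        java.util.Map<String, String> cache = new java.util.HashMap<>();
        cache.put("key", "value");
        String value = cache.get("key");
        // end::put-and-get[]
        System.out.println(value);
    }
}
```

The documentation then pulls the marked region in with an include directive such as `include::CacheUsageDocTest.java[tags=put-and-get]`, so the published snippet is always code that actually compiles and runs in the testsuite.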
>>> I would definitely volunteer to make this happen in Infinispan
>>> documentation.
>>>
>>> What do you guys think about it?
>>>
>>> Cheers,
>>> Jiri
>>>
>>> [1]
>>> https://raw.githubusercontent.com/hibernate/hibernate-validator/master/documentation/src/main/asciidoc/ch03.asciidoc
>>> [2]
>>> https://github.com/hibernate/hibernate-orm/blob/master/documentation/src/test/java/org/hibernate/userguide/caching/FirstLevelCacheTest.java
>>>
>>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> --
>>
>>
>> SEBASTIAN ŁASKAWIEC
>>
>> INFINISPAN DEVELOPER
>>
>> Red Hat EMEA
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev

Re: [infinispan-dev] Jenkins migration

2017-04-24 Thread Tristan Tarrant
You mean exactly like the current CI is being handled? :-)

On 24 Apr 2017 13:33, "Sanne Grinovero"  wrote:

>
>
> On 24 April 2017 at 12:19, Sebastian Laskawiec 
> wrote:
>
>> Hey!
>>
>> I uninstalled the Blue Ocean plugin. I think it's worth having another look
>> at it as soon as 1.1.0 is released [1].
>>
>> I also plan to migrate 2 TeamCity Agents into Jenkins very shortly (in 30
>> mins).
>>
>> @Tristan - may I ask you to redirect ci.infinispan.org to our new
>> installation: http://ec2-52-215-14-157.eu-west-1.compute.amazonaws.com/
>>
>
> ​Please don't assign a redirect to the volatile EC2 machine names as
> you'll regret it as soon as you have to do some maintenance on AWS. Assign
> a public floating IP to that machine first, then bind the
> ci.infinispan.org domain to that IP.​
>
>
>
>>
>> More comments inlined.
>>
>> Thanks,
>> Sebastian
>>
>> [1] https://issues.jenkins-ci.org/browse/JENKINS-43751?focusedCommentId=296703
>>
>> On Mon, Apr 24, 2017 at 9:50 AM Radim Vansa  wrote:
>>
>>> I've heard that the default UI in Jenkins was the reason why we went
>>> with TC, and Blue Ocean was supposed to be the cure. Why was the default
>>> UI dismissed in the first place?
>>>
>>
>> Once BlueOcean was installed (I think by Dan or Gustavo), it replaced the
>> default UI without asking.
>>
>>
>>>
>>> R.
>>>
>>> On 04/23/2017 07:14 PM, Adrian Nistor wrote:
>>> > I also do not see much value in the current state of Blue Ocean.
>>> > Better stick with the default ui.
>>>
>>
>> +1
>>
>>
>>> >
>>> > On 04/21/2017 06:11 PM, Dan Berindei wrote:
>>> >> Looks like the invalid "control characters from U+ through
>>> >> U+001F" are the  ANSI escape codes used by WildFly to color output.
>>> >> So we might be able to work around this by disabling the color output
>>> >> in WildFly in our integration tests.
>>>
>>
>> I think it's not worth investing more time in this. Let's switch to
>> default UI and then try out BlueOcean once 1.1.0 is out.
>>
>>
>>> >>
>>> >> OTOH I'm fine with removing the Blue Ocean plugin for now, because
>>> >> its usability is sometimes worse than the default UI's. E.g. when I
>>> >> click on the build results link in GitHub, 99.999% of the time I want
>>> >> to see the test results, but Blue Ocean thinks it's much better to
>>> >> show me some circles with question marks and exclamation points
>>> >> instead, and then keep me waiting for half a minute after I click on
>>> >> the tests link :)
>>>
>>
>> +1
>>
>>
>>> >>
>>> >> Cheers
>>> >> Dan
>>> >>
>>> >>
>>> >> On Fri, Apr 21, 2017 at 4:55 PM, Sebastian Laskawiec
>>> >> > wrote:
>>> >>
>>> >> Hey!
>>> >>
>>> >> As you probably have heard I'm migrating our TeamCity
>>> >> installation [1] into Jenkins (temporarily in [2]).
>>> >>
>>> >> So far I've managed to migrate all Infinispan builds (with pull
>>> >> requests), C++/C# clients, JGroups and JGroups Kubernetes. I
>>> >> decided to use the new Pipeline [3] approach for the builds and
>>> >> keep the configuration along with the code (here's an example
>>> [4]).
>>> >>
>>> >> The configuration builds /refs/pull//head/ for Pull Requests
>>> >> at the moment. I will switch it back to /refs/pull//merge/ as
>>> >> soon as our PR queue size is ~20.
>>> >>
>>> >> Current pain points are:
>>> >>
>>> >>   * Blue Ocean UI doesn't show tests. It has been reported in
>>> >> [5]. The workaround is to use the old Jenkins UI.
>>> >>   * Windows VM doesn't start on demand (together with Vittorio we
>>> >> will be working on this)
>>> >>
>>> >> The rough plan is:
>>> >>
>>> >>   * Apr 24th, move other 2 agents from TeamCity to Jenkins
>>> >>   * Apr 24th, redirect ci.infinispan.org
>>> >>  domain
>>> >>   * May 4th, remove TeamCity
>>> >>
>>> >> Please let me know if you have any questions or concerns.
>>> >>
>>> >> Thanks,
>>> >> Sebastian
>>> >>
>>> >> [1] http://ci.infinispan.org/
>>> >> [2] http://ec2-52-215-14-157.eu-west-1.compute.amazonaws.com
>>> >> 
>>> >> [3] https://jenkins.io/doc/book/pipeline/
>>> >> 
>>> >> [4]
>>> >> https://github.com/infinispan/infinispan/blob/master/Jenkinsfile
>>> >> >> >
>>> >> [5] https://issues.jenkins-ci.org/browse/JENKINS-43751
>>> >> 
>>> >> --
>>> >>
>>> >> SEBASTIANŁASKAWIEC
>>> >>
>>> >> INFINISPAN DEVELOPER
>>> >>
>>> >> Red HatEMEA 
>>> >>
>>> >> 
>>> >>
>>> >>
>>> >> ___
>>> >> infinispan-dev mailing list
>>> >> infinispan-dev@lists.jboss.org
>>> >>   

Re: [infinispan-dev] Jenkins migration

2017-04-24 Thread Tristan Tarrant
Tristan is on PTO. He'll fix DNS on Wednesday :-)

On 24 Apr 2017 13:26, "Sebastian Laskawiec"  wrote:

> Hey!
>
> I uninstalled the Blue Ocean plugin. I think it's worth having another look
> at it as soon as 1.1.0 is released [1].
>
> I also plan to migrate 2 TeamCity Agents into Jenkins very shortly (in 30
> mins).
>
> @Tristan - may I ask you to redirect ci.infinispan.org to our new
> installation: http://ec2-52-215-14-157.eu-west-1.compute.amazonaws.com/
>
> More comments inlined.
>
> Thanks,
> Sebastian
>
> [1] https://issues.jenkins-ci.org/browse/JENKINS-43751?focusedCommentId=296703
>
> On Mon, Apr 24, 2017 at 9:50 AM Radim Vansa  wrote:
>
>> I've heard that the default UI in Jenkins was the reason why we went
>> with TC, and Blue Ocean was supposed to be the cure. Why was the default
>> UI dismissed in the first place?
>>
>
> Once BlueOcean was installed (I think by Dan or Gustavo), it replaced the
> default UI without asking.
>
>
>>
>> R.
>>
>> On 04/23/2017 07:14 PM, Adrian Nistor wrote:
>> > I also do not see much value in the current state of Blue Ocean.
>> > Better stick with the default ui.
>>
>
> +1
>
>
>> >
>> > On 04/21/2017 06:11 PM, Dan Berindei wrote:
>> >> Looks like the invalid "control characters from U+ through
>> >> U+001F" are the  ANSI escape codes used by WildFly to color output.
>> >> So we might be able to work around this by disabling the color output
>> >> in WildFly in our integration tests.
>>
>
> I think it's not worth investing more time in this. Let's switch to
> default UI and then try out BlueOcean once 1.1.0 is out.
>
>
>> >>
>> >> OTOH I'm fine with removing the Blue Ocean plugin for now, because
>> >> its usability is sometimes worse than the default UI's. E.g. when I
>> >> click on the build results link in GitHub, 99.999% of the time I want
>> >> to see the test results, but Blue Ocean thinks it's much better to
>> >> show me some circles with question marks and exclamation points
>> >> instead, and then keep me waiting for half a minute after I click on
>> >> the tests link :)
>>
>
> +1
>
>
>> >>
>> >> Cheers
>> >> Dan
>> >>
>> >>
>> >> On Fri, Apr 21, 2017 at 4:55 PM, Sebastian Laskawiec
>> >> > wrote:
>> >>
>> >> Hey!
>> >>
>> >> As you probably have heard I'm migrating our TeamCity
>> >> installation [1] into Jenkins (temporarily in [2]).
>> >>
>> >> So far I've managed to migrate all Infinispan builds (with pull
>> >> requests), C++/C# clients, JGroups and JGroups Kubernetes. I
>> >> decided to use the new Pipeline [3] approach for the builds and
>> >> keep the configuration along with the code (here's an example [4]).
>> >>
>> >> The configuration builds /refs/pull//head/ for Pull Requests
>> >> at the moment. I will switch it back to /refs/pull//merge/ as
>> >> soon as our PR queue size is ~20.
>> >>
>> >> Current pain points are:
>> >>
>> >>   * Blue Ocean UI doesn't show tests. It has been reported in
>> >> [5]. The workaround is to use the old Jenkins UI.
>> >>   * Windows VM doesn't start on demand (together with Vittorio we
>> >> will be working on this)
>> >>
>> >> The rough plan is:
>> >>
>> >>   * Apr 24th, move other 2 agents from TeamCity to Jenkins
>> >>   * Apr 24th, redirect ci.infinispan.org
>> >>  domain
>> >>   * May 4th, remove TeamCity
>> >>
>> >> Please let me know if you have any questions or concerns.
>> >>
>> >> Thanks,
>> >> Sebastian
>> >>
>> >> [1] http://ci.infinispan.org/
>> >> [2] http://ec2-52-215-14-157.eu-west-1.compute.amazonaws.com
>> >> 
>> >> [3] https://jenkins.io/doc/book/pipeline/
>> >> 
>> >> [4]
>> >> https://github.com/infinispan/infinispan/blob/master/Jenkinsfile
>> >> 
>> >> [5] https://issues.jenkins-ci.org/browse/JENKINS-43751
>> >> 
>> >> --
>> >>
>> >> SEBASTIANŁASKAWIEC
>> >>
>> >> INFINISPAN DEVELOPER
>> >>
>> >> Red HatEMEA 
>> >>
>> >> 
>> >>
>> >>
>> >> ___
>> >> infinispan-dev mailing list
>> >> infinispan-dev@lists.jboss.org
>> >> 
>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> >> 
>> >>
>> >>
>> >>
>> >>
>> >> ___
>> >> infinispan-dev mailing list
>> >> infinispan-dev@lists.jboss.org
>> >> https://lists.jboss.org/mailman/listinfo/infinispan-dev
>> >
>> >
>> >
>> >
>> > ___
>> > infinispan-dev mailing 

Re: [infinispan-dev] Infinispan Query API simplification

2017-04-20 Thread Tristan Tarrant
On 20/04/2017 15:34, Dan Berindei wrote:
>> How big is the DSL API surface (which will be brought into commons)?
> 
> -1 from me to add anything in commons, I don't think allowing the
> users to query both embedded caches and remote caches with the same
> code is that important. I'd rather go the opposite way and remove the
> BasicCache interface completely.

Actually, we've had requests for interchangeable APIs...

So, according to your strategy we either have each feature implemented 
with a divergent specific embedded or remote API, or each feature has 
its own feature-api with two separate feature-embedded and 
feature-remote implementations. Both plans sound terrible.

Alternatively, we could go with an infinispan-api package (which Paul 
has been advocating for a long time) which would contain the various 
interfaces.

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Infinispan Query API simplification

2017-04-20 Thread Tristan Tarrant
Actually, the API for counters et alia is going into commons (so that it 
can be shared between embedded and remote). Additionally, something like 
a counter has no API relationship with a Cache, whereas query does.

Tristan

On 20/04/2017 14:56, Radim Vansa wrote:
> That's completely opposite approach from the one outlined for
> distributed counters and other "on-top" functionality (the same approach
> was later suggested for conflict resolution manager, multimap and maybe
> others). Why is query 1st level citizen and those others are not?
> 
> I am not opposing the idea but let's define the line between patriarchs
> and plebeians.
> 
> How big is the DSL API surface (which will be brought into commons)?
> 
> R.
> 
> On 04/20/2017 02:08 PM, Tristan Tarrant wrote:
>> Querying an Infinispan cache is currently a bit cumbersome. There are
>> two paths:
>>
>> Ickle:
>> Search.getQueryFactory(cache).create("...").list();
>>
>> DSL (one possible example):
>> Search.getQueryFactory(cache).from(class).[filters].build().list();
>>
>> Ideally we should have something like:
>>
>> Ickle:
>> cache.query("...").list();
>>
>> DSL:
>> cache.query(class).[filters].list();
>>
>> Additionally, the query module is separate from infinispan-core. While
>> this made sense when we didn't have non-indexed query capabilities (and
>> is somewhat mitigated by the uberjars), I feel that query should be a
>> 1st class citizen API-wise.
>> For this reason I propose that we extract the query API to
>> infinispan-commons,  put the query SPI in infinispan-core together with
>> the non-indexed implementation and have the hibernate-search backend as
>> a pluggable implementation.
>>
>> Thoughts ?
>>
>> Tristan
>>
> 
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Infinispan Query API simplification

2017-04-20 Thread Tristan Tarrant
Querying an Infinispan cache is currently a bit cumbersome. There are 
two paths:

Ickle:
Search.getQueryFactory(cache).create("...").list();

DSL (one possible example):
Search.getQueryFactory(cache).from(class).[filters].build().list();

Ideally we should have something like:

Ickle:
cache.query("...").list();

DSL:
cache.query(class).[filters].list();
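To make the proposed surface concrete, here is a toy sketch of how the simplified entry point could read in client code. The `SimpleQuery` and `SketchCache` types below are invented for illustration — they are not real Infinispan classes — and the filter is a plain predicate standing in for Ickle or the DSL:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the proposed cache.query(...) surface: query
// directly off the cache instead of Search.getQueryFactory(cache).
interface SimpleQuery<T> {
    List<T> list();
}

class SketchCache<T> {
    private final List<T> values = new ArrayList<>();

    void put(T value) { values.add(value); }

    // Stand-in for the proposed entry point; a predicate replaces the
    // Ickle string / DSL filters purely for demonstration purposes.
    SimpleQuery<T> query(Predicate<T> filter) {
        return () -> {
            List<T> result = new ArrayList<>();
            for (T v : values) {
                if (filter.test(v)) result.add(v);
            }
            return result;
        };
    }
}

public class QueryDemo {
    public static void main(String[] args) {
        SketchCache<String> cache = new SketchCache<>();
        cache.put("alpha");
        cache.put("beta");
        // Reads like the proposed fluent style: cache.query(...).list()
        System.out.println(cache.query(s -> s.startsWith("a")).list());
    }
}
```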

Additionally, the query module is separate from infinispan-core. While 
this made sense when we didn't have non-indexed query capabilities (and 
is somewhat mitigated by the uberjars), I feel that query should be a 
1st class citizen API-wise.
For this reason I propose that we extract the query API to 
infinispan-commons,  put the query SPI in infinispan-core together with 
the non-indexed implementation and have the hibernate-search backend as 
a pluggable implementation.

Thoughts ?

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Hot Rod secured by default

2017-04-19 Thread Tristan Tarrant
That is caused by not wrapping the calls in PrivilegedActions in all the 
correct places and is a bug.

Tristan

On 19/04/2017 11:34, Sebastian Laskawiec wrote:
> The proposal look ok to me.
> 
> But I would also like to highlight one thing - it seems you can't access 
> secured cache properties using CLI. This seems wrong to me (if you can 
> invoke the cli, in 99,99% of the cases you have access to the machine, 
> so you can do whatever you want). It also breaks healthchecks in Docker 
> image.
> 
> I would like to make sure we will address those concerns.
> 
> On Wed, Apr 19, 2017 at 10:59 AM Tristan Tarrant <ttarr...@redhat.com 
> <mailto:ttarr...@redhat.com>> wrote:
> 
> Currently the "protected cache access" security is implemented as
> follows:
> 
> - if authorization is enabled || client is on loopback
> allow
> 
> The first check also implies that authentication needs to be in place,
> as the authorization checks need a valid Subject.
> 
> Unfortunately authorization is very heavy-weight and actually overkill
> even for "normal" secure usage.
> 
> My proposal is as follows:
> - the "default" configuration files are "secure" by default
> - provide clearly marked "unsecured" configuration files, which the user
> can use
> - drop the "protected cache" check completely
> 
> And definitely NO to a dev switch.
> 
> Tristan
> 
> On 19/04/2017 10:05, Galder Zamarreño wrote:
>  > Agree with Wolf. Let's keep it simple by just providing extra
> configuration files for dev/unsecure envs.
>  >
>  > Cheers,
>  > --
>  > Galder Zamarreño
>  > Infinispan, Red Hat
>  >
>  >> On 15 Apr 2017, at 12:57, Wolf Fink <wf...@redhat.com
> <mailto:wf...@redhat.com>> wrote:
>  >>
>  >> I would think a "switch" can have other impacts as you need to
> check it in the code - and might have security leaks here
>  >>
>  >> So what is wrong with some configurations which are the default
> and secured.
>  >> and a "*-dev or *-unsecure" configuration to start easy.
>  >> Also this can be used in production if there is no need for security
>  >>
>  >> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec
> <slask...@redhat.com <mailto:slask...@redhat.com>> wrote:
>  >> I still think it would be better to create an extra switch to
> run infinispan in "development mode". This means no authentication,
> no encryption, possibly with JGroups stack tuned for fast discovery
> (especially in Kubernetes) and a big warning saying "You are in
> development mode, do not use this in production".
>  >>
>  >> Just something very easy to get you going.
>  >>
>  >> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarreño
> <gal...@redhat.com <mailto:gal...@redhat.com>> wrote:
>  >>
>  >> --
>  >> Galder Zamarreño
>  >> Infinispan, Red Hat
>  >>
>  >>> On 13 Apr 2017, at 09:50, Gustavo Fernandes
> <gust...@infinispan.org <mailto:gust...@infinispan.org>> wrote:
>  >>>
>  >>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarreño
> <gal...@redhat.com <mailto:gal...@redhat.com>> wrote:
>  >>> Hi all,
>  >>>
>  >>> As per some discussions we had yesterday on IRC w/ Tristan,
> Gustavo and Sebastian, I've created a docker image snapshot that
> reverts the change stop protected caches from requiring security
> enabled [1].
>  >>>
>  >>> In other words, I've removed [2]. The reason for temporarily
> doing that is because with the change as is, the changes required
> for a default server distro require that the entire cache manager's
> security is enabled. This is in turn creates a lot of problems with
> health and running checks used by Kubernetes/OpenShift amongst other
> things.
>  >>>
>  >>> Judging from our discussions on IRC, the idea is for such
> change to be present in 9.0.1, but I'd like to get final
> confirmation from Tristan et al.
>  >>>
>  >>>
>  >>> +1
>  >>>
>  >>> Regarding the "security by default" discussion, I think we
> should ship configurations cloud.xml, clustered.xml and
> standal

Re: [infinispan-dev] Hot Rod secured by default

2017-04-19 Thread Tristan Tarrant
Currently the "protected cache access" security is implemented as follows:

- if authorization is enabled || client is on loopback
   allow

The first check also implies that authentication needs to be in place, 
as the authorization checks need a valid Subject.
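The loopback half of the check quoted above can be illustrated with the standard `java.net.InetAddress` API. This is only a demonstrative sketch of the described test, not the actual server code; the method name is made up:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrates the check "if authorization is enabled || client is on
// loopback -> allow" using the stdlib loopback test. Not Infinispan code.
public class LoopbackCheckDemo {
    static boolean allowProtectedAccess(boolean authorizationEnabled, InetAddress client) {
        return authorizationEnabled || client.isLoopbackAddress();
    }

    public static void main(String[] args) throws UnknownHostException {
        InetAddress local = InetAddress.getByName("127.0.0.1");
        InetAddress remote = InetAddress.getByName("192.168.1.20");
        System.out.println(allowProtectedAccess(false, local));   // loopback client
        System.out.println(allowProtectedAccess(false, remote));  // remote client, no authz
    }
}
```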

Unfortunately authorization is very heavy-weight and actually overkill 
even for "normal" secure usage.

My proposal is as follows:
- the "default" configuration files are "secure" by default
- provide clearly marked "unsecured" configuration files, which the user 
can use
- drop the "protected cache" check completely

And definitely NO to a dev switch.

Tristan

On 19/04/2017 10:05, Galder Zamarreño wrote:
> Agree with Wolf. Let's keep it simple by just providing extra configuration 
> files for dev/unsecure envs.
> 
> Cheers,
> --
> Galder Zamarreño
> Infinispan, Red Hat
> 
>> On 15 Apr 2017, at 12:57, Wolf Fink <wf...@redhat.com> wrote:
>>
>> I would think a "switch" can have other impacts as you need to check it in 
>> the code - and might have security leaks here
>>
>> So what is wrong with some configurations which are the default and secured.
>> and a "*-dev or *-unsecure" configuration to start easy.
>> Also this can be used in production if there is no need for security
>>
>> On Thu, Apr 13, 2017 at 4:13 PM, Sebastian Laskawiec <slask...@redhat.com> 
>> wrote:
>> I still think it would be better to create an extra switch to run infinispan 
>> in "development mode". This means no authentication, no encryption, possibly 
>> with JGroups stack tuned for fast discovery (especially in Kubernetes) and a 
>> big warning saying "You are in development mode, do not use this in 
>> production".
>>
>> Just something very easy to get you going.
>>
>> On Thu, Apr 13, 2017 at 12:16 PM Galder Zamarreño <gal...@redhat.com> wrote:
>>
>> --
>> Galder Zamarreño
>> Infinispan, Red Hat
>>
>>> On 13 Apr 2017, at 09:50, Gustavo Fernandes <gust...@infinispan.org> wrote:
>>>
>>> On Thu, Apr 13, 2017 at 6:38 AM, Galder Zamarreño <gal...@redhat.com> wrote:
>>> Hi all,
>>>
>>> As per some discussions we had yesterday on IRC w/ Tristan, Gustavo and 
>>> Sebastian, I've created a docker image snapshot that reverts the change 
>>> stop protected caches from requiring security enabled [1].
>>>
>>> In other words, I've removed [2]. The reason for temporarily doing that is 
>>> because with the change as is, the changes required for a default server 
>>> distro require that the entire cache manager's security is enabled. This is 
>>> in turn creates a lot of problems with health and running checks used by 
>>> Kubernetes/OpenShift amongst other things.
>>>
>>> Judging from our discussions on IRC, the idea is for such change to be 
>>> present in 9.0.1, but I'd like to get final confirmation from Tristan et al.
>>>
>>>
>>> +1
>>>
>>> Regarding the "security by default" discussion, I think we should ship 
>>> configurations cloud.xml, clustered.xml and standalone.xml with security 
>>> enabled and disabled variants, and let users
>>> decide which one to pick based on the use case.
>>
>> I think that's a better idea.
>>
>> We could by default have a secured one, but switching to an insecure 
>> configuration should be doable with minimal effort, e.g. just switching 
>> config file.
>>
>> As highlighted above, any secured configuration should work out-of-the-box 
>> with our docker images, e.g. WRT healthy/running checks.
>>
>> Cheers,
>>
>>>
>>> Gustavo.
>>>
>>>
>>> Cheers,
>>>
>>> [1] https://hub.docker.com/r/galderz/infinispan-server/tags/ 
>>> (9.0.1-SNAPSHOT tag for anyone interested)
>>> [2] 
>>> https://github.com/infinispan/infinispan/blob/master/server/hotrod/src/main/java/org/infinispan/server/hotrod/CacheDecodeContext.java#L114-L118
>>> --
>>> Galder Zamarreño
>>> Infinispan, Red Hat
>>>
>>>> On 30 Mar 2017, at 14:25, Tristan Tarrant <ttarr...@redhat.com> wrote:
>>>>
>>>> Dear all,
>>>>
>>>> after a mini chat on IRC, I wanted to bring this to everybody's attention.
>>>>
>>>> We should make the Hot Rod endpoint require authentication in the
>>>> out-of-the-box configuration.
>>>> The proposal is to enab

Re: [infinispan-dev] Branching 9.0.x

2017-04-12 Thread Tristan Tarrant
master is now 9.1.0-SNAPSHOT.
9.0.x is 9.0.1-SNAPSHOT

Tristan

On 11/04/2017 21:05, Tristan Tarrant wrote:
> Hi all,
> 
> I am going to branch 9.0.x tomorrow at 12:00 CEST. Let me know if I 
> should delay this.
> 
> Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


[infinispan-dev] Branching 9.0.x

2017-04-11 Thread Tristan Tarrant
Hi all,

I am going to branch 9.0.x tomorrow at 12:00 CEST. Let me know if I 
should delay this.

Tristan
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
___
infinispan-dev mailing list
infinispan-dev@lists.jboss.org
https://lists.jboss.org/mailman/listinfo/infinispan-dev


Re: [infinispan-dev] Weekly IRC meeting logs 2017-04-10

2017-04-10 Thread Tristan Tarrant
My activities:

- Server restructuring proposal [1]
- ISPN-7712 Common Name role mapper case, PR @ [2]
- ISPN-7706 If mech EXTERNAL and no callback supplied, use 
VoidCallbackHandler, PR coming soon
- ISPN-7707 Finer configuration options for keystores/truststores, PR 
coming soon
- ISPN-7709 Generate certificates/keystores on the fly so they work on 
all JDKs, PR coming soon

I'd like to finish these and also fix the ISPN-7678 PR [3]

This week will be short: PTO Thursday and Friday, holiday on Easter Monday.

Tristan

[1] 
https://github.com/infinispan/infinispan-designs/blob/master/Server%20Restructuring.md
[2] https://github.com/infinispan/infinispan/pull/5066
[3] https://github.com/infinispan/infinispan/pull/5032


On 10/04/2017 17:03, Dan Berindei wrote:
> Hi everyone
> 
> We had another weekly IRC meeting today, and the logs are here:
> 
> http://transcripts.jboss.org/meeting/irc.freenode.org/infinispan/2017/infinispan.2017-04-10-14.00.html
> 
> Cheers
> Dan
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Native Infinispan Multimap support

2017-04-05 Thread Tristan Tarrant


On 05/04/2017 10:05, Sebastian Laskawiec wrote:
> I love the idea of starting with a simple interface, so +1000 from me.
> 
> I'm also assuming that our new MultiMap will be accessible in both 
> Embedded and Client/Server mode, am I correct? 

Agreed. The design must take into account both variants before it can be 
considered ready.

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] My Weekly report

2017-04-03 Thread Tristan Tarrant
I won't be able to attend today's meeting, so here's my summary / plan:

- ISPN-7683 unregister the topology cache on Hot Rod server stop
- ISPN-7678 prevent endpoint restart on default cache reconfiguration
   - still some snags due to the Hot Rod server wanting to prestart 
caches, which I want to remove
- Did the 9.0 release
   - Updated the embedded tutorial
   - Made some fixes to the website (roadmap, features, etc)
- Some security work
   - Hot Rod authn by default (Digest)
   - generate the certificate chains on the fly for the testsuite to 
avoid cert rot and issues with alternate JDKs
   - Initial design for fine-grained security, will share soon
   - Resumed work on the threadlocal-less secure cache for better server 
performance
   - I also need to introduce "identity-shipping", so that running 
streams in a secure context will apply the same identity across all nodes.
-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Hot Rod secured by default

2017-03-31 Thread Tristan Tarrant
You want to use OpenSSL with Netty:

http://netty.io/wiki/requirements-for-4.x.html#wiki-h4-4

Tristan

On 31/03/2017 15:55, Sebastian Laskawiec wrote:
> Unfortunately TLS still slows down stuff (a lot). When I was doing tests 
> for the multi-tenancy router (which is based on TLS/SNI), my average 
> results were like this:
> 
> Use-case                         Type                  Avg       Error
> initConnectionAndPerform10KPuts  SingleServerNoSsl     1034.817  14.424
> initConnectionAndPerform10KPuts  SingleServerWithSsl   1567.553  24.872
> initConnectionAndPerform10KPuts  TwoServersWithSslSni  1563.229  34.05
> initConnectionOnly               SingleServerNoSsl     3.389     0.198
> initConnectionOnly               SingleServerWithSsl   14.086    0.794
> initConnectionOnly               TwoServersWithSslSni  14.722    0.684
> perform10KPuts                   SingleServerNoSsl     4.602     0.585
> perform10KPuts                   SingleServerWithSsl   16.583    0.198
> perform10KPuts                   TwoServersWithSslSni  17.02     0.794
> 
> This is nothing new, but initializing the Hot Rod connection was ~4 
> times slower and putting 10K random strings (UUIDs) was also ~4 times 
> slower. It is worth mentioning, though, that there is no significant 
> difference between TLS and TLS+SNI.
> 
> As far as I know, it is possible to install specialized hardware to deal 
> with encryption in data centers. It is called SSL Acceleration [1]. 
> However I'm not aware of any special processor instructions that can 
> help you with that. But the implementations are getting better and 
> better, so who knows...
> 
> But getting back to the original question, I think the problem we are 
> trying to solve (correct me if I'm wrong) is to prevent unauthorized 
> folks to put their hands on a victims data (either pushing something 
> malicious/corrupted to the cache or obtaining something from the cache). 
> Another problem is transmission security - encryption. If we want our 
> new devs to be secured out of the box, I think we should do both - use 
> TLS (without trusting all certificates) and authentication. This makes 
> Infinispan harder to use of course. So the other extremum is to turn 
> both things off.
> 
> I voted for the latter, making Infinispan super easy to use. But you 
> guys convinced me that we should care about the security in this case 
> too, so I would use PLAIN authentication + TLS. I would also love to see 
> one magic switch, for example `./bin/standalone.sh --dev-mode`, which 
> would turn all security off.
> 
> Thanks,
> Sebastian
> 
> [1] https://en.wikipedia.org/wiki/SSL_acceleration
> 
> 
> On Thu, Mar 30, 2017 at 9:22 PM Dan Berindei <dan.berin...@gmail.com> wrote:
> 
> I agree with Radim, PLAIN authentication without encryption makes it
> too easy to sniff the password from another machine.
> 
> I have no idea how expensive SSL encryption is in WildFly, but I think
> all recent processors have specialized instructions for helping with
> encryption, so it may not be that bad.
> 
> Even with encryption, if the client trusts all certs, it may be
> possible for an attacker to insert itself in the middle and decode
> everything -- depending on network topology and what kind of access
> the attacker already has. I think it only makes sense to trust all
> certs if we also implement something like HPKP [1], to make it more
> like ssh.
> 
> [1]: https://en.wikipedia.org/wiki/HTTP_Public_Key_Pinning
> 
> Cheers
> Dan
> 
> 
> 
>  > On Thu, Mar 30, 2017 at 7:07 PM, Wolf Fink <wf...@redhat.com> wrote:
>  > +1 to make the default secure.
>  >
>  > -1 SSL by default as it makes it slower and I think most will not
>  > use it
>  >
>  > -1 easy trust all certs, That sounds to me we close one door and
> make it
>  > possible to open another one
>  >
>  >
>  > What if we add an unsecured example configuration which can be
>  > simply copied for examples and to get started.
>  >
>  >
>  > On Thu, Mar 30, 2017 at 5:31 PM, Dennis Reed <der...@redhat.com> wrote:
>  >>
>  >> +1 to authentication and encryption by default.
>  >>   This is 2017, that's how *everything* should be configured.
>  >>
>  >> -1 to making it easy to trust all certs.  That negates the point of
>  >> using encryption in the first place and should really never be done.
>  >>
>  >> If it's too hard to configure the correct way that we think it would
>  >> turn users away, that's a usability problem that needs to be fixed.
>  >>
>  >> -Dennis
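[Editor's note] The secured-out-of-the-box setup debated in this thread translates, on the client side, into something like the sketch below. It uses the Hot Rod Java client's ConfigurationBuilder; the host, port, truststore path and credentials are placeholders, and the exact builder method names vary between client versions, so treat this as an assumption to check against your client's API rather than a definitive configuration:

```java
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class SecureHotRodClient {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        // Placeholder address of an Infinispan server
        builder.addServer().host("127.0.0.1").port(11222);

        // Encryption: trust the server certificate via an explicit
        // truststore instead of "trust all certs" (placeholder path/password)
        builder.security().ssl()
               .enable()
               .trustStoreFileName("/path/to/truststore.jks")
               .trustStorePassword("secret".toCharArray());

        // Authentication: a DIGEST mechanism avoids sending the password
        // in the clear even without TLS (placeholder credentials)
        builder.security().authentication()
               .enable()
               .saslMechanism("DIGEST-MD5")
               .username("developer")
               .password("changeme".toCharArray());

        try (RemoteCacheManager remote = new RemoteCacheManager(builder.build())) {
            remote.getCache().put("greeting", "secured hello");
        }
    }
}
```

This illustrates the trade-off discussed above: both the TLS handshake and the SASL exchange add setup cost, but only the truststore part requires extra client-side artifacts.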

Re: [infinispan-dev] Infinispan Designs repository

2017-03-31 Thread Tristan Tarrant
Already done sir.

Tristan

On 31/03/2017 15:20, Emmanuel Bernard wrote:
> Should we (as in someone specific that is not me) migrate all the GitHub 
> proposal / design pages from the Wiki?
> 
>> On 31 Mar 2017, at 14:25, Tristan Tarrant <ttarr...@redhat.com> wrote:
>>
>> As was pointed out by Sebastian, GitHub's wiki doesn't really take
>> advantage of the wonderful review tools that are instead available for
>> traditional repos.
>> Instead of creating noise in the main infinispan repo, I have setup a
>> dedicated repo for design ideas. Let's fill it up !
>>
>> https://github.com/infinispan/infinispan-designs
>>
>> Tristan
>>
>> -- 
>> Tristan Tarrant
>> Infinispan Lead
>> JBoss, a division of Red Hat
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> _______
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Infinispan Designs repository

2017-03-31 Thread Tristan Tarrant
As was pointed out by Sebastian, GitHub's wiki doesn't really take 
advantage of the wonderful review tools that are instead available for 
traditional repos.
Instead of creating noise in the main infinispan repo, I have setup a 
dedicated repo for design ideas. Let's fill it up !

https://github.com/infinispan/infinispan-designs

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Single Endpoint design

2017-03-31 Thread Tristan Tarrant
No, once the connection is established, I believe the netty pipeline can 
be trimmed to the necessary elements.

Tristan

On 31/03/2017 13:57, Gustavo Fernandes wrote:
> On Fri, Mar 31, 2017 at 11:02 AM, Tristan Tarrant <ttarr...@redhat.com> wrote:
> 
> You understood incorrectly.
> The only change to the Hot Rod clients is that, if they get a 400 error
> from a HR PING request, they will initiate an upgrade to Hot Rod and
> then proceed with the usual Hot Rod protocol after that.
> 
> 
> Thanks for the clarification. Still, after the HR protocol is 
> negotiated, communication will go
> through a router, thus adding an extra hop?
> 
> Gustavo
> 
> Tristan
> 
> On 31/03/2017 11:58, Gustavo Fernandes wrote:
> > Hi Sebastian,
> >
> > If I understood it correctly, all the Hot Rod clients will be changed
> > from using:
> >
> > - Binary over TCP, circa 40 bytes header, no hops to contact the server,
> > no protocol negotiation, no encryption (default)
> >
> > to
> >
> > - HTTP/2 with SSL, protocol upgrade negotiation, and a hop (router) to
> > connect to the server.
> >
> >
> > Any idea of how significant would be this extra overhead introduced?
> >
> >
> > Thanks,
> > Gustavo
> >
> >
>  > On Thu, Mar 30, 2017 at 2:01 PM, Sebastian Laskawiec
>  > <slask...@redhat.com> wrote:
>  >
>  > Hey!
>  >
>  > My plan is to start working on a Single Point support for Infinispan
>  > Server very soon and I prepared a design:
>  > https://github.com/infinispan/infinispan/pull/5041
>  >
>  > As you can see I did not use our Wiki (as we used to) because it
>  > doesn't support inline comments (which is pretty bad in my opinion).
>  > I would like to propose to keep all the designs along with our
>  > source code. This approach has been successfully used by the
>  > Kubernetes [1] folks (although they migrated designs into the new
>  > Community repository [2] recently). I think it might be a good idea
>  > to do something similar.
>  >
>  > Feedback on both items is more than welcome.
>  >
>  > Thanks,
>  > Sebastian
>  >
>  > [1] https://github.com/kubernetes/kubernetes/tree/master/docs/proposals
>  > [2] https://github.com/kubernetes/community/tree/master/contributors/design-proposals
>  >
>  > ___
>  > infinispan-dev mailing list
>  > infinispan-dev@lists.jboss.org
>  > https://lists.jboss.org/mailman/listinfo/infinispan-dev
> >
> 
> --
> Tristan Tarrant
> Infinispan Lead
> JBoss, a divisi

Re: [infinispan-dev] Single Endpoint design

2017-03-31 Thread Tristan Tarrant
You understood incorrectly.
The only change to the Hot Rod clients is that, if they get a 400 error 
from a HR PING request, they will initiate an upgrade to Hot Rod and 
then proceed with the usual Hot Rod protocol after that.

Tristan

On 31/03/2017 11:58, Gustavo Fernandes wrote:
> Hi Sebastian,
> 
> If I understood it correctly, all the Hot Rod clients will be changed 
> from using:
> 
> - Binary over TCP, circa 40 bytes header, no hops to contact the server, 
> no protocol negotiation, no encryption (default)
> 
> to
> 
> - HTTP/2 with SSL, protocol upgrade negotiation, and a hop (router) to 
> connect to the server.
> 
> 
> Any idea of how significant would be this extra overhead introduced?
> 
> 
> Thanks,
> Gustavo
> 
> 
> On Thu, Mar 30, 2017 at 2:01 PM, Sebastian Laskawiec <slask...@redhat.com> wrote:
> 
> Hey!
> 
> My plan is to start working on a Single Point support for Infinispan
> Server very soon and I prepared a design:
> https://github.com/infinispan/infinispan/pull/5041
> 
> As you can see I did not use our Wiki (as we used to) because it
> doesn't support inline comments (which is pretty bad in my opinion).
> I would like to propose to keep all the designs along with our
> source code. This approach has been successfully used by the
> Kubernetes [1] folks (although they migrated designs into the new
> Community repository [2] recently). I think it might be a good idea
> to do something similar.
> 
> Feedback on both items is more than welcome.
> 
> Thanks,
> Sebastian
> 
> [1] https://github.com/kubernetes/kubernetes/tree/master/docs/proposals
> [2] https://github.com/kubernetes/community/tree/master/contributors/design-proposals
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 
> 
> 
> ___
> infinispan-dev mailing list
> infinispan-dev@lists.jboss.org
> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


[infinispan-dev] Infinispan 9.0 Final

2017-03-31 Thread Tristan Tarrant
Dear all,

we are proud to announce Infinispan 9.0 Final.
This release includes many new features and improvements:

- much improved performance in all scenarios
- off-heap data container, to avoid GC pauses
- Ickle, a new query language based on JP-QL with full-text capabilities
- multi-tenancy with SNI support for the server
- vastly improved cloud and container integrations

Read more about it in our announcement [1]
As usual you can find all the downloads, documentation and community 
links on our website: http://infinispan.org

Enjoy !

The Infinispan Team

[1] http://blog.infinispan.org/2017/03/infinispan-9.html

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat


Re: [infinispan-dev] Strategy to adopting Optional in APIs

2017-03-31 Thread Tristan Tarrant
I was about to say the same: in the typical use case of returning an 
optional and using it immediately it would probably end up on the stack 
anyway...

Tristan

On 31/03/2017 09:57, Radim Vansa wrote:
> I secretly hope that all these allocations would be inlined and
> eliminated. If we find out that it really allocates the objects (from
> JFR's allocation stats), it's a reason to rewrite that piece of code to
> the dull optionless version.
> TBH I am rather afraid that the JVM will allocate the consumer which
> will need some captured variables. Maybe I trust C2 compiler too much,
> believing that if the handler isn't too big, it will generate similar
> instructions with nicer source code :-/
> 
> R.
> 
> 
> On 03/30/2017 11:08 PM, Sanne Grinovero wrote:
>> I'm for "at discretion" and "avoid if not really needed" : not cool to
>> allocate objects for no reason.
>>
>> On 30 Mar 2017 16:57, "Radim Vansa" <rva...@redhat.com> wrote:
>>
>>  Hi,
>>
>>  I was wondering what's the common attitude towards using Optional in
>>  APIs, and what naming pattern should we use. As an example, I dislike
>>  calling
>>
>>  if (entry.getMetadata() != null && entry.getMetadata().version()
>>  != null) {
>>   foo.use(entry.getMetadata().version());
>>  }
>>
>>  where I could just do
>>
>>  entry.metadata().flatMap(Metadata::optionalVersion).ifPresent(foo::use)
>>
>>  Here I have proposed a metadata() method returning Optional<Metadata>
>>  (the regular getter method is called getMetadata()) and the annoying
>>  optionalVersion(), as version() is the regular getter.
>>
>>  Shall we adopt some common stance (use/don't use/use at developer's
>>  discretion) and naming conventions? Is it acceptable to start adding
>>
>>  default Optional<Foo> foo() { return Optional.ofNullable(getFoo()); }
>>
>>  whenever we feel the urge to chain Optionals?
>>
>>  Radim
>>
>>  --
>>  Radim Vansa <rva...@redhat.com <mailto:rva...@redhat.com>>
>>  JBoss Performance Team
>>
>>  ___
>>  infinispan-dev mailing list
>>  infinispan-dev@lists.jboss.org <mailto:infinispan-dev@lists.jboss.org>
>>  https://lists.jboss.org/mailman/listinfo/infinispan-dev
>>  <https://lists.jboss.org/mailman/listinfo/infinispan-dev>
>>
>>
>>
>> ___
>> infinispan-dev mailing list
>> infinispan-dev@lists.jboss.org
>> https://lists.jboss.org/mailman/listinfo/infinispan-dev
> 
> 

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat
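[Editor's note] To make the trade-off in this thread concrete, here is a small self-contained sketch contrasting the two styles Radim describes. The Metadata and Entry classes are minimal stand-ins invented for illustration, not the real Infinispan types:

```java
import java.util.Optional;

public class OptionalChainDemo {
    // Stand-in for a metadata type: version() is the plain getter that
    // may return null; optionalVersion() wraps it in an Optional.
    static class Metadata {
        private final String version;
        Metadata(String version) { this.version = version; }
        String version() { return version; }
        Optional<String> optionalVersion() { return Optional.ofNullable(version); }
    }

    // Stand-in for a cache entry: getMetadata() may return null.
    static class Entry {
        private final Metadata metadata;
        Entry(Metadata metadata) { this.metadata = metadata; }
        Metadata getMetadata() { return metadata; }
        Optional<Metadata> metadata() { return Optional.ofNullable(metadata); }
    }

    // Applies both styles and records what each one "used".
    static String collectVersions(Entry entry) {
        StringBuilder used = new StringBuilder();

        // Null-check style from the thread
        if (entry.getMetadata() != null && entry.getMetadata().version() != null) {
            used.append(entry.getMetadata().version());
        }

        // Chained-Optional style from the thread
        entry.metadata().flatMap(Metadata::optionalVersion).ifPresent(used::append);

        return used.toString();
    }

    public static void main(String[] args) {
        // Both styles pick up the version exactly once each
        System.out.println(collectVersions(new Entry(new Metadata("9.0"))));  // 9.09.0
        // With no metadata, both styles are silently skipped
        System.out.println(collectVersions(new Entry(null)).isEmpty());       // true
    }
}
```

Whether the Optional chain actually allocates the wrappers is exactly the question raised above; for small handlers the JIT can often eliminate them, which is what the "ends up on the stack anyway" remark refers to.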


Re: [infinispan-dev] Hot Rod secured by default

2017-03-30 Thread Tristan Tarrant


On 30/03/2017 17:31, Dennis Reed wrote:
> +1 to authentication and encryption by default.
> This is 2017, that's how *everything* should be configured.
> 
> -1 to making it easy to trust all certs.  That negates the point of
> using encryption in the first place and should really never be done.
> 
> If it's too hard to configure the correct way that we think it would
> turn users away, that's a usability problem that needs to be fixed.
Well, none of the databases I know of require you to set up client side 
truststores, so that is already a usability hurdle.

Tristan

-- 
Tristan Tarrant
Infinispan Lead
JBoss, a division of Red Hat

