Did you look at the Ignite queue API link I sent?
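
For reference, that API gives you a distributed queue directly, so there is no
map-plus-sequence-key dance. A minimal sketch, with the queue name and
configuration as illustrative assumptions (signatures per the Ignite 1.7
javadoc):

    Ignite ignite = Ignition.start();
    // Capacity 0 means unbounded; the queue is backed by an Ignite cache.
    IgniteQueue<LogEvent> queue =
            ignite.queue("logEvents", 0, new CollectionConfiguration());
    queue.add(event); // behaves like a java.util.Queue, not a Map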

> On Sep 26, 2016, at 8:21 PM, Gary Gregory <garydgreg...@gmail.com> wrote:
> 
> The IgniteCache looks richer than both the stock JSR-107 Cache and Ehcache, 
> for sure: 
> https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/IgniteCache.html
> 
> I am not sure I like having to use what is basically a map with an AtomicLong 
> sequence key that I need to manage AND THEN sort the map keys, when what I 
> really want is a List or a Queue. It feels like I have to work extra hard for 
> a simple use case. What I want is a cache that behaves like a queue, not like 
> a map. Using JMS is too heavy. 
> 
> So I am still considering a Collection Appender.
> 
> Gary 
> 
>> On Mon, Sep 26, 2016 at 7:55 PM, Ralph Goers <ralph.go...@dslextreme.com> 
>> wrote:
>> Ignite is a JSR 107 cache and has some benefits over Ehcache. Ehcache 
>> requires you to set preferIPv4Stack to true for it to work. That might be a 
>> problem for your client.
>> 
>>> On Sep 26, 2016, at 7:18 PM, Gary Gregory <garydgreg...@gmail.com> wrote:
>>> 
>>> 
>>>> On Mon, Sep 26, 2016 at 6:10 PM, Gary Gregory <garydgreg...@gmail.com> 
>>>> wrote:
>>>>> On Mon, Sep 26, 2016 at 6:09 PM, Gary Gregory <garydgreg...@gmail.com> 
>>>>> wrote:
>>>>>> On Mon, Sep 26, 2016 at 5:21 PM, Ralph Goers 
>>>>>> <ralph.go...@dslextreme.com> wrote:
>>>>>> I thought you didn’t want to write to a file?
>>>>> 
>>>>> I do not, but if the buffer is large enough, log events should stay in 
>>>>> RAM. It is not quite right anyway, because I'd have to interpret the 
>>>>> contents of the file to turn them back into log events.
>>>>> 
>>>>> I started reading up on the Chronicle appender; thank you, Remko, for 
>>>>> pointing it out.
>>>>> 
>>>>> An appender to a cache of objects is really what I want, since I also want 
>>>>> to be able to evict the cache. TBC...
>>>> 
>>>> Like a JSR-107 Appender...
>>> 
>>> Looking at Ehcache and 
>>> https://ignite.apache.org/jcache/1.0.0/javadoc/javax/cache/Cache.html I can 
>>> see that a cache is always a kind of map, which leads to the question of 
>>> what the key should be.
>>> 
>>> A sequence number like we have in the pattern layout seems like a natural 
>>> choice. I could see a Jsr107Appender that tracks a sequence number as the 
>>> key. The issue is that the JSR-107 Cache interface leaves iteration order 
>>> undefined, which would force a client trying to drain a Jsr107Appender to 
>>> sort all entries before being able to serialize them, unless I can find a 
>>> list-based Cache implementation within Ehcache, for example.
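>>> 
>>> To make the extra work concrete, here is a sketch of the drain step such a 
>>> client would need (cache and event types illustrative); every drain has to 
>>> re-sort because iteration order is undefined:
>>> 
>>>     SortedMap<Long, LogEvent> sorted = new TreeMap<>();
>>>     for (Cache.Entry<Long, LogEvent> entry : cache) { // undefined order
>>>         sorted.put(entry.getKey(), entry.getValue());
>>>     }
>>>     cache.removeAll(); // note: events added during the loop could be lost
>>>     Collection<LogEvent> inOrder = sorted.values(); // append order restored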
>>> 
>>> Gary
>>> 
>>>  
>>>> 
>>>> Gary
>>>>> 
>>>>> Gary
>>>>> 
>>>>>> 
>>>>>> The Chronicle stuff Remko is linking to is also worth exploring. 
>>>>>> 
>>>>>> Ralph
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> On Sep 26, 2016, at 5:04 PM, Gary Gregory <garydgreg...@gmail.com> 
>>>>>>> wrote:
>>>>>>> 
>>>>>>> oh... what about our own 
>>>>>>> http://logging.apache.org/log4j/2.x/manual/appenders.html#MemoryMappedFileAppender
>>>>>>> 
>>>>>>> ?
>>>>>>> 
>>>>>>> Gary
>>>>>>> 
>>>>>>>> On Mon, Sep 26, 2016 at 4:59 PM, Remko Popma <remko.po...@gmail.com> 
>>>>>>>> wrote:
>>>>>>>> In addition to the Flume-based solution, here is another alternative 
>>>>>>>> idea: use Peter Lawrey's Chronicle[1] library to store log events in a 
>>>>>>>> memory-mapped file. 
>>>>>>>> 
>>>>>>>> The appender can just keep adding events without worrying about 
>>>>>>>> overflowing the memory. 
>>>>>>>> 
>>>>>>>> The client that reads from this file can be in a separate thread (or 
>>>>>>>> even a separate process) and can read as much as it wants and send it 
>>>>>>>> to the server. 
>>>>>>>> 
>>>>>>>> Serialization: You can either serialize log events to the target 
>>>>>>>> format before storing them in Chronicle (so you have binary blobs in 
>>>>>>>> each Chronicle excerpt), and the client reads these blobs and sends 
>>>>>>>> them to the server as is. Or you can use the Chronicle Log4j2 
>>>>>>>> appender[2] to store the events in Chronicle format. The tests[3] show 
>>>>>>>> how to read LogEvent objects from the memory-mapped file, and the 
>>>>>>>> client would be responsible for serializing these log events to the 
>>>>>>>> target format before sending data to the server. 
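>>>>>>>> 
>>>>>>>> Not Chronicle's own API, just a sketch of the underlying pattern in 
>>>>>>>> plain java.nio: a writer appending length-prefixed blobs to a 
>>>>>>>> memory-mapped file (the path, size, and serialize() helper are 
>>>>>>>> illustrative assumptions):
>>>>>>>> 
>>>>>>>>     FileChannel ch = FileChannel.open(Paths.get("/tmp/events.dat"),
>>>>>>>>             StandardOpenOption.CREATE, StandardOpenOption.READ,
>>>>>>>>             StandardOpenOption.WRITE);
>>>>>>>>     MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20);
>>>>>>>>     byte[] blob = serialize(event); // target-format blob, as above
>>>>>>>>     buf.putInt(blob.length);        // length prefix
>>>>>>>>     buf.put(blob);                  // the reader walks the same prefixes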
>>>>>>>> 
>>>>>>>> [1]: https://github.com/peter-lawrey/Java-Chronicle
>>>>>>>> [2]: https://github.com/OpenHFT/Chronicle-Logger
>>>>>>>> [3]: 
>>>>>>>> https://github.com/OpenHFT/Chronicle-Logger/blob/master/logger-log4j-2/src/test/java/net/openhft/chronicle/logger/log4j2/Log4j2IndexedChronicleTest.java
>>>>>>>> 
>>>>>>>> Remko
>>>>>>>> 
>>>>>>>>> On 2016/09/27, at 5:57, Gary Gregory <garydgreg...@gmail.com> wrote:
>>>>>>>>> 
>>>>>>>>> Please allow me to restate the use case I have for the 
>>>>>>>>> CollectionAppender, which is separate from any Flume-based or 
>>>>>>>>> Syslog-based solution (use cases I also have). I have a Syslog use 
>>>>>>>>> case, and whether or not Flume is in the picture will really be a 
>>>>>>>>> larger discussion in my organization due to the requirement to run a 
>>>>>>>>> Flume Agent.
>>>>>>>>> 
>>>>>>>>> A program (like a JDBC driver already using Log4j) communicates with 
>>>>>>>>> another (like a DBMS, not written in Java). The client and server 
>>>>>>>>> communicate over a proprietary socket protocol. The client sends a 
>>>>>>>>> list of buffers (in one go) to the server to perform one or more 
>>>>>>>>> operations. One kind of buffer this protocol defines is a log buffer 
>>>>>>>>> (where each log event is serialized in a non-Java format). This 
>>>>>>>>> allows each communication from the client to the server to say "This 
>>>>>>>>> is what's happened up to now." What the server does with the log 
>>>>>>>>> buffers is not important for this discussion.
>>>>>>>>> 
>>>>>>>>> What is important to note is that the log buffer and other buffers go 
>>>>>>>>> to the server in one BLOB, which is why I cannot (in this use case) 
>>>>>>>>> send log events by themselves anywhere.
>>>>>>>>> 
>>>>>>>>> I see that something (a CollectionAppender) must collect log events 
>>>>>>>>> until the client is ready to serialize them and send them to the 
>>>>>>>>> server. Once the events are drained out of the Appender (in one go, by 
>>>>>>>>> just getting the collection), events can collect in a new collection. 
>>>>>>>>> A synchronous drain operation would create a new collection and 
>>>>>>>>> return the old one, along the lines of the sketch below.
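>>>>>>>>> 
>>>>>>>>> A minimal sketch of that drain (class shape and names are not 
>>>>>>>>> settled, just an illustration):
>>>>>>>>> 
>>>>>>>>>     private final AtomicReference<Collection<LogEvent>> events =
>>>>>>>>>             new AtomicReference<>(new ConcurrentLinkedQueue<>());
>>>>>>>>> 
>>>>>>>>>     public void append(LogEvent event) {
>>>>>>>>>         // A real appender may need to snapshot mutable/reused events.
>>>>>>>>>         events.get().add(event);
>>>>>>>>>     }
>>>>>>>>> 
>>>>>>>>>     public Collection<LogEvent> drain() {
>>>>>>>>>         // Atomically swap in a fresh collection; caller gets the old one.
>>>>>>>>>         return events.getAndSet(new ConcurrentLinkedQueue<>());
>>>>>>>>>     }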
>>>>>>>>> 
>>>>>>>>> The question becomes: What kind of temporary location can the client 
>>>>>>>>> use to buffer log events until drain time? A Log4j Appender is a 
>>>>>>>>> natural place to collect log events since the driver uses Log4j. The 
>>>>>>>>> driver will make it its business to drain the appender and work with 
>>>>>>>>> the events at the right time. I am thinking that the Log4j Appender 
>>>>>>>>> part is generic enough for inclusion in Log4j. 
>>>>>>>>> 
>>>>>>>>> Further thoughts?
>>>>>>>>> 
>>>>>>>>> Thank you all for reading this far!
>>>>>>>>> Gary
>>>>>>>>> 
>>>>>>>>>> On Sun, Sep 25, 2016 at 1:20 PM, Ralph Goers 
>>>>>>>>>> <ralph.go...@dslextreme.com> wrote:
>>>>>>>>>> I guess I am not understanding your use case quite correctly. I am 
>>>>>>>>>> thinking you have a driver that is logging and you want those logs 
>>>>>>>>>> delivered to some other location to actually be written. If that is 
>>>>>>>>>> your use case then the driver needs a log4j2.xml that configures the 
>>>>>>>>>> FlumeAppender with either the memory or file channel (depending on 
>>>>>>>>>> your needs) and points to the server(s) that is/are to receive the 
>>>>>>>>>> events. The FlumeAppender handles sending them in batches of 
>>>>>>>>>> whatever size you want (but will send them in smaller batches if 
>>>>>>>>>> they sit in the channel too long). Of course you would need the 
>>>>>>>>>> log4j-flume and flume jars. So on the driver side you wouldn’t need 
>>>>>>>>>> to write anything, just configure the appender (sketched below) and 
>>>>>>>>>> make sure the jars are there.
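>>>>>>>>>> 
>>>>>>>>>> For example, something along these lines (a sketch only; the host, 
>>>>>>>>>> port, and layout are placeholders):
>>>>>>>>>> 
>>>>>>>>>>     <Flume name="FlumeLogger" compress="true">
>>>>>>>>>>         <Agent host="flume-host" port="8800"/>
>>>>>>>>>>         <RFC5424Layout enterpriseNumber="18060" includeMDC="true"
>>>>>>>>>>                        appName="MyApp"/>
>>>>>>>>>>     </Flume>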
>>>>>>>>>> 
>>>>>>>>>> For the server that receives them you would also need Flume. 
>>>>>>>>>> Normally this would be a standalone component, but it really 
>>>>>>>>>> wouldn’t be hard to incorporate it into some other application. The 
>>>>>>>>>> only thing you would have to write would be the sink that writes the 
>>>>>>>>>> events to the database or whatever. To incorporate it into an 
>>>>>>>>>> application you would have to look at the main() method of Flume and 
>>>>>>>>>> convert that to be a thread that you kick off.
>>>>>>>>>> 
>>>>>>>>>> Ralph
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> On Sep 25, 2016, at 12:01 PM, Gary Gregory <garydgreg...@gmail.com> 
>>>>>>>>>>> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> Hi Ralph,
>>>>>>>>>>> 
>>>>>>>>>>> Thanks for your feedback. Flume is great in the scenarios that do 
>>>>>>>>>>> not involve sending a log buffer from the driver itself.
>>>>>>>>>>> 
>>>>>>>>>>> I can't require a Flume Agent to be running 'on the side' for the 
>>>>>>>>>>> use case where the driver chains a log buffer at the end of the 
>>>>>>>>>>> train of database IO buffers. For completeness, talking about this 
>>>>>>>>>>> Flume scenario: if I read you right, I would also need to write a 
>>>>>>>>>>> custom Flume sink, which would also hold events in memory until the 
>>>>>>>>>>> driver is ready to drain it. Or I could query some other 'safe' and 
>>>>>>>>>>> 'reliable' Flume sink that the driver could then drain of events 
>>>>>>>>>>> when it needs to.
>>>>>>>>>>> 
>>>>>>>>>>> Narrowing down on the use case where the driver chains a log buffer 
>>>>>>>>>>> at the end of the train of database IO buffers, I think I'll have 
>>>>>>>>>>> to see about converting the Log4j ListAppender into a more robust 
>>>>>>>>>>> and flexible version. I think I'll call it a CollectionAppender and 
>>>>>>>>>>> allow various Collection implementations to be plugged in.
>>>>>>>>>>> 
>>>>>>>>>>> Gary
>>>>>>>>>>> 
>>>>>>>>>>>> On Sat, Sep 24, 2016 at 3:44 PM, Ralph Goers 
>>>>>>>>>>>> <ralph.go...@dslextreme.com> wrote:
>>>>>>>>>>>> If you are buffering events in memory you run the risk of losing 
>>>>>>>>>>>> events if something should fail. 
>>>>>>>>>>>> 
>>>>>>>>>>>> That said, if I had your requirements I would use the 
>>>>>>>>>>>> FlumeAppender. It has either an in-memory option to buffer as you 
>>>>>>>>>>>> are suggesting or it can write to a local file to prevent data 
>>>>>>>>>>>> loss if that is a requirement. It already has the configuration 
>>>>>>>>>>>> options you are looking for and has been well tested. The only 
>>>>>>>>>>>> downside is that you need to have either a Flume instance 
>>>>>>>>>>>> receiving the messages or something that can receive Flume events 
>>>>>>>>>>>> over Avro, but it is easier just to use Flume and write a custom 
>>>>>>>>>>>> sink to do what you want with the data.
>>>>>>>>>>>> 
>>>>>>>>>>>> Ralph
>>>>>>>>>>>> 
>>>>>>>>>>>>> On Sep 24, 2016, at 3:13 PM, Gary Gregory 
>>>>>>>>>>>>> <garydgreg...@gmail.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi All,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I can't believe it, but through a convoluted use case, I actually 
>>>>>>>>>>>>> need an in-memory list appender, very much like our test-only 
>>>>>>>>>>>>> ListAppender.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The requirement is as follows.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> We have a JDBC driver and matching proprietary database that 
>>>>>>>>>>>>> specializes in data virtualization of mainframe resources like 
>>>>>>>>>>>>> DB2, VSAM, IMS, and all sorts of non-SQL data sources 
>>>>>>>>>>>>> (http://www.rocketsoftware.com/products/rocket-data/rocket-data-virtualization)
>>>>>>>>>>>>>  
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The high-level requirement is to merge the driver log into the 
>>>>>>>>>>>>> server's log for full end-to-end traceability and debugging.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> When the driver is running on the z/OS mainframe, it can be 
>>>>>>>>>>>>> configured with a z/OS-specific Appender that can talk to the 
>>>>>>>>>>>>> server log module directly.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> When the driver is running elsewhere, it can talk to the database 
>>>>>>>>>>>>> via a Syslog socket Appender. This requires more setup on the 
>>>>>>>>>>>>> server side, and the server must do special magic to know how 
>>>>>>>>>>>>> the incoming log events match up with server operations. Tricky.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> The customer should also be able to configure the driver such 
>>>>>>>>>>>>> that anytime the driver communicates to the database, it sends 
>>>>>>>>>>>>> along whatever log events have accumulated since the last 
>>>>>>>>>>>>> client-server roundtrip. This allows the server to match exactly 
>>>>>>>>>>>>> the connection and operations the client performed with the 
>>>>>>>>>>>>> server's own logging.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> In order to do that, I need to buffer all log events in an 
>>>>>>>>>>>>> Appender, and when it's time, get the list of events and 
>>>>>>>>>>>>> reset the appender to a new empty list so events can keep 
>>>>>>>>>>>>> accumulating.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> My proposal is to turn our ListAppender into such an appender. 
>>>>>>>>>>>>> For sanity, the appender could be configured with various sizing 
>>>>>>>>>>>>> policies (sketched after the list):
>>>>>>>>>>>>> 
>>>>>>>>>>>>> - open: the list grows unbounded
>>>>>>>>>>>>> - closed: the list grows to a given size and _new_ events are 
>>>>>>>>>>>>> dropped on the floor beyond that
>>>>>>>>>>>>> - latest: the list grows to a given size and _old_ events are 
>>>>>>>>>>>>> dropped on the floor beyond that
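>>>>>>>>>>>>> 
>>>>>>>>>>>>> A sketch of the two bounded policies (names and the 
>>>>>>>>>>>>> synchronization strategy are illustrative, not a settled design):
>>>>>>>>>>>>> 
>>>>>>>>>>>>>     enum Policy { OPEN, CLOSED, LATEST }
>>>>>>>>>>>>> 
>>>>>>>>>>>>>     // list is a Deque<LogEvent>; maxSize and policy come from config.
>>>>>>>>>>>>>     public synchronized void append(LogEvent event) {
>>>>>>>>>>>>>         if (policy != Policy.OPEN && list.size() >= maxSize) {
>>>>>>>>>>>>>             if (policy == Policy.CLOSED) {
>>>>>>>>>>>>>                 return;             // drop the new event
>>>>>>>>>>>>>             }
>>>>>>>>>>>>>             list.removeFirst();     // LATEST: drop the oldest event
>>>>>>>>>>>>>         }
>>>>>>>>>>>>>         list.addLast(event);
>>>>>>>>>>>>>     }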
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thoughts?
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Gary
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>>> 
> 
> 
> 
