Chris,

This is _so_ helpful.

On 12/11/20 3:00 PM, Christopher Schultz wrote:
Rob,

On 12/11/20 15:00, Rob Sargent wrote:
> [huge snip]

Your “Job” example seems along the lines of get-it-off-the-servlet,
which again points back to my current queue handler, I think.

Yes, I think so. So let's get back to your original idea -- which I think is a good one -- to use a shared queue to manage the jobs.

Just to be clear, the servlet is going to reply to the client ASAP by saying "I have accepted this job and will do my best to complete it", or it will return an error (see below), or it will refuse a connection (see below). Sound okay so far?

[My servlet] takes the payload from the client and writes “lots” of
records in the database.  Do I want that save() call in the servlet,
or should I queue it up for some other handler? All on the same
hardware, but that frees up the servlet.
If the client doesn't care about job status information, then fire-and-forget is a reasonable methodology. You may find that at some point, they will want to get some job-status information. You could implement that later. Version 2.2 maybe?

Yeah, my clients are only visible through the AWS console currently.  Any "progress/dashboard" won't show up 'til version 2.345
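
Whenever that version does arrive, a minimal sketch of job-status tracking could look like the following -- JobRegistry and the jobId scheme are made-up names for illustration, not anything from the existing code:

  // Hypothetical registry mapping job IDs to Futures; the servlet would
  // register the submitted Future and return the jobId to the client.
  import java.util.Map;
  import java.util.UUID;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.Future;

  public class JobRegistry {
    private static final Map<String, Future<?>> JOBS = new ConcurrentHashMap<>();

    public static String register(Future<?> future) {
      String jobId = UUID.randomUUID().toString();
      JOBS.put(jobId, future);
      return jobId; // hand this back in the 200 response
    }

    public static String status(String jobId) {
      Future<?> f = JOBS.get(jobId);
      if (f == null)       return "UNKNOWN";
      if (f.isCancelled()) return "CANCELLED";
      if (f.isDone())      return "DONE";
      return "RUNNING";
    }
  }

A status servlet (or a query parameter on the existing one) could then just return status(jobId) as plain text.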


On the other hand, if you can process some of the request in a streaming way, then you can be writing to your database before your client is done sending the request payload. You can still do that with fire-and-forget, but it requires some more careful handling of the streams and stuff like that.
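
For illustration, a streaming version might look roughly like this -- the table name and batch size are placeholders, and it assumes the payload is line-oriented:

  // Sketch: batch-insert rows as they arrive, so the database is being
  // written while the client is still uploading.
  import java.io.BufferedReader;
  import java.sql.Connection;
  import java.sql.PreparedStatement;

  void streamToDb(BufferedReader in, Connection conn) throws Exception {
    try (PreparedStatement ps =
             conn.prepareStatement("INSERT INTO records(line) VALUES (?)")) {
      String line;
      int pending = 0;
      while ((line = in.readLine()) != null) {
        ps.setString(1, line);
        ps.addBatch();
        if (++pending == 1000) { // flush every 1000 rows
          ps.executeBatch();
          pending = 0;
        }
      }
      if (pending > 0) ps.executeBatch();
    }
  }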

The one thing you cannot do is retain a reference to the request (response, etc.) after your servlet's service() method ends. Well, unless you go async but that's a whole different thing which doesn't sound like what you want to do, now that I have more info.

Calling save() from the servlet would tie-up the request-processing thread until the save completes. That's where you get your 18-hour response times, which is not very HTTP-friendly.
Certainly don't want to pay for 18 EC2 hours of idle.

Avoiding calling save() from the servlet requires that you fully read the request payload before queuing the save() call, bundled up with your data, into a thread pool. (Well, there are some tricks you could use, but they are a little dirty and may not buy you much.)

In the small client (my self-made DOS), there’s only a handful of
writes, but still faster to hand that memory to a queue and let the
servlet go back to the storm.
I would make everything work the same way unless there is a compelling reason to have different code paths.

The two payloads are impls of a base class. Jackson/ObjectMapper unravels them to Type. Type.save();
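
For reference, the usual Jackson way to get that unraveling is subtype annotations on the base class; BigPayload/SmallPayload below are stand-ins for the two real impls:

  // Sketch of Jackson polymorphic deserialization to a base type.
  import com.fasterxml.jackson.annotation.JsonSubTypes;
  import com.fasterxml.jackson.annotation.JsonTypeInfo;

  @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
  @JsonSubTypes({
    @JsonSubTypes.Type(value = BigPayload.class,   name = "big"),
    @JsonSubTypes.Type(value = SmallPayload.class, name = "small")
  })
  public abstract class AbstractPayload {
    public abstract void save(); // each subtype writes its own records
  }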


That’s the thinking behind the question of accessing a ThreadPoolExecutor via JNDI.  I know my existing impl does queue jobs (so the load is greater than the capacity to handle requests).  I worry that without off-loading, Tomcat would just spin up more servlet threads and exhaust resources.  I can lose a client, but would rather not lose the server (that loses all clients...)

Agreed: rejecting a single request is preferred over the service coming down -- and all its in-flight jobs with it.

So I think you want something like this:

servlet {
  post {
    // Buffer all our input data
    long bufferSize = request.getContentLengthLong();
    if(bufferSize > Integer.MAX_VALUE || bufferSize < 0) {
      bufferSize = 8192; // Reasonable default?
    }
    ByteArrayOutputStream buffer = new ByteArrayOutputStream((int)bufferSize);

    InputStream in = request.getInputStream();
    int count;
    byte[] buf = new byte[8192];
    while(-1 != (count = in.read(buf))) {
        buffer.write(buf, 0, count);
    }

    // All data read: tell the client we are good to go
    Job job = new Job(buffer);
    try {
      sharedExecutor.submit(job); // Fire and forget

      response.setStatus(200); // Ok
    } catch (RejectedExecutionException ree) {
      response.setStatus(503); // Service Unavailable
    }
  }
}

This is working:

      protected void doPost(HttpServletRequest req, HttpServletResponse resp)
          /*throws ServletException, IOException*/ {
        lookupHostAndPort();

        try {
          ObjectMapper jsonMapper = JsonMapper.builder()
              .addModule(new JavaTimeModule()).build();
          jsonMapper.setSerializationInclusion(Include.NON_NULL);

          try {
            AbstractPayload payload =
                jsonMapper.readValue(req.getInputStream(), AbstractPayload.class);
            logger.error("received payload");
            String redoUrl = String.format("jdbc:postgresql://%s:%d/%s",
                getDbHost(), getDbPort(), getDbName(req));
            Connection copyConn = DriverManager.getConnection(redoUrl,
                getDbRole(req), getDbRole(req) + getExtension());
            payload.setConnection(copyConn);
            payload.write();
            // HERE THE CLIENT IS WAITING FOR THE SAVE.  Though there
            // can be a lot of data, COPY is blindingly fast
            resp.setContentType("text/plain");
            resp.setStatus(200);
            resp.getOutputStream().write("SGS_OK".getBytes());
            resp.getOutputStream().flush();
            resp.getOutputStream().close();
          }
          // Client can do squat at this point.
          catch (com.fasterxml.jackson.databind.exc.MismatchedInputException mie) {
            logger.error("transform failed: " + mie.getMessage());
            resp.setContentType("text/plain");
            resp.setStatus(461);
            String emsg = String.format("PAYLOAD NOT SAVED\n%s\n", mie.getMessage());
            resp.getOutputStream().write(emsg.getBytes());
            resp.getOutputStream().flush();
            resp.getOutputStream().close();
          }
        }
        catch (IOException | SQLException ioe) {
        etc }

Obviously, the job needs to know how to execute itself (making it Runnable means you can use the various Executors Java provides). Also, you need to decide what to do about creating the executor.
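
A bare-bones Runnable version might look like this -- the constructor and the deserialize-and-save body are assumptions, not prescribed here:

  // Sketch: the job owns its (fully-buffered) data and reports failures
  // to the log, since there is no client left to tell.
  public class Job implements Runnable {
    private final byte[] data;

    public Job(byte[] data) {
      this.data = data;
    }

    @Override
    public void run() {
      try {
        process(data); // e.g. ObjectMapper.readValue(...).save()
      } catch (Exception e) {
        // logger.error("job failed", e);
      }
    }

    private void process(byte[] data) { /* deserialize and save() */ }
  }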

I used the ByteArrayOutputStream above to avoid the complexity of re-scaling buffers in example code. If you have huge buffers and you need to convert to byte[] at the end, then you are going to need 2x heap space to do it. Yuck. Consider implementing the auto-re-sizing byte-array yourself and avoiding ByteArrayOutputStream.
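
If you do roll your own, a bare-bones version could be as small as this -- the point is that array() hands out the backing array directly instead of copying:

  // Sketch of a grow-only buffer that avoids the final toByteArray() copy.
  import java.util.Arrays;

  public class GrowableBuffer {
    private byte[] data;
    private int size;

    public GrowableBuffer(int initialCapacity) {
      data = new byte[Math.max(initialCapacity, 16)];
    }

    public void write(byte[] src, int off, int len) {
      if (size + len > data.length) {
        // Geometric growth; copies happen while the buffer is still
        // small-ish, rather than one full copy at the very end.
        data = Arrays.copyOf(data, Math.max(data.length * 2, size + len));
      }
      System.arraycopy(src, off, data, size, len);
      size += len;
    }

    public byte[] array() { return data; } // valid bytes are [0, size)
    public int size()     { return size; }
  }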

There isn't anything magic about JNDI. You could also put the thread pool directly into your servlet:

servlet {
  ThreadPoolExecutor sharedExecutor;
  constructor() {
    sharedExecutor = new ThreadPoolExecutor(...);
  }
  ...
}
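
Filling in that "...", one plausible bounded setup -- every number below is a guess to tune -- would be:

  // Bounded pool + bounded queue: overload surfaces as
  // RejectedExecutionException (the 503 path above) instead of
  // unbounded thread/memory growth.
  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.ThreadPoolExecutor;
  import java.util.concurrent.TimeUnit;

  ThreadPoolExecutor sharedExecutor = new ThreadPoolExecutor(
      2,                                   // core threads
      4,                                   // max threads
      60, TimeUnit.SECONDS,                // idle keep-alive
      new ArrayBlockingQueue<>(100),       // at most 100 queued jobs
      new ThreadPoolExecutor.AbortPolicy() // reject when full
  );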

Yes, I see now that the single real instance of the servlet can master the sharedExecutor.

I have reliable threadpool code at hand.  I don't need to separate the job types: in practice, all the big ones are done first; they define the small ones.  It's when I'm spectacularly successful and two (2) investigators want to use the system ...

If you want to put those executors into JNDI, you are welcome to do so, but there is no particular reason to. If it's convenient to configure a thread pool executor via some JNDI injection something-or-other, feel free to use that.

But ultimately, you are just going to get a reference to the executor and drop the job on it.
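
If you did go the JNDI route, the consuming side is just a lookup -- the resource name here is invented, and binding an ExecutorService there would need a custom resource factory:

  import java.util.concurrent.ExecutorService;
  import javax.naming.InitialContext;

  ExecutorService sharedExecutor = (ExecutorService)
      new InitialContext().lookup("java:comp/env/executor/jobs");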

Next up is SSL.  One of the reasons I must switch from my naked-socket impl.

Nah, you can do TLS on a naked socket. But I think using Tomcat embedded (or not) will save you the trouble of having to learn a whole lot and write a lot of code.
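
(For the record, the naked-socket TLS version is roughly the sketch below, assuming key material is supplied via the standard javax.net.ssl.keyStore / keyStorePassword system properties -- the certificate handling is where the real work hides:)

  import java.net.ServerSocket;
  import java.net.Socket;
  import javax.net.ssl.SSLServerSocketFactory;

  ServerSocket server =
      SSLServerSocketFactory.getDefault().createServerSocket(8443);
  Socket client = server.accept(); // TLS handshake happens on first I/O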

No thanks.
TLS should be fairly easy to get going in Tomcat as long as you already understand how to create a key+certificate.

I've made keys/certs in previous lives (not to say I understand them).  I'm waiting to hear on whether or not I'll be able to self-sign, etc.  Talking to AWS Monday about the security/HIPAA things.
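
For the self-signed case, the usual recipe is a keytool-generated keystore plus an HTTPS connector in server.xml; the paths, passwords, and port below are placeholders:

  keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 \
      -storetype PKCS12 -keystore conf/keystore.p12 -validity 365

  <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
             SSLEnabled="true" scheme="https" secure="true">
    <SSLHostConfig>
      <Certificate certificateKeystoreFile="conf/keystore.p12"
                   certificateKeystorePassword="changeit" type="RSA" />
    </SSLHostConfig>
  </Connector>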

I'm sure I'll be back, but I think I can move forward.  Much appreciated.

rjs


