Thanks, Paul.

You're right.  If I had thought about it a little more and put 2 & 2 together, 
I would have realized that we don't even need SqlLine up and running, as our 
use case is simply:

Get a single Drill server running, and have a single app connect to it via JDBC 
calls.

(This app in our processing pipeline is responsible for both persisting data 
and fetching data to support a UI.)

The analyst(s) who want to play with the data already have a bunch of Python 
scripts to fetch data (from Drill, Mongo, etc) to do their stuff.

So as you were suggesting, the real issue boils down to figuring out how to get 
just the server containerized -- which I see you and others are already 
tackling.

Currently, we're going ahead with an uncontainerized Drill server (and I have 
to get back to finishing this JDBC code to make Drill calls...), so I won't 
bother you with any more questions.  But I definitely am interested in working 
with you and testing out any container solutions you come up with (ultimately 
the powers that be want everything containerized as much as possible).

Thank you again.

Ron
 
> On February 1, 2020 at 2:05 AM Paul Rogers <[email protected]> wrote:
> 
> 
> Hi Ron,
> 
> I suspect you are right about the issue being TTY related. Sounds like you 
> are bringing up SqlLine inside the pod. But, Sqlline is a command line app 
> that runs Drill in embedded mode. I wonder what Sqlline is using as stdin?
> 
> I personally have never used Docker to bring up clients; only servers. So, if 
> we can get the right Dockerfile setup, you might be better off running a 
> Drill server inside the pod, then run Sqlline from your laptop or some 
> terminal session, and have it connect to the Drill server. You can even use 
> K8s to ssh into the pod and start Sqlline there. If your shell exits, 
> Sqlline will exit, but the Drill server will continue to run.
> 
> Running the Drill server (Drillbit) in a pod might be a good idea in general. 
> Once you have your app up and running, you will want multiple people to 
> connect to your running Drill server using Sqlline (which uses JDBC), ODBC, 
> JDBC directly, etc. Also, once you have the server up, you can use a K8s 
> proxy (or a more sophisticated ingress mechanism, if you have one) to connect to 
> the Drill web console to run queries, look at query profiles, monitor Drill 
> status, etc. (K8s runs its own network layer, so for "stock" K8s, pod IPs are 
> not visible outside of K8s. Maybe OpenShift has an ingress solution built in.)
> 
> Thanks,
> - Paul
> 
>  
> 
>     On Friday, January 31, 2020, 9:21:49 PM PST, Ron Cecchini 
> <[email protected]> wrote:  
>  
>  FWIW, I found a thread on SO that addresses this OpenShift issue with 
> containers seemingly exiting for no apparent reason, even when running as 
> root.  It might be TTY related.
> 
> openshift pod fails and restarts frequently
> https://stackoverflow.com/questions/35744404/openshift-pod-fails-and-restarts-frequently/36234707#36234707
> 
> What is CrashLoopBackOff status for openshift pods?
> https://stackoverflow.com/questions/35710965/what-is-crashloopbackoff-status-for-openshift-pods
> 
> > On January 31, 2020 at 4:40 PM Ron Cecchini <[email protected]> wrote:
> > 
> > 
> > Thank you, Paul, for your in depth and informative response.
> > 
> > Here's what we did today as a test:
> > 
> > In OpenShift, we enabled allowing containers to run however they wanted and 
> > redeployed the Drill Docker.
> 
> [...]
> 
> > The result is that the container comes up - shows that the user.name is 
> > indeed root - doesn't give any errors ...
> > 
> > And then seemingly goes away, again w/o any indication of an error.  And 
> > then OpenShift goes into its crash-reboot cycle trying to restart the 
> > container, and each time we see the same thing:
>
