> Jerry,
> On 7/19/19 13:38, Jerry Malcolm wrote:
>>>> I have had a dedicated hosted environment with WAMP and Tomcat for
>>>> over 15 years.  I'm very familiar with everything related to that
>>>> environment... apache http, mysql, dns server, the file system,
>>>> JAMES, and all of my management scripts that I've accumulated over
>>>> the years. Everything is in the same box and basically on the same
>>>> desktop. But now I have a client that has needs that are best met
>>>> in an AWS environment.
> Can you explain that in a little more depth? What is it about AWS that
> meets their needs better?
> I ask because you can provision a one-box wonder in AWS just like you
> do on a physical space with a single server. You just have to use
> remote-desktop to get into it, and then it's all the same.
> But if they want to use RDS, auto-scaling, and other Amazon-provided
> services then things can get confusing.
>> Unfortunately, that is the precise reason we need to go AWS....
>> Extremely high availability and scalability / load-balancing across
>> multiple instances.  There will need to be at least one instance running
>> at all times, even when doing maintenance/upgrades on other instances.
> So the answer to your question really depends upon what the client
> thinks they'll be getting by you taking your existing product "to the
> cloud".
>>>> I understand just enough AWS to be dangerous, which is not
>>>> much.... I do know that it's a bunch of different modules, and I
>>>> believe I lose the direct file system access.
> That heavily depends upon how you do things. You can get yourself a
> server with a disk and everything, just like you are used to doing.
>> Do you mean AWS offers a 'file server' module that I can basically
>> access directly as a drive from TC?  If so, that eases my mind a bunch. 
>> I manage and serve gigabytes of videos and photos.  I don't really want
>> a full CMS implementation.  Just want a big hard drive I can get to.
>>>> I've watched an AWS intro video and a couple of youtube videos on
>>>> setting up TC in AWS. But they always start with "now that you
>>>> have your AWS environment set up....".   I am looking for something
>>>> that explains the big picture of migrating an existing WAMP+TC to
>>>> AWS.  I am not so naive to think that there won't be significant
>>>> rip-up to what I have now. But I don't want to do unnecessary
>>>> rip-up just because I don't understand where I'm heading.
>>>> Basically, I don't know enough to know what I don't know.... But I
>>>> need to start planning ahead and learning soon if I'm going to head
>>>> off any disasters in my code where I might have played it too loose
>>>> with accessing the file system directly in my dedicated
>>>> environment.
>>>> Has anyone been down this path before and could point me to some
>>>> tutorials targeted to migrating WAMP+TC to AWS? Or possibly
>>>> hand-hold me just a little...? I'm a pretty quick learner.  I just
>>>> don't know where to start.
> As usual, start with your requirements :)
>> Requirements are what I have now in a single box, but with the addition
>> of multiple instances of TC (and HTTPD and/or mySQL?) for HA and load
>> balancing.  Day-1 launch won't be massive traffic and theoretically
>> could be handled by my single dedicated server I have today.  But if
>> this takes off like the client predicts, I don't want to get caught
>> flat-footed and have to throw together an emergency redesign to begin
>> clustering TC to handle the traffic. I'd rather go live initially with a
>> single instance AWS, but with a thought-out (and tested/verified) plan
>> to easily begin clustering when the need hits.
>> Thanks again for the info.
> -chris

There are a lot of ways to approach this. I'm not sure how much is
viable under Windows, since I've only done Linux EC2 instances.

Load balancing:

You can't do multicasting (last I checked) in a cloud environment.
You'll need to use something like redis or memcache if you need to
support sessions / load balancing without sticky sessions. I recommend
steering away from sticky sessions, because they complicate outages and
rolling upgrades.

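To make that concrete: one third-party option is the memcached-session-manager project, which can back Tomcat sessions with a memcached cluster in non-sticky mode. A minimal context.xml sketch, assuming the msm jars are already on Tomcat's classpath; the node ids and hostnames are placeholders:

```xml
<!-- conf/context.xml: non-sticky sessions backed by two memcached nodes.
     Node names and hostnames below are placeholders. -->
<Context>
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
           memcachedNodes="n1:cache1.internal:11211,n2:cache2.internal:11211"
           sticky="false"
           sessionBackupAsync="false"
           requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$" />
</Context>
```

With sticky="false", any instance can serve any request, so draining an instance for maintenance doesn't strand sessions.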

I'd look at RDS and multiple instances across availability zones. There
are some issues with fail-over and the time it takes. Look at recent AWS
forums for work-arounds.


I think that one good design (if you can't do Docker or Elastic
Beanstalk) is to place all of your tools on an EBS volume. You can mount
this on Windows (I think - works with Linux), and access all of your
services from there.

There are several advantages to this. Backups are done by taking
snapshots of unmounted disks. You basically do the following:

1. Disconnect an instance from a load balancer
2. Unmount the drive from the instance
3. Perform the snapshot command
4. Once the snapshot command returns, remount the drive
5. Add the instance back to the load balancer
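The steps above can be sketched with the AWS CLI on a Linux instance. The target-group ARN, instance id, volume id, snapshot id, and mount point below are all placeholders, so the script only prints the commands unless you set RUN=1:

```shell
# Consistent-EBS-snapshot sketch.  All ids/paths are placeholders.
# Commands are only printed unless RUN=1 is set in the environment.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi; }

TG_ARN="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app/0123456789abcdef"
INSTANCE_ID="i-0123456789abcdef0"
VOLUME_ID="vol-0123456789abcdef0"

# 1. Take the instance out of the load balancer and wait for draining
run aws elbv2 deregister-targets --target-group-arn "$TG_ARN" --targets "Id=$INSTANCE_ID"
run aws elbv2 wait target-deregistered --target-group-arn "$TG_ARN" --targets "Id=$INSTANCE_ID"

# 2. Unmount the data volume so the snapshot sees a quiescent filesystem
run umount /mnt/data

# 3. Snapshot the volume; create-snapshot returns a SnapshotId to wait on
run aws ec2 create-snapshot --volume-id "$VOLUME_ID" --description "app data backup"
run aws ec2 wait snapshot-completed --snapshot-ids snap-0123456789abcdef0

# 4. Remount the drive
run mount /mnt/data

# 5. Put the instance back into rotation
run aws elbv2 register-targets --target-group-arn "$TG_ARN" --targets "Id=$INSTANCE_ID"
```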

Server instances:

In a cloud environment, server instances are ephemeral. Be prepared to
lose a server at any time (I have). It's best to do load balancing, and
then place all of your "stuff" on EBS-mounted disks. It's
operationally nicer to do Docker or Elastic Beanstalk with
microservices, but I don't know if your software is designed that way.

Another way to handle monolithic services is to place the generic
services on the disk, and use something like Jenkins (or another CI/CD
tool) to deploy your applications to the services. Finally, keep the
versioned applications in an S3 bucket, and use the CI/CD tool to
deploy from there. Then there's less need to do traditional backups.
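That deploy step can be as small as a copy out of the bucket. A sketch, where the bucket name, version, and paths are placeholders (commands are only printed unless RUN=1):

```shell
# CI/CD deploy sketch: pull a versioned build from S3 and drop it into
# Tomcat's webapps directory.  Bucket/version/paths are placeholders;
# commands are only printed unless RUN=1 is set.
run() { if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "$@"; fi; }

VERSION="1.4.2"                       # chosen by the CI/CD tool
BUCKET="s3://example-app-releases"    # placeholder bucket name

run aws s3 cp "$BUCKET/myapp-$VERSION.war" /opt/tomcat/webapps/myapp.war
# Tomcat auto-deploys the new WAR; restart only if your setup requires it:
run sudo systemctl restart tomcat
```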

If you keep your platforms generic and deploy to those platforms, you
can make your servers part of an elastic group. Then they can be built
properly when you need them. You may need to do some CloudFormation
magic to get everything put back together properly.

The tough concept to wrap your head around when you're coming from a
traditional environment is that your infrastructure is ephemeral and
code-based. Store components in S3. Store configurations in
CloudFormation. Spread out your database in a multi-zone RDS. Do
distributed session management with redis or memcache.
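For instance, the multi-zone RDS piece can be pinned down as code in a CloudFormation template. A minimal sketch, where the engine, sizing, and names are placeholder assumptions and DbPassword is assumed to be a NoEcho parameter defined elsewhere in the template:

```yaml
Resources:
  AppDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: mysql
      MultiAZ: true                 # synchronous standby in a second AZ
      DBInstanceClass: db.t3.medium # placeholder sizing
      AllocatedStorage: "50"        # GiB, placeholder
      MasterUsername: appadmin      # placeholder
      MasterUserPassword: !Ref DbPassword  # assumes a NoEcho Parameter
```

Because the definition lives in the template, a lost or rebuilt environment comes back with the same database topology.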

Reading and piecing together AWS documentation is challenging. The
documentation is pretty good on the tool set, but pretty awful when
trying to build an entire environment. It usually takes several
iterations to get things right.

I've not even discussed remote access, two-factor authentication, and
public versus private networks.

This is way off topic from the Tomcat mailing list, but I hope that it
gets you started.

. . . just my two cents
