You can find the article at:

http://searchsoa.techtarget.com/tip/0,289483,sid26_gci1330477,00.html?track=NL-449&ad=661968&asrc=EM_NLT_4467119&uid=5532089

Do any of you have any experience of using such a cloud for a large
SOA-based system?

<<Not that long ago, operators of a small to medium-sized Web site
had few choices when trying to balance the need to handle an
occasional surge in demand against the requirement to keep equipment
and electricity costs low. Recently, operating system virtualization
has made it easy to bring new servers online when demand increases
and to turn them off when load is low.

So-called "Cloud" computing offers another approach to handling
sudden demand. You can move all or part of your server load to the
"Cloud", pay only for the services actually used, and avoid dealing
with any extra hardware.

The Slashdot Effect
If you have ever tried to follow up an interesting news item posted
on a high-traffic news site like Slashdot and hit a completely
unresponsive site, you have encountered the Slashdot Effect. It is
simple enough to explain: hitting a site with 100 times the normal
number of requests will saturate the server.

A similar effect can occur when a popular game site releases a new
update. Servers that can handle normal game traffic slow down when
massive update transfers are running. The last time I updated Second
Life, I noticed that the update transfer was actually coming from
Amazon's Web Services, not the regular Second Life servers!

Amazon's S3 Service
Opening an account with Amazon's Web Services (AWS) gets you a unique
public account name plus a private key used to authenticate
operations. As I explained in my earlier article on using Amazon's
Simple Storage Service (S3) for backup, with an AWS account you can
create named "buckets" which can hold files. Public, private, or
read-only access to a bucket or file is minutely controlled by an
Access Control List (ACL) and request authentication. Your AWS
account is billed $0.15 per gigabyte per month for storage; downloads
are billed at $0.17 per gigabyte, with rates decreasing at terabyte
volumes.
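As a rough worked example at these rates, storing 10 gigabytes and
serving 100 gigabytes of downloads in a month comes to about
10 × $0.15 + 100 × $0.17 = $18.50.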

Public links to the Cloud
You can avoid the Slashdot Effect with your big public announcement
by putting all or part of the Web resource files in Amazon S3 buckets
with public access turned on. Here is an example URL for one of my
pictures. Note how the bucket name "wbbackup.photos" is a prefix to
the S3 service host name. It is also possible to use the bucket name
as part of the request path, as the second example shows.

http://wbbackup.photos.s3.amazonaws.com/Jan17IceStorm.jpg
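
The same picture addressed with the bucket name in the request path
would be:

http://s3.amazonaws.com/wbbackup.photos/Jan17IceStorm.jpg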

There are numerous tools, such as the Firefox S3 Organizer plugin and
the open source Java-based "Cockpit" application, that make it easy
to move files to and from S3 buckets and to control the access
allowed. Note that S3 does not let you move files between buckets or
rename them, so make sure your file and bucket names are in their
final form before uploading.

Controlled access to the Cloud
Amazon S3 provides a mechanism to move a bandwidth requirement from
your overloaded server to the cloud while still retaining control over
who gets access. Assuming you have an S3 file stored with private
access control settings, you can provide limited read-only access to
it by means of a temporary URL valid for only a short time. Your
customer can fetch the resource, but the URL cannot be passed around
and reused.

Constructing a limited-time URL
Here are the programming steps required to construct a time-limited URL:

    * Construct a string in the canonical form, incorporating the
request method, the bucket and file names, and the expiration time.
Digital signing requires that the string be formatted in exactly the
canonical sequence, since even a single character difference would
create a different signature and cause the S3 service to reject the
URL. Fortunately for developers, if the URL is rejected, the error
message shows exactly how the string should have been constructed.
Note that your system clock must closely match Amazon's.
    * Using the SHA-1 algorithm, calculate a Hash-based Message
Authentication Code (HMAC) combining your private key and the
canonical string to create a unique signature as an array of 20 bytes.
    * Using the Base64 algorithm, encode the signature bytes as a
string of printable characters. Some of these characters (+, /, and
=) are not legal in a URL, therefore the following step is necessary.
    * Use URL encoding to ensure all characters in the signature are
legal for use in a URL.
    * Finally, add the Signature attribute to the URL and send the
request. (A Java sketch of all five steps follows the list.)
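
Here is a minimal Java sketch of those five steps. It assumes an AWS
key pair (the values shown are placeholders), and Java's built-in
java.util.Base64 class stands in for the public domain base64 encoder
mentioned later in the article:

    import java.net.URLEncoder;
    import java.util.Base64;
    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;

    public class S3SignedUrl {
        // Placeholder credentials -- substitute your own AWS key pair.
        static final String ACCESS_KEY_ID = "0RZH5W7VT9RG7EHM5KR2";
        static final String SECRET_KEY = "your-private-key";

        public static String signedUrl(String bucket, String file,
                                       long expires) throws Exception {
            // Step 1: the canonical string for a plain GET request --
            // verb, empty Content-MD5 and Content-Type slots, the
            // expiration time, and the resource path.
            String toSign = "GET\n\n\n" + expires + "\n/" + bucket + "/" + file;

            // Step 2: HMAC-SHA1 over the canonical string with the
            // private key, yielding a 20-byte signature.
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(SECRET_KEY.getBytes("UTF-8"), "HmacSHA1"));
            byte[] hmac = mac.doFinal(toSign.getBytes("UTF-8"));

            // Step 3: base64-encode the 20 signature bytes.
            String signature = Base64.getEncoder().encodeToString(hmac);

            // Step 4: URL-encode so '+', '/' and '=' are legal in a
            // query string.
            String encoded = URLEncoder.encode(signature, "UTF-8");

            // Step 5: assemble the final URL and hand it to the customer.
            return "http://s3.amazonaws.com/" + bucket + "/" + file
                 + "?AWSAccessKeyId=" + ACCESS_KEY_ID
                 + "&Expires=" + expires
                 + "&Signature=" + encoded;
        }

        public static void main(String[] args) throws Exception {
            // Expire five minutes from now (Expires is in Unix seconds).
            long expires = System.currentTimeMillis() / 1000 + 300;
            System.out.println(signedUrl("wbbackup.documents", "announce.pdf",
                                         expires));
        }
    }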

The final URL includes the S3 host, bucket name, file name, public
key ID, expiration time, and signature. Here is an example (broken
across multiple lines to fit this page):

http://s3.amazonaws.com:80/wbbackup.documents/announce.pdf?
  AWSAccessKeyId=0RZH5W7VT9RG7EHM5KR2&
  Expires=1220794232&
  Signature=WwPfFSREPG0wu2BK6m6X04awcz8%3D
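
The Expires value is the expiration time as a Unix timestamp (seconds
since January 1, 1970 UTC); the 1220794232 shown here corresponds to
a date in early September 2008.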

When the S3 service responds, the response headers include a
Content-Type deduced from the file type. S3 also allows you to attach
custom header information, such as might be required by a complex web
service.
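
To see those headers, a short sketch like the following (reusing the
hypothetical S3SignedUrl helper from the earlier listing) fetches the
signed URL and prints the status and Content-Type:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HeaderCheck {
        public static void main(String[] args) throws Exception {
            long expires = System.currentTimeMillis() / 1000 + 300;
            URL url = new URL(S3SignedUrl.signedUrl("wbbackup.documents",
                                                    "announce.pdf", expires));
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // S3 reports the Content-Type associated with the stored file.
            System.out.println("Status: " + conn.getResponseCode());
            System.out.println("Content-Type: " + conn.getContentType());
            conn.disconnect();
        }
    }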

Using only standard Java library classes, a public domain base64
encoding library, and Amazon's S3 code library, I was able to
implement a simple Java servlet that creates limited-time URLs.
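
The article does not show the servlet itself; a minimal sketch of the
idea, again reusing the hypothetical S3SignedUrl helper and a
hard-coded bucket name, might look like this:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class SignedUrlServlet extends HttpServlet {
        // Redirect the caller to a freshly signed, short-lived S3 URL.
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws IOException {
            // A real servlet would validate this parameter carefully.
            String file = req.getParameter("file"); // e.g. "announce.pdf"
            long expires = System.currentTimeMillis() / 1000 + 300;
            try {
                resp.sendRedirect(S3SignedUrl.signedUrl("wbbackup.documents",
                                                        file, expires));
            } catch (Exception e) {
                resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            }
        }
    }
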
In addition to Java, the Amazon developer's site gives code examples
in many different programming languages, including Ruby, Silverlight,
Python, PHP, Perl, ColdFusion, Visual Basic, and Erlang. Amazon's S3
service is so simple to use and so inexpensive that any web site
operator facing a big increase in server demand should consider
offloading that demand to the cloud.>>

Gervas
