-----Original Message-----
From: Jared Bater [mailto:jba...@merlin.mb.ca] 
Sent: March-03-13 3:06 PM
To: Chris Kluka; Ian Trump
Cc: Sarah Lacroix; operationsleepywea...@lists.skullspace.ca
Subject: RE: [Operationsleepyweasel] CDC Hosting Provider Daemon Defense

Hey all,

Can everyone check their boxes of spare cables to see if you can find a 
multi-mode ST-ST fibre optic cable for the cross-connect between MERLIN and 
Daemon Defense? It only needs to be a couple of meters long but I don't seem to 
have one.


Thanks!
Jared



________________________________

From: Chris Kluka [ckl...@daemondefense.com]
Sent: Thursday, February 28, 2013 1:26 AM
To: Ian Trump
Cc: Sarah Lacroix; operationsleepywea...@lists.skullspace.ca; Jared Bater
Subject: Re: [Operationsleepyweasel] CDC Hosting Provider Daemon Defense


Well, I have spare hard drives. My immediate thought would be to configure 
three different sets of drives: 

One set of 2 drives boots the first server with Targets 1, 2 and 3 coming up.



Quite frankly...

If you let me cut the RAM a bit, I can run all 63 instances. 


If I can cut them to (respectively) 384MB, 2GB, 768MB, 1.25GB, 1.5GB, 1GB, and 
0.75GB, that's 7.625GB per set.

I can run 2 sets of instances per blade.

I have 5 blades...

I can run 10 sets of instances (the 10th set is a backup in case one set fails 
or something).
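
A quick sanity check on that packing (just a sketch - the 16GB-per-blade figure 
is inferred from the 8x 2GB blade spec further down this thread, and ESXi's own 
overhead is ignored):

    # Back-of-envelope check of the per-blade instance packing.
    # Assumes 16GB of RAM per blade (8x 2GB DIMMs); ignores ESXi overhead.
    ram_per_vm_gb = [0.375, 2, 0.75, 1.25, 1.5, 1, 0.75]  # Targets 1-7 after the cut
    set_gb = sum(ram_per_vm_gb)              # 7.625 GB per set of 7 targets
    sets_per_blade = int(16 // set_gb)       # 2 sets fit in a 16GB blade
    total_sets = sets_per_blade * 5          # 10 sets across 5 blades
    total_instances = total_sets * 7         # 70 instances, comfortably >= 63
    print(set_gb, sets_per_blade, total_sets, total_instances)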




I will need (if anyone has any) 6x 32GB USB keys (to install VMware ESXi on).


I will provide: 11x 146GB 10k RPM disks.




I will start by installing 1 USB key and 1 hard drive into one of the blades. I 
will install ESXi on the USB key and then get the 7 targets loaded twice each 
onto the 146GB drive (14 targets per blade). 


I will then clone the hard drive 10 times, yielding 11 identical hard drives. I 
will also clone the USB key 5 times, yielding 6 identical USB keys.  


I will then stash 1 of the hard drives and 1 of the USB keys in my desk, to be 
preserved as clone source machines for future use. 


I will then install 1 USB key and 1 hard drive into each of the other 4 blades.


I will have 5 remaining hard drives. These hard drives will be 
"Swap-The-Disks-And-Reboot-To-Reset" disks. If any of the machines gets 
corrupted or fails or whatever, I'll just pull the hard disk, put in a new one, 
reboot it, and then re-image the pulled disk from my original in my desk. 
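
For the re-image step, any sector-level copy does the job. A minimal sketch of 
the idea in Python (the device paths are hypothetical placeholders - in 
practice I'd just boot a live CD and use dd, and writing to the wrong device 
will destroy it):

    # Raw block-device copy, roughly what "dd if=SRC of=DST bs=4M" does.
    # WARNING: the paths below are placeholders; double-check them, because
    # writing to the wrong device wipes it. Requires root on Linux.
    SRC = "/dev/sdX"   # preserved clone-source disk (hypothetical)
    DST = "/dev/sdY"   # pulled disk being re-imaged (hypothetical)
    CHUNK = 4 * 1024 * 1024  # copy 4 MiB at a time

    with open(SRC, "rb") as src, open(DST, "wb") as dst:
        while True:
            buf = src.read(CHUNK)
            if not buf:       # end of source device
                break
            dst.write(buf)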




I will test this logic by doing the same setup on the 9th, using 3 USB keys, 2 
blades, and 5 hard drives. I see no reason why this method shouldn't scale to 5 
blades. 







________________________________

From: "Ian Trump" <itr...@octopitech.com>
To: "Chris Kluka" <ckl...@daemondefense.com>, "Jared Bater" 
<jba...@merlin.mb.ca>
Cc: "Sarah Lacroix" <sarahlacroixmu...@gmail.com>, 
operationsleepywea...@lists.skullspace.ca
Sent: Thursday, February 28, 2013 12:10:48 AM
Subject: RE: [Operationsleepyweasel] CDC Hosting Provider Daemon Defense

Sarah:

The episode of When IT Architects Attack continues... 

Cian: Exactly!

Chris/Nathan:

Overview:

This is the high level: teams of about 5 students show up and try to secure 
systems in a timed competition. We use Nessus to score each machine before the 
competition starts and again after time expires. The teams get points for 
fixing as many vulnerabilities as they can, and there is an opportunity to earn 
bonus points on each machine.
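
For anyone curious how the scoring works mechanically, it boils down to diffing 
the before/after scans. A rough sketch (the file names and point weights here 
are made up for illustration; only the .nessus v2 layout is real):

    # Count vulnerabilities by severity in a Nessus v2 export, then score a
    # machine as the weighted drop between the "before" and "after" scans.
    # File names and point weights are illustrative, not our actual rubric.
    import xml.etree.ElementTree as ET

    WEIGHTS = {4: 10, 3: 5, 2: 2, 1: 1}  # critical, high, medium, low

    def weighted_count(nessus_file):
        root = ET.parse(nessus_file).getroot()
        return sum(WEIGHTS.get(int(item.get("severity", "0")), 0)
                   for item in root.iter("ReportItem"))

    before = weighted_count("target1_before.nessus")
    after = weighted_count("target1_after.nessus")
    print("points earned:", max(before - after, 0))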

What we need:

It would be great if we could provide Targets 1 through 3 live for each team, 
all at the same time. With 3 teams, that translates into 9 VMs running at any 
given time. For the May 10th event, we need capacity to run 27 VMs at the same 
time. If we run into a capacity issue we could drop to 6 or 18 VMs at one time 
- whatever your hardware can handle.

All the systems below have a number of vulnerabilities: test viruses, 
out-of-date software, ridiculous configurations, unused services, services 
installed in a non-secure fashion, security features crippled or disabled, 
insane configuration choices, corrupted registries, and hacked hosts files. 
Targets 4-7 will have non-persistent spyware, "ahem" some custom, 
non-replicating malware, and my personal favorite: a hidden scheduled task 
that, shortly after a reboot, starts uninstalling patches and tries to slowly 
email (and erase) every file on the machine, starting with the documents share 
- replicating an insider attack I was actually investigating back in the day. 
Basically a sysadmin's nightmare. Each machine also has a hidden forensic 
challenge, ranging from something simple to something with an insane level of 
difficulty.  

The first number indicates the number of instances required on Saturday the 
9th of March; the number in brackets indicates the estimate required for the 
event on May 10th, 2013:

Target 1: 3 (9) Instances of a Windows 2000 Professional Workstation, 512MB RAM, 4GB Hard Drive
Target 2: 3 (9) Instances of a Windows 2003 SBS Server, 4GB of RAM, 25GB Hard Drive
Target 3: 3 (9) Instances of a Windows XP, 1GB of RAM, 25GB Hard Drive
Target 4: 3 (9) Instances of a Windows 2008 Server, 2GB of RAM, 20GB Hard Drive
Target 5: 3 (9) Instances of a Windows 7 Pro, 2GB of RAM, 15GB Hard Drive
Target 6: 3 (9) Instances of a Windows Vista Workstation, 1.5GB of RAM, 20GB Hard Drive
Target 7: 3 (9) Instances of a promised Linux Target, ? GB of RAM, ? GB Hard Drive (I'm looking at you, Alex)

Jared:

The challenge on the networking side is to isolate each team's VM environment 
from the other teams'. Right now our architecture has all the VMs in a single 
network, which allows for all manner of shenanigans between teams (fun, but 
unproductive).

Ideally, we need Team 1 to be able to reach only Team 1's Targets 1-3 in the AM, 
then Targets 4-7 in the PM, plus the Internet.
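
If it helps, here is that intent in sketch form (just an illustration - the 
VLAN numbering is invented, and the real design is whatever Jared decides): one 
VLAN per team, with the AM/PM target swap, and routing only to the Internet, 
never between teams.

    # Hypothetical per-team isolation plan; VLAN IDs and names are invented.
    teams = [1, 2, 3]
    plan = {
        t: {
            "vlan": 100 + t,                                    # e.g. Team 1 -> VLAN 101
            "am_targets": [f"Target {n}" for n in range(1, 4)],  # Targets 1-3
            "pm_targets": [f"Target {n}" for n in range(4, 8)],  # Targets 4-7
            "routes_to": ["Internet"],                          # no inter-team routes
        }
        for t in teams
    }
    for t, cfg in plan.items():
        print(f"Team {t}: VLAN {cfg['vlan']}, AM {cfg['am_targets']}, PM {cfg['pm_targets']}")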

In terms of the OS to host the VMs, I would go with whatever Cian and you 
(Chris) are most comfortable with. Cian stepped up and helped set up the 
targets in the "shittastic" VM environment, and monitored the VM performance 
(crappy as it was, through no fault of his - we were way under-resourced for 
what we want to do).

So, as you can see, the security mousetrap we built should not require 
ridiculous resources (I hope).

Ian

 -----Original Message-----
From: operationsleepyweasel-boun...@lists.skullspace.ca 
[mailto:operationsleepyweasel-boun...@lists.skullspace.ca] On Behalf Of Cian 
Whalley
Sent: February-27-13 10:24 PM
To: Chris Kluka
Cc: operationsleepywea...@lists.skullspace.ca; Sarah Lacroix; Jared Bater
Subject: Re: [Operationsleepyweasel] CDC Hosting Provider Daemon Defense

"Security competition"

We teach them how not to get hacked - i.e., best practices in securing systems.

That said: I totally agree. I wouldn't want this stuff anywhere near a legit 
system.

On 2013-02-27 10:16 PM, "Chris Kluka" <ckl...@daemondefense.com> wrote:


        ... are you @ 135 innovation right now?
        
        
________________________________

        From: "Chris Kluka" <ckl...@daemondefense.com>
        To: operationsleepywea...@lists.skullspace.ca
        Cc: "Sarah Lacroix" <sarahlacroixmu...@gmail.com>, "Jared Bater" 
<jba...@merlin.mb.ca>, "Cian Whalley" <c...@somewhere.ca>
        Sent: Wednesday, February 27, 2013 10:16:26 PM
        Subject: Re: [Operationsleepyweasel] CDC Hosting Provider Daemon Defense
        
        
        I won't be using the Nexus gear for this :P

        In fact, I'll not be connecting these blades to our network in ANY WAY. 

        I have a spare HP 2510G-48 and some 1Gbps SFP multimode optics modules 
(if you have any single-mode optics modules those could work too, or we could 
track down a media converter or something). 

        I am going to have a completely separate managed switch for this. The 
blades will not be able to communicate with Daemon Defense's network in any 
fashion. For all intents and purposes, they will be stand-alone rackmount 
servers plugged into power with a single network cable and a KVM. I have 
removed the Flex-10 mezzanine cards from these blades so that they can in no 
way ever communicate with our production gear.

        Now, with that said, I just need some kind of NAT router so I can plug 
these into the WAN side of our networking, outside our firewall and/or DMZ. But 
I want to do it outside our VLAN layer-2 security, since I wouldn't count VLAN 
hopping outside the realm of the possible for someone who is competing in a 
hacking competition :P



________________________________

        From: "Chris Kluka" <ckl...@daemondefense.com>
        To: "Jared Bater" <jba...@merlin.mb.ca>
        Cc: operationsleepywea...@lists.skullspace.ca, "Sarah Lacroix" 
<sarahlacroixmu...@gmail.com>, "Cian Whalley" <c...@somewhere.ca>
        Sent: Wednesday, February 27, 2013 10:10:22 PM
        Subject: Re: [Operationsleepyweasel] CDC Hosting Provider Daemon Defense
        
        
        Yes, our gear is @ 135 Innovation... for now. It's going to be moved to 
the Bomber stadium in approx. 6 weeks, at which point we'll have to figure 
something else out, but we will have 3x 100Mbps symmetric connections there by 
then. I'm contemplating getting a pair of Ruckus Wireless 7731 wireless-N 
bridges and making a 500-meter, 190Mbps wireless link over there. 

        Either way, for all practical purposes for you, the servers are located 
in the server room @ 135 Innovation. 

________________________________

        From: "Jared Bater" <jba...@merlin.mb.ca>
        To: "Ian Trump" <itr...@octopitech.com>, "Chris Kluka" 
<ckl...@daemondefense.com>
        Cc: "Cian Whalley" <c...@somewhere.ca>, 
operationsleepywea...@lists.skullspace.ca, "Sarah Lacroix" 
<sarahlacroixmu...@gmail.com>
        Sent: Wednesday, February 27, 2013 10:00:21 PM
        Subject: RE: CDC Hosting Provider Daemon Defense
        
        
        Chris,
        
        If your gear is at 135 Innovation Drive, we may be able to arrange for 
some SMF between MERLIN and Daemon Defense for this. I'm pretty sure we have at 
least 4 strands between us and the telco room. We have a 10Gbps link between 
our office at 135 Innovation and our network in rm 625 Engineering 3 on the 
main campus.
        
        Oh, and you had me at "Nexus".
        
        /jared
        
        
________________________________

        From: Ian Trump [itr...@octopitech.com]
        Sent: Wednesday, February 27, 2013 9:47 PM
        To: Chris Kluka
        Cc: Cian Whalley; operationsleepywea...@lists.skullspace.ca; Jared 
Bater; Sarah Lacroix
        Subject: Re: CDC Hosting Provider Daemon Defense
        
        
        Excellent, I'll get a document on what we need to run, Cian can tell me 
how we can run it. Chris can tell us how we can set it up in his environment 
and Jared can figure out how we can connect it.

        Ball is in my court.

        Ian
        
        Sent from my iPhone

        On 2013-02-27, at 21:42, "Chris Kluka" <ckl...@daemondefense.com> wrote:
        
        

                TBA. I'd like to have a discussion about the architecture 
before we start throwing software around. 



________________________________

                From: "Cian Whalley" <c...@somewhere.ca>
                To: "Chris Kluka" <ckl...@daemondefense.com>
                Cc: operationsleepywea...@lists.skullspace.ca, "Jared Bater" 
<jba...@merlin.mb.ca>, "Sarah Lacroix" <sarahlacroixmu...@gmail.com>, "Ian E. 
Trump" <itr...@octopitech.com>
                Sent: Wednesday, February 27, 2013 9:39:07 PM
                Subject: Re: CDC Hosting Provider Daemon Defense
                
                Yeah man, that's actually more than enough for us - no SAN 
needed! As Ian said, we are so grateful for anything beyond the bubblegum-and-
duct-tape solution we managed to throw together so far :) 

                In that case, Jared has the ESXi image and my license... is 
there a convenient time he can throw it on there?
                
                
                
                
                --
                
                Cian Whalley | technology leader, entrepreneur, friend of the 
people
                (o) 204-958-1458 | (c) 204-792-4045 | (e) c...@somewhere.ca | 
(t) @cian_ca

                "A pessimist sees the difficulty in every opportunity. An 
optimist sees the opportunity in every difficulty." - Winston Churchill



                On Wed, Feb 27, 2013 at 9:12 PM, Chris Kluka 
<ckl...@daemondefense.com> wrote:
                

                        Ok, well, just to be clear, I'm not giving you 32 
blades, 1280 threads, and 8TB of RAM to start out with :P 

                        Realistically, I was going to start with 2x dual 
quad-core blades we are not making use of. They both have:

                        2x quad-core Xeon with hyperthreading, 64-bit, 3.0GHz
                        8x 2GB RAM
                        1x 1Gbps network
                        2x 146GB 10k serial SCSI
                        P210 RAID controllers (RAID 0, 1, JBOD)


                        It will give us 32 logical cores (16 physical cores), 
32GB of RAM, and ~600GB of hard disk. 

                        I do not have any available SAN resources to throw at 
this :(

                        I have 5 more of these blades powered off and not in 
use right now, but I think 2 should be good to get a realistic feel for how 
many resources are needed for this. I also don't want to get into the situation 
where I offer these blades for our use and then the next week we take on a new 
client and need to format/reinstall them and put them into production.

                        I can say with a fairly high confidence level that 4 of 
these blades are unlikely to see other use in the next year, because we do not 
have the software licensing to turn them on with our "production" 
virtualization. 



________________________________

                        From: "Cian Whalley" <c...@somewhere.ca>
                        To: "Ian E. Trump" <itr...@octopitech.com>
                        Cc: operationsleepywea...@lists.skullspace.ca, "Jared 
Bater" <jba...@merlin.mb.ca>, "Sarah Lacroix" <sarahlacroixmu...@gmail.com>, 
"Chris Kluka" <ckl...@daemondefense.com>
                        Sent: Wednesday, February 27, 2013 8:43:14 PM
                        Subject: Re: Fwd: CDC Hosting Provider Daemon Defense 



                        That is ridiculously cool. Ian, we're hooked up, man - 
this is all we could ever need. Our VMs won't even be a rounding error in that 
environment. Now what to do with that Dell....

                        Side note: is all the SAN/vCenter already set up, and 
you are just spinning up VMs for us?

                        Are you going to give us a virtual ESXi environment so 
we can manage our own VMs, or do we need to spec each one out?

                        On 2013-02-27 8:00 PM, "Ian Trump" 
<itr...@octopitech.com> wrote:
                        

                                Chris,

                                OMG!!! That is some serious heavy metal. It 
sounds to me like you have a lot of capacity, if we can get onto a couple of 
your blades. It's truly generous of you to allow us access for the CDC.

                                Jared, Cian: please review the digest. I'll 
provide details on what we need to run on the similar setup Daemon Defense will 
provide. Look for an email from me by tomorrow.
                                
                                Sent from my iPhone

                                Begin forwarded message:
                                
                                

                                        From: Chris Kluka 
<ckl...@daemondefense.com>
                                        Date: 27 February, 2013 19:37:37 CST
                                        To: Ian Trump <itr...@octopitech.com>
                                        Subject: Re: CDC Hosting Provider 
Daemon Defense
                                        
                                        

                                

                                        I thought you might like to take a look 
at this, Ian. It's not at all related, but it should give you a slightly better 
idea of my day-to-day design tasks. 

                                        I spent the last 4 hours making this 
2-page diagram. It illustrates how to connect a pair of Cisco Nexus 7000 
switches (acting as core switching) with Cisco Nexus 5000 switches (acting as 
top-of-rack switches) and two HP C7000 blade centers.

                                        Total connectivity from each C7000 to 
the Nexus 7000 switches is 160Gbps, using active/active port-channel trunking 
between the blades and the B22HPs, 80Gbps fabric-extender links between the 
Nexus 5000s and the B22HPs, and 2x 10Gbps dual-port network adapters in each 
blade. The whole configuration takes 24U of rackmount space and gives 32 blades 
a 40Gbps-per-blade theoretical throughput limit at only a 4:1 oversubscription 
ratio.
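
                                        (For anyone checking the 4:1 figure, a 
rough sketch assuming 16 blades per C7000 enclosure: 16 blades x 40Gbps = 
640Gbps of edge bandwidth, against 160Gbps of uplink per enclosure, and 
640/160 = 4:1.)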
                                        
                                        
                                        Each blade has dual 10-core Xeon 
processors at 2.86GHz (with hyperthreading) and 256GB of RAM. The whole cluster 
has 32 nodes, 1280 logical threads, 8TB of RAM, and 360Gbps of aggregate 
bandwidth. 
                                        
                                        
                                        
                                        
                                        Toys :)
                                        
                                        
                                        
                                        
                                        Side note: this is not the cluster I'll 
be providing you resources from, but the architecture is the same, just... 
well... a fraction of a fraction of the resources :)
                                        
                                        
                                        -- Chris
                                        
                                        
                                        
                                        

________________________________

                                        From: "Ian Trump" 
<itr...@octopitech.com>
                                        To: "Chris Kluka" 
<ckl...@daemondefense.com>, "Jared Bater" <jba...@merlin.mb.ca>
                                        Cc: "brian cameron" 
<brian.came...@lrsd.net>, "Kerry Augustine [kine...@mymts.net]" 
<kine...@mymts.net>, kka...@seccuris.com, "Paul Unger" <paul.un...@gwl.ca>, 
"Cian Whalley" <c...@somewhere.ca>, sarahlacroixmu...@gmail.com, "Nathan Wild" 
<nathan.w...@gmail.com>, operationsleepywea...@lists.skullspace.ca
                                        Sent: Wednesday, February 27, 2013 
5:20:16 PM
                                        Subject: CDC Hosting Provider Daemon 
Defense
                                        
                                        

                                        Hi Chris,

                                         

                                        Thanks for taking my call today. We are 
super excited and absolutely in your debt for allowing us to use your VM 
infrastructure to host our CDC challenge instances; our first event date is 
Saturday 10 March 13. As you related to me at SkullSpace, your technical 
specifications, comprising several blade servers, should exceed our 
expectations and requirements. I'll send technical specifications later tonight 
on what we need to run in your environment; it would be great if you could 
reply to me, Jared, and Cian on what you're able to let us use.

                                         

                                        Our challenge will be to connect the 
great networking infrastructure Jared from MERLIN has created into your VM 
environment to provide the connectivity for our challenge. His email is above 
and his phone number is (204) 791-5855. I'm absolutely delighted to welcome you 
(and your infrastructure). Please give Jared a call so he can get a head start 
on the networking side. Let me know if we need to run a temporary fibre between 
Daemon Defense and MERLIN; we have some MTS sponsorship!

                                         

                                        On my telephone call I related that we 
have received some support from Dell and will receive a server from them. The 
server specified may comfortably provision five or six teams, but for a larger 
competition we will probably need infrastructure, processing, memory, and disk 
I/O that can only be provided by blade servers and SAN infrastructure like you 
have.

                                         

                                        The desire of the CDC is to provide - 
one day - a National High School Cyber Defence Competition with teams from 
across Canada all competing.

                                         

                                        Welcome, it's going to be a crazy ride.

                                         

                                        Ian

                                         

                                          







        _______________________________________________
        SkullSpace Operationsleepyweasel Mailing List
        Help: 
http://www.skullspace.ca/wiki/index.php/Mailing_List#Operationsleepyweasel








_______________________________________________
SkullSpace Discuss Mailing List
Help: http://www.skullspace.ca/wiki/index.php/Mailing_List#Discuss
Archive: https://groups.google.com/group/skullspace-discuss-archive/
