I covered the basics of cracking hashes using Hashcat in an earlier post, and had been meaning to play around with Hashtopolis ever since, but never really got to it. Until now, that is.
Hashtopolis is an open source platform built on top of Hashcat for cracking password hashes in a distributed manner. For each large hash cracking task, it chops up the work and distributes each chunk to a separate system with its own hash cracking resources (ideally GPGPUs). Needless to say, this might be a bit redundant for a single cracking tower.
Although not directly affiliated with any of these distributed cracking solutions, the Hashcat wiki does mention Hashtopolis, HashView (less active development), and CrackLord (seemingly dead for about seven years now). There seem to be a few more such projects on GitHub, but from what I can tell, Hashtopolis is still the most actively developed.
Installing the server was fairly straightforward. I used Docker, and this is the compose file (official example here):
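For reference, a minimal sketch along the lines of the official example might look like the following; the image tags, ports, and mount paths here are assumptions, so consult the official compose file for the canonical version. It wires the server to a MySQL container using the variables from the .env file shown below:

```yaml
# Sketch only: image names, ports, and paths are assumptions;
# see the official Hashtopolis docker-compose example for specifics.
services:
  db:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASS}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - ./db:/var/lib/mysql
  hashtopolis:
    image: hashtopolis/server:latest
    depends_on:
      - db
    ports:
      - "8080:80"
    env_file: .env
    volumes:
      - ./hashtopolis:/usr/local/share/hashtopolis
      # Optionally mount existing centralized storage for wordlists/rules:
      # - /mnt/storage:/mnt/storage:ro
```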
ℹ️ If you prefer, or already have, centralized storage elsewhere for all your wordlists, rulesets, etc., you might mount that storage as a Docker volume (see the volume mounts in the above compose file). Hashtopolis is keen on ingesting everything locally, but I found that symlinks work well enough to work around this and avoid unnecessary duplication. More on that later.
And the accompanying .env file:
```
MYSQL_ROOT_PASS=verysecurerootpassword
MYSQL_DATABASE=hashtopolis
MYSQL_USER=hashtopolis
MYSQL_PASSWORD=verysecureuserpassword
HASHTOPOLIS_ADMIN_USER=myuser
HASHTOPOLIS_ADMIN_PASSWORD=myverysecurepassword
HASHTOPOLIS_DB_HOST=db
HASHTOPOLIS_APIV2_ENABLE=0
HASHTOPOLIS_BACKEND_URL=http://localhost:8080/api/v2
```

Once started (`docker compose up -d`), open the web portal (e.g. http://localhost:8080) and log in.
Time to add our wordlists and rulesets.
Go to Files.
From there, wordlists and rulesets can be added by uploading them via the web portal, by providing a URL, or by staging files in the import directory.
Especially for larger files, I'd go with staging them in the import directory, which can now be found at hashtopolis/import on your host.
Do mind selecting either the Wordlists tab or the Rules tab at the top first, so the files are imported as the corresponding type.
Once imported, the files are moved to the hashtopolis/files directory.
ℹ️ As mentioned before, if you already have separate storage for all your wordlists and rulesets, creating symlinks is a good enough workaround to avoid duplication. When importing, Hashtopolis only moves the symlink itself from the import dir to the files dir, so keep a close eye on those link paths. Once the link is moved to the files dir, it should still resolve to the actual file, which is why I've used a relative path. Repeat this for any and all files you wish to import without ingesting them into Hashtopolis directly.
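To make the relative-path trick concrete, here's a sketch with made-up paths (adjust to your own storage layout). The link target is relative, so it still resolves after Hashtopolis moves the link from the import dir into the files dir, because both sit at the same depth:

```shell
# Hypothetical layout: central storage next to the hashtopolis dir.
mkdir -p storage/wordlists hashtopolis/import hashtopolis/files
printf 'password\nletmein\n' > storage/wordlists/example.txt

# Relative symlink: two levels up from hashtopolis/import/ (and also
# from hashtopolis/files/) lands back at the directory holding storage/.
ln -s ../../storage/wordlists/example.txt hashtopolis/import/example.txt

# Simulate the import: Hashtopolis moves only the link into files/.
mv hashtopolis/import/example.txt hashtopolis/files/example.txt
cat hashtopolis/files/example.txt
```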
Now we might want to enroll our first cracking agent: a separate system with its own cracking resources (e.g. GPGPUs).
From the web portal go to Agents → New Agent.
From there you can download the agent client (hashtopolis.zip), which that separate system runs to receive new jobs and return cracked hashes.
Also create a new voucher there, which we will use to enroll the agent.
Create a new Hashtopolis working directory on the agent system, e.g. ~/hashtopolis, and create a new config.json file there:
```json
{
  "url": "http://192.168.1.1:8080/api/server.php",
  "voucher": "randomvouchercreatedbyhashtopolis"
}
```

Then place the agent client hashtopolis.zip next to it.
From here, running the agent can be as simple as executing the client directly via Python: `python hashtopolis.zip`.
Or, perhaps, a systemd user service (e.g. at ~/.config/systemd/user/hashtopolis-agent.service):

```ini
[Unit]
Description=Hashtopolis Agent

[Service]
Type=simple
WorkingDirectory=%h/hashtopolis
ExecStart=/usr/bin/python hashtopolis.zip
Restart=on-failure

[Install]
WantedBy=default.target
```

And starting it via `systemctl --user start hashtopolis-agent`.
Or even run the agent as a Docker container (official example here):

ℹ️ As mentioned before, if you already have separate storage for all your wordlists and rulesets, you may also create symlinks for wordlists and rulesets on the agents themselves, to avoid the same duplication. Keep a close eye on those link paths, and also see the volume mounts in the server compose file above. The only difference here is that, on the agents, you don't need to place them in an import dir; you can place them directly in the files dir.
Repeat the above to add more agents.
Let’s import a hashlist to start cracking hashes for. Go to Lists → New hashlist. From there either upload, paste, or import a hashlist (e.g. potfile) from the import dir.
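If you just want to sanity-check the pipeline before throwing a real list at it, a tiny throwaway hashlist of raw MD5 hashes (Hashcat mode 0) can be generated from a few known words; the filename here is made up:

```shell
# Build a small test hashlist: one unsalted MD5 hash per line.
for w in password letmein hunter2; do
  printf '%s' "$w" | md5sum | cut -d' ' -f1
done > test-hashes.txt
# First line is the MD5 of "password".
cat test-hashes.txt
```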
Once uploaded we can proceed to creating our first task. Go to Tasks → New Task. From there you can configure plenty. For now let’s just create a simple task using a wordlist and a ruleset we imported. On the right-hand side you can select which imported wordlist you wish to use. Once you’ve made your selection, click on the Rule tab above to also select a ruleset.
Notice the Command line field on the left updates each time you select a resource on the right.
This is basically showing the arguments to run Hashcat with on each client.
The #HL# part is the chosen hashlist in the Hashlist field, so leave that be.
Then you can add some arguments of your own to run Hashcat with, such as `-O -w 3` in `-O -w 3 -a 0 #HL# rockyou2021.txt.gz -r OneRuleToRuleThemStill.rule`, to use optimised kernels (`-O`) and a specific workload profile (`-w 3`).
Next assign a priority above 0 before you hit Create task to automatically start cracking using your available agents.
For more information, see the official documentation. Perhaps I'll cover Preconfigured Tasks and Supertasks at a later time. I also have not yet been able to configure notifications correctly, which also seems to be poorly documented, sadly. A simple curl request would've been nice, but no luck so far. If I ever figure it out, I'll update this article.
In any case, this should be enough to get started. Have fun cracking!