Compare commits

...

730 commits

Author SHA1 Message Date
Clinton Hall
bfbf1fb4c1 support for qbittorrent v5.0 (#2001)
* support for qbittorrent v5.0

* Remove py3.8 tests

* Add py 3.13 tests

* Update mediafile.py for Py3.13

* Create filetype.py

* Update link for NZBGet
2024-11-08 07:29:55 +13:00
Clinton Hall
470f611240 Merge branch 'master' into nightly 2024-04-26 12:12:53 +12:00
Clinton Hall
97df874d36 log 'class not added' at debug level 2024-04-26 12:09:41 +12:00
Matt Park
e9fbbf540c added global ignore flag for bytecode cleanup
Resolves #1867
2024-04-26 12:09:41 +12:00
Clinton Hall
39f5c31486 fix warnings (#1990) 2024-04-26 12:09:41 +12:00
Clinton Hall
cbc2090b0b always return imdbid and dirname 2024-04-26 12:09:41 +12:00
Clinton Hall
cc109bcc0b Add Python 3.12 and fix Radarr handling (#1989)
* Added Python3.12 and future 3.13

* Fix Radarr result handling

* remove py2.7 and py3.7 support
2024-04-26 12:09:41 +12:00
Matt Park
4c512051f7 Update movies.py
Check for an updated dir_name in case IMDB id was appended.
2024-04-26 12:09:41 +12:00
Matt Park
0c564243c2 Update identification.py
Return updated dir_name if needed
2024-04-26 12:09:41 +12:00
Clinton Hall
9ea322111c log 'class not added' at debug level 2024-03-20 14:23:32 +13:00
Matt Park
e14bc6c733 added global ignore flag for bytecode cleanup
Resolves #1867
2024-03-05 10:59:37 +13:00
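A minimal sketch of what a global bytecode-suppression flag can look like in Python, assuming the intent of #1867 is to stop `.pyc` files from being written in the first place; the project's actual flag name and wiring are not shown here.

```python
import sys

# When set before further imports, CPython skips writing .pyc bytecode,
# so there is nothing for a later bytecode cleanup pass to remove.
sys.dont_write_bytecode = True
```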
Clinton Hall
27df8a4d8e
fix warnings (#1990) 2024-03-01 18:25:19 +13:00
Clinton Hall
b7d6ad8c07
always return imdbid and dirname 2024-02-29 07:01:23 +13:00
Clinton Hall
f98d6fff65
Add Python 3.12 and fix Radarr handling (#1989)
* Added Python3.12 and future 3.13

* Fix Radarr result handling

* remove py2.7 and py3.7 support
2024-02-28 15:47:04 +13:00
Clinton Hall
b802aca7e1
Merge pull request #1982 from MattPark/last-resort-movie-id
Last resort movie identification
2023-12-16 09:32:35 +13:00
Matt Park
836df51d14
Update movies.py
Check for an updated dir_name in case IMDB id was appended.
2023-10-02 15:15:01 -04:00
Matt Park
c6292d5390
Update identification.py
Return updated dir_name if needed
2023-10-02 15:13:36 -04:00
Clinton Hall
558970c212
Merge pull request #1980 from clinton-hall/nightly
Merge Nightly
2023-08-10 21:23:42 +12:00
Clinton Hall
38c628d605
Merge pull request #1979 from clinton-hall/clinton-hall-patch-1
Remove Py2.7 tests
2023-08-10 21:14:47 +12:00
Clinton Hall
029b58b2a6
Remove Py2.7 tests
This is no longer supported in azure pipelines.
2023-08-10 21:09:25 +12:00
Clinton Hall
2885461a12
Merge pull request #1978 from clinton-hall/remove_group
Initialize remove_groups #1973
2023-08-10 21:01:31 +12:00
Clinton Hall
ad73e597e4
Initialize remove_groups #1973
This parameter was not being loaded and therefore was ignored.
2023-08-09 22:50:25 +12:00
clinton-hall
6c2f7c75d4 update to v 12.1.12 2023-07-03 17:41:15 +12:00
clinton-hall
95e22d7af4 Merge branch 'master' into nightly 2023-07-03 17:21:31 +12:00
kandarz
e72c0b9228
Add 'dvb_subtitle' codec to list of ignored codecs when using 'mov_text' (#1974)
Add 'dvb_subtitle' codec to list of ignored codecs when using 'mov_text'. DVB subtitles are bitmap-based.
2023-07-03 16:59:24 +12:00
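A short sketch of the rule this commit describes, assuming ffprobe-style codec names; the set membership here is illustrative, not the project's exact ignore list.

```python
# Bitmap subtitle formats cannot be transcoded to the text-based mov_text
# codec, so they are skipped rather than converted.
BITMAP_SUBTITLE_CODECS = {'dvd_subtitle', 'dvb_subtitle', 'hdmv_pgs_subtitle'}

def convertible_to_mov_text(codec_name):
    return codec_name not in BITMAP_SUBTITLE_CODECS
```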
Clinton Hall
c4cc554ea1 update to sonarr api v3 2023-05-22 22:51:28 +12:00
Labrys of Knossos
3078da31af Fix posix_ownership. 2023-05-22 22:51:28 +12:00
Labrys of Knossos
1fdfd128ba Add comments. 2023-05-22 22:51:28 +12:00
Labrys of Knossos
d3100f6178 Add database permissions logging upon failed access. 2023-05-22 22:51:28 +12:00
Clinton Hall
01bb239cdf
Merge pull request #1969 from clinton-hall/Sonarr-apiv3
update to sonarr api v3
2023-05-22 22:43:15 +12:00
Clinton Hall
d0b555c251
update to sonarr api v3 2023-04-18 20:59:28 +12:00
Labrys of Knossos
0c5f7be263
Merge pull request #1955 from clinton-hall/permitted
Fix permissions for posix and add comments
2023-01-01 06:03:08 -05:00
Labrys of Knossos
19d9e27c43 Fix posix_ownership. 2022-12-31 22:26:19 -05:00
Labrys of Knossos
1046c50778
Merge pull request #1954 from clinton-hall/permitted
Add database permissions logging upon failed access.
2022-12-31 18:34:20 -05:00
Labrys of Knossos
2c2d7f24b1 Add comments. 2022-12-31 18:21:33 -05:00
Labrys of Knossos
6e52bb2b33 Add database permissions logging upon failed access. 2022-12-31 17:56:38 -05:00
Clinton Hall
bd9c91ff5e
Merge pull request #1936 from clinton-hall/nightly
update to V12.1.11
2022-12-12 20:24:01 +13:00
Clinton Hall
b8482bed0e
Remove Py3.6 tests.
No longer available for pipeline tests.
2022-12-12 20:18:18 +13:00
Labrys of Knossos
2b6a7add72
Merge pull request #1919 from clinton-hall/hello-friend
Add new Python versions to tests.
2022-12-02 22:32:51 -05:00
Labrys of Knossos
55c1091efa Add new Python versions to classifiers. 2022-12-02 22:25:50 -05:00
Labrys of Knossos
8b409a5716 Add new Python versions to tests. 2022-12-02 22:25:37 -05:00
Labrys of Knossos
9307563ab8
Merge pull request #1910 from clinton-hall/bumpversion
Fixes bumpversion configuration
2022-12-02 21:10:27 -05:00
Labrys of Knossos
69e1c4d22e Bump version: 12.1.10 → 12.1.11 2022-12-02 21:00:32 -05:00
Labrys of Knossos
8a5c8c0863 Fix bumpversion fails with FileNotFoundError
The `README.md` file was moved to the `.github` folder in commit 742d482 and merged in clinton-hall/nzbToMedia#1574.

Additionally the version number was removed from `README.md` in commit 8745af2.

Fixes clinton-hall/nzbToMedia#1909
2022-12-02 20:59:08 -05:00
Clinton Hall
18ac3575ba
Merge pull request #1907 from clinton-hall/vendor
Update vendored libraries
2022-12-03 13:38:44 +13:00
Labrys of Knossos
5e3641ac23 Updated decorator to 4.4.2 2022-12-01 17:34:33 -05:00
Labrys of Knossos
fb6011f88d Updated stevedore to 2.0.1 2022-11-29 01:47:46 -05:00
Labrys of Knossos
f1624a586f Updated importlib-metadata to 2.1.3 2022-11-29 01:35:03 -05:00
Labrys of Knossos
684cca8c9b Updated more-itertools to 5.0.0 2022-11-29 01:26:47 -05:00
Labrys of Knossos
1aff7eb85d Updated zipp to 2.0.1 2022-11-29 01:21:38 -05:00
Labrys of Knossos
f05b09f349 Updates vendored subliminal to 2.1.0
Updates rarfile to 3.1
Updates stevedore to 3.5.0
Updates appdirs to 1.4.4
Updates click to 8.1.3
Updates decorator to 5.1.1
Updates dogpile.cache to 1.1.8
Updates pbr to 5.11.0
Updates pysrt to 1.1.2
Updates pytz to 2022.6
Adds importlib-metadata version 3.1.1
Adds typing-extensions version 4.1.1
Adds zipp version 3.11.0
2022-11-29 00:44:49 -05:00
Labrys of Knossos
d8da02cb69 Updates vendored setuptools to 44.1.1 2022-11-29 00:44:48 -05:00
Labrys of Knossos
3a2e09c26e Updates python-qbittorrent to 0.4.3 2022-11-29 00:44:48 -05:00
Labrys of Knossos
968ec8a1d8 Update vendored beautifulsoup4 to 4.11.1
Adds soupsieve 2.3.2.post1
2022-11-29 00:44:48 -05:00
Labrys of Knossos
2226a74ef8 Update vendored guessit to 3.1.1
Updates python-dateutil to 2.8.2
Updates rebulk to 2.0.1
2022-11-29 00:44:48 -05:00
Labrys of Knossos
ebc9718117 Update vendored requests-oauthlib to 1.3.1 2022-11-29 00:44:48 -05:00
Labrys of Knossos
501be2c479 Update vendored requests to 2.25.1
Updates certifi to 2021.5.30
Updates chardet to 4.0.0
Updates idna to 2.10
Updates urllib3 to 1.26.13
2022-11-29 00:44:48 -05:00
Labrys of Knossos
56c6773c6b Update vendored beets to 1.6.0
Updates colorama to 0.4.6
Adds confuse version 1.7.0
Updates jellyfish to 0.9.0
Adds mediafile 0.10.1
Updates munkres to 1.1.4
Updates musicbrainzngs to 0.7.1
Updates mutagen to 1.46.0
Updates pyyaml to 6.0
Updates unidecode to 1.3.6
2022-11-29 00:44:48 -05:00
Labrys of Knossos
5073ec0c6f Update vendored pyxdg to 0.28 2022-11-29 00:44:47 -05:00
Labrys of Knossos
aed4e9261c Update vendored configobj to 5.0.6
Updates vendored six to 1.16.0
2022-11-29 00:44:47 -05:00
Labrys of Knossos
b1cefa94e5 Update vendored windows libs 2022-11-29 00:44:47 -05:00
Labrys of Knossos
f61c211655 Fix .gitignore for pyd binary files 2022-11-29 00:44:47 -05:00
Labrys of Knossos
78ed3afe29 Merge branch 'processing' into nightly
* processing:
  Add `processor` folder to folder structure
  Streamline `core.processor.nzbget.parse_status`
  Streamline `core.processor.nzbget._parse_unpack_status`
  Streamline `core.processor.nzbget._parse_health_status`
  Extract health status parsing from `core.processor.nzbget.parse_status` -> `_parse_health_status`
  Extract unpack status parsing from `core.processor.nzbget.parse_status` -> `_parse_unpack_status`
  Streamline `core.processor.nzbget._parse_par_status`
  Extract par status parsing from `core.processor.nzbget.parse_status` -> `_parse_par_status`
  Streamline `core.processor.nzbget._parse_total_status`
  Extract total status parsing from `core.processor.nzbget.parse_status` -> `_parse_total_status`
  Streamline `core.processor.nzbget.check_version`
  Streamline `core.processor.nzbget.parse_failure_link`
  Streamline `core.processor.nzbget.parse_download_id`
  Standardize processing
  Extract version checks from `core.processor.nzbget.process` -> `check_version`
  Extract status parsing from `core.processor.nzbget.process` -> `parse_status`
  Extract failure_link parsing from `core.processor.nzbget.process` -> `parse_failure_link`
  Extract download_id parsing from `core.processor.nzbget.process` -> `parse_download_id`
  Standardize processing
  Merge legacy sab parsing with 0.7.17+
  Extract manual processing from `nzbToMedia.main` -> `core.processor.manual`
  Extract sabnzb processing from `nzbToMedia.main` -> `core.processor.sabnzbd`
  Extract nzbget processing from `nzbToMedia.main` -> `core.processor.nzbget`
  Refactor `nzbToMedia.process` -> `core.processor.nzb.process`
2022-11-29 00:36:48 -05:00
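An illustrative sketch of the extract-and-compose pattern this merge applies to `core.processor.nzbget`: helper names mirror the commit messages, while the status values and return convention are assumptions for illustration only.

```python
def _parse_par_status(par_status):
    # Assumed convention: a non-zero par status signals a par-check problem.
    return par_status == 0

def _parse_unpack_status(unpack_status):
    # Assumed convention: a non-zero unpack status signals an unpack problem.
    return unpack_status == 0

def parse_status(par_status, unpack_status):
    # process() stays small: each concern lives in its own helper.
    ok = _parse_par_status(par_status) and _parse_unpack_status(unpack_status)
    return 0 if ok else 1
```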
Labrys of Knossos
c85ee42874 Add processor folder to folder structure 2022-11-29 00:35:40 -05:00
Labrys of Knossos
34236e8960 Streamline core.processor.nzbget.parse_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
7737d0c4be Streamline core.processor.nzbget._parse_unpack_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
c34159d881 Streamline core.processor.nzbget._parse_health_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
efee5c722b Extract health status parsing from core.processor.nzbget.parse_status -> _parse_health_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
11adb220d8 Extract unpack status parsing from core.processor.nzbget.parse_status -> _parse_unpack_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
8e96d17537 Streamline core.processor.nzbget._parse_par_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
ab006eefb2 Extract par status parsing from core.processor.nzbget.parse_status -> _parse_par_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
e5ea34b569 Streamline core.processor.nzbget._parse_total_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
fc2ebeb245 Extract total status parsing from core.processor.nzbget.parse_status -> _parse_total_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
d7c6a8e1cc Streamline core.processor.nzbget.check_version 2022-11-29 00:35:40 -05:00
Labrys of Knossos
d11dda8af8 Streamline core.processor.nzbget.parse_failure_link 2022-11-29 00:35:40 -05:00
Labrys of Knossos
9cc92ddd7b Streamline core.processor.nzbget.parse_download_id 2022-11-29 00:35:40 -05:00
Labrys of Knossos
3e676f89a5 Standardize processing 2022-11-29 00:35:40 -05:00
Labrys of Knossos
49af821bcb Extract version checks from core.processor.nzbget.process -> check_version 2022-11-29 00:35:40 -05:00
Labrys of Knossos
de06d45bb0 Extract status parsing from core.processor.nzbget.process -> parse_status 2022-11-29 00:35:40 -05:00
Labrys of Knossos
0a8e8fae9f Extract failure_link parsing from core.processor.nzbget.process -> parse_failure_link 2022-11-29 00:35:40 -05:00
Labrys of Knossos
a2b2e4f620 Extract download_id parsing from core.processor.nzbget.process -> parse_download_id 2022-11-29 00:35:40 -05:00
Labrys of Knossos
e8f5dc409a Standardize processing 2022-11-29 00:35:40 -05:00
Labrys of Knossos
637020d2bf Merge legacy sab parsing with 0.7.17+ 2022-11-29 00:35:40 -05:00
Labrys of Knossos
528cbd02cd Extract manual processing from nzbToMedia.main -> core.processor.manual 2022-11-29 00:35:40 -05:00
Labrys of Knossos
58c998712f Extract sabnzb processing from nzbToMedia.main -> core.processor.sabnzbd 2022-11-29 00:35:40 -05:00
Labrys of Knossos
073b19034b Extract nzbget processing from nzbToMedia.main -> core.processor.nzbget 2022-11-29 00:35:34 -05:00
Labrys of Knossos
7a3c2bc8a5 Refactor nzbToMedia.process -> core.processor.nzb.process 2022-11-29 00:30:50 -05:00
Labrys of Knossos
ce65ef20c6 Add Python 3.11 end-of-life 2022-11-29 00:27:18 -05:00
Clinton Hall
7436ba7716
Merge pull request #1896 from clinton-hall/nightly
Nightly
2022-08-18 16:31:35 +12:00
Clinton Hall
382675e391
Merge pull request #1895 from redhat421/rem_fix
Switch `rem_id` to a Set to prevent duplicates.
2022-08-09 10:25:29 +12:00
Nick Austin
c639fc1cf9
Switch to set for rem_id. 2022-08-04 23:46:37 -07:00
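Why the switch to a set prevents duplicates, in a minimal sketch; the loop and IDs are illustrative, `rem_id` is the name from the PR.

```python
rem_id = set()
for torrent_id in ['a1b2', 'c3d4', 'a1b2']:
    rem_id.add(torrent_id)  # adding a repeat ID is a no-op for a set

assert rem_id == {'a1b2', 'c3d4'}
```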
Clinton Hall
d23c2c2d3a
Merge pull request #1893 from clinton-hall/nightly
Fix issue with no Require_lan set #1856 (#1892)
2022-07-15 17:51:33 +12:00
Clinton Hall
a886350bea
Fix issue with no Require_lan set #1856 (#1892)
Thanks @BradKollmyer
2022-07-15 17:41:38 +12:00
Clinton Hall
084e404b92
Merge pull request #1891 from clinton-hall/nightly
Nightly
2022-07-15 09:24:28 +12:00
Clinton Hall
a0bccb54cc
Req lan1 (#1890)
* Multiple Req_Lan
2022-07-15 09:02:12 +12:00
Jingxuan He
566e98bc78
Fix a bug about wrong order of function arguments (#1889) 2022-06-17 07:46:51 +12:00
Clinton Hall
d956cd2b75
Updated SiCKRAGE SSO URL (#1886) (#1887)
Co-authored-by: echel0n <echel0n@sickrage.ca>
2022-06-07 10:49:17 +12:00
echel0n
7936c2c92b
Updated SiCKRAGE SSO URL (#1886) 2022-06-07 10:43:13 +12:00
clinton-hall
2766938921 V12.1.10 for merge 2022-01-01 14:11:11 +13:00
Clinton Hall
686d239ce5
Python 3.10 (#1868)
* Add Py 3.10 #1866

* Add tests for Python 3.10

* update Babelfish
2021-12-03 18:48:04 +13:00
Clinton Hall
684cab5c8a
Add Support for Radarr V4 #1862 (#1863) 2021-11-16 16:08:17 +13:00
Clinton Hall
48154d0c3c
Update URL for x264 #1860 (#1861)
* Update URL for x264 #1860
* Use Ubuntu-latest in Pipelines (16.04 image removed from Pipelines)
2021-11-10 10:52:33 +13:00
Clinton Hall
162143b1cd
Media lan check (#1856)
* Add require_lan

#1853
2021-10-11 07:16:00 +13:00
Clinton Hall
36eddcfb92
Updates to Syno Auth #1844 2021-08-26 18:03:32 +12:00
Clinton Hall
411e70ba92
Fix fork recognition when defined in cfg. #1839 (#1842) 2021-08-13 06:52:47 +12:00
Clinton Hall
8b8fda6102
Syno api version detection (#1841)
* Get max api version for login. #1840
2021-08-12 22:14:00 +12:00
Clinton Hall
4103a7dc05
Fix auto-fork detection (#1839)
* Fix Fork Detection when parameters not exact match. #1838

* Fix logging of detected fork. #1838

* Fix SickGear fork detection #1838
2021-08-10 21:32:06 +12:00
clinton-hall
f9dde62762 update to v12.1.09 for merge 2021-07-17 21:44:16 +12:00
Clinton Hall
213f1f6f10
Radarr api-v3 changes (#1834)
#1831
2021-06-09 07:27:58 +12:00
Clinton Hall
30a69d6e37
Remove State from Radarr api return 2021-06-07 21:51:36 +12:00
Clinton Hall
2280f8dee8
Update Radarr api version (#1833) 2021-06-07 15:45:32 +12:00
p0ps
ee060047b7
Check for apikey when fork=medusa-apiv2 is used. (#1828)
#1827
2021-05-07 22:00:54 +12:00
Clinton Hall
e3efbdbaee
Add subs renaming for radarr/sonarr (#1824)
* Re-added rename_subs #1823 #768
2021-04-10 19:37:32 +12:00
Christoph Stahl
6ccc4abc18
Use Response.text instead of Response.content (#1822)
`content` returns a bytes object; `text` returns a string object. The latter can be split by the string `\n`; the former cannot, which leads to an exception.
2021-03-21 10:26:33 +13:00
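The distinction the commit body describes, shown directly; the values below stand in for what `Response.content` and `Response.text` return.

```python
content = b'line one\nline two'   # what Response.content returns (bytes)
text = 'line one\nline two'       # what Response.text returns (str)

lines = text.split('\n')          # fine: str splits on a str separator
try:
    content.split('\n')           # TypeError: bytes.split() needs a bytes separator
except TypeError:
    pass                          # the exception this fix avoids
lines_from_bytes = content.split(b'\n')  # bytes need a bytes separator
```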
Clinton Hall
0329cc4f98
Fix missing title when release_id (#1820) 2021-03-08 15:10:06 +13:00
clinton-hall
d64bd636d2 fix removal of duplicate parameters. 2021-02-26 20:36:03 +13:00
clinton-hall
c9e06eb555 allow new params for SickChill. 2021-02-26 20:25:47 +13:00
Henry
623e619534
Added chmod to 644 for subtitles (#1817)
I ran into problems with permissions.
By default, subliminal writes files with 0600 permissions.
2021-02-21 22:13:05 +13:00
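A minimal sketch of the fix described above; the temp file stands in for a subtitle saved by subliminal.

```python
import os
import tempfile

# Stand-in for a subtitle that subliminal saved with restrictive 0600 permissions.
handle, subtitle_path = tempfile.mkstemp(suffix='.en.srt')
os.close(handle)
os.chmod(subtitle_path, 0o600)

# The fix: open the file up so media servers running as another user can read it.
os.chmod(subtitle_path, 0o644)
```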
p0ps
06d91c6928
Pymedusa (#1815)
* Add wait_for as a valid option for pyMedusa

* Add docs.

* doc

* wrong section
2021-02-18 14:30:51 +13:00
p0ps
c2eaa72a2c
Fix other sickbeard forks erroring. (#1814)
* Update SickBeard section with is_priority param for medusa.

* Add param type to medusa-apiv2 fork.

* Extract param only when not a fork_obj
* Directly return process_result from api_call()

* Implemented classes for PymedusaApiV1 and PymedusaApiv2.

* improve linting
2021-02-17 20:31:08 +13:00
Clinton Hall
f48812eccd
Fix other sickbeard forks erroring. (#1813)
Co-authored-by: p0psicles <rogier@headshots.nl>
2021-02-15 21:28:25 +13:00
p0ps
6a6b25fece
Medusa apiv2 (#1812)
* add fork Medusa-apiV2

* Added classes for sickbeard (base) and PyMedusa.

* refactored part of the forks.py code -> InitSickBeard class.

* Add .vscode to gitignore

* Further refactor forks.py -> sickbeard.py

* Working example for pyMedusa when fork is 'medusa' (no api key)

* fix import for Py2

Co-authored-by: clinton-hall <fock_wulf@hotmail.com>
2021-02-15 15:02:15 +13:00
echel0n
0acf78f196
Added dedicated SiCKRAGE section with API version and SSO login support (#1805)
Added migration code to migrate SickBeard section with fork sickrage-api to new SiCKRAGE section
2021-01-13 13:16:41 +13:00
clinton-hall
9d64c2f478 Update to V12.1.08 2020-12-14 20:34:26 +13:00
clinton-hall
40548fa670 add configobj 2020-11-18 22:07:09 +13:00
clinton-hall
aded4e796e add updated configobj 2020-11-18 22:05:40 +13:00
Clinton Hall
d4d5f00a18
Single file downloads with clean name #1789 (#1791) 2020-10-24 18:25:35 +13:00
Clinton Hall
bf05f1b4e7
Bypass for manual execution (#1788)
* no_status_check prevents additional checks.
#1192
#1778
2020-10-16 22:55:41 +13:00
Clinton Hall
de81037d15
Py3.9 (#1787)
* Add Py3.9 support
2020-10-16 13:51:55 +13:00
Clinton Hall
a96f07c261
No status change error suppression (#1786) 2020-10-15 21:59:43 +13:00
clinton-hall
4c33b5574b Merge branch 'master' into nightly 2020-09-23 16:17:57 +12:00
Clinton Hall
b9c3ccb71d
Merge Nightly (#1783)
* Add Failed to SickGear fork detection (#1772)

* Fix for failed passed as 2,3 from SAB (#1777)

* Fix DB import (#1779)

* Sqlite3.row handling fix
* Fix import error in Python3

* make nzbToWatcher.py executable. #1780

* Update to V12.1.07 (#1782)
2020-09-23 16:08:32 +12:00
Clinton Hall
0833bf1724
Update to V12.1.07 (#1782) 2020-09-23 16:01:35 +12:00
Clinton Hall
f92f8f3952
Add .gz support (#1781)
#1715
2020-09-23 15:40:54 +12:00
clinton-hall
c21fa99bd7 make nzbToWatcher.py executable. #1780 2020-09-23 14:24:47 +12:00
Clinton Hall
beecb1b1a0
Fix DB import (#1779)
* Sqlite3.row handling fix
* Fix import error in Python3
2020-09-19 21:53:01 +12:00
Clinton Hall
a359691515
Fix for failed passed as 2,3 from SAB (#1777) 2020-09-18 16:12:56 +12:00
Clinton Hall
d1fe38b0b2
Add Failed to SickGear fork detection (#1772) 2020-09-12 12:36:49 +12:00
clinton-hall
b3dc118b52 Merge branch 'dev' 2020-09-08 10:43:12 +12:00
clinton-hall
8c8ea0f6fe Merge branch 'nightly' into dev 2020-09-08 10:42:34 +12:00
clinton-hall
b3388f959d update to version 12.1.06 2020-09-08 10:40:57 +12:00
Clinton Hall
2dfdc69487
log error when migrating #850 (#1768)
Debug logging can't be displayed until the config is loaded to enable it!
So log at error level to get details of the migration fault.
2020-08-26 20:27:38 +12:00
clinton-hall
f10fa03159 Use params for auto fork. #1765 2020-08-15 19:24:52 +12:00
Jelle Breuer
7f8397b516
Added missing ffmpeg settings to nzbToRadarr and nzbToNzbDrone (#1757) 2020-07-23 22:01:50 +12:00
Clinton Hall
850ba6dcea
Fix auto detection of forks. #1738 2020-04-23 10:07:16 +12:00
clinton-hall
f5e4ec0981 Merge branch 'dev' 2020-04-17 11:29:06 +12:00
clinton-hall
54534c4eed Merge branch 'nightly' into dev 2020-04-17 11:28:16 +12:00
clinton-hall
5fb3229c13 update to version 12.1.05 2020-04-17 11:26:57 +12:00
Clinton Hall
b409279254
fix py2 handling #1725 2020-03-09 06:55:13 +13:00
Clinton Hall
001f754cd3
Fix unicode check in Py2 #1725 (#1727) 2020-03-08 13:36:26 +13:00
Clinton Hall
58a6b2022b
Fix dictionary changed size. #1724 (#1726) 2020-03-08 13:35:21 +13:00
Clinton Hall
c037387fc3
always use cmd type for api. #1723 2020-03-03 12:34:02 +13:00
Clinton Hall
f8de0c1ccf
fix api check 2020-03-02 21:56:59 +13:00
Clinton Hall
4facc36e3f
fix return for incorrect command. 2020-03-02 21:38:10 +13:00
Clinton Hall
a233db0024
SickGear 403 fix (#1722)
403 from SickGear #1704
2020-03-02 18:19:56 +13:00
cheese1
c18fb17fd8
fix typos (#1714) 2020-01-29 12:53:52 +13:00
Clinton Hall
2a96311d6f
Qbittorrent patch 1 (#1711)
qBittorrenHost to qBittorrentHost (#1710)

Co-authored-by: boredazfcuk <boredazfcuk@hotmail.co.uk>
2020-01-24 23:05:16 +13:00
Clinton Hall
11f1c2ce3f
Update Syno Default port. #1671 2020-01-21 14:34:35 +13:00
Clinton Hall
0fa2a80bf6
Fix Syno Parser #1671 2020-01-21 14:32:36 +13:00
Clinton Hall
b793ce7933
Syno ds patch 1 (#1702)
* Add Syno DS parsing #1671
as per https://forum.synology.com/enu/viewtopic.php?f=38&t=92856
* add config guidance
* add syno client
2020-01-13 21:26:21 +13:00
Clinton Hall
0827c5bafe
add SABnzbd environment variable handling. #1689 (#1701) 2020-01-13 21:17:33 +13:00
clinton-hall
5a6837759d Merge branch 'dev' 2020-01-13 21:02:33 +13:00
clinton-hall
25528f8e7b Merge branch 'nightly' into dev 2020-01-13 21:01:48 +13:00
clinton-hall
43312fc642 update to v12.1.04 2020-01-13 21:00:12 +13:00
Clinton Hall
6861b9915e
fix empty dir_name #1673 (#1700) 2020-01-13 20:40:46 +13:00
Clinton Hall
bbc8f132c3
fixed typo #1698 2020-01-09 14:18:23 +13:00
Clinton Hall
b8784d71dd
Fix Json returned from Sonarr and Lidarr (#1697) 2020-01-08 07:03:11 +13:00
clinton-hall
f2c07f3c38 fix encoding checks 2020-01-05 13:39:23 +13:00
clinton-hall
71c435ba48 fix encoding issue with python3 #1694 2020-01-05 12:22:23 +13:00
clinton-hall
a320ac5a66 Merge branch 'dev' 2020-01-04 22:36:11 +13:00
clinton-hall
1cca1b7c06 Merge branch 'nightly' into dev 2020-01-04 22:35:36 +13:00
clinton-hall
6d647a2433 update to v 12.1.03 2020-01-04 22:34:42 +13:00
Clinton Hall
a5e76fc56f
Py2fix (#1693)
* Update encoding to use bytes for strings. (#1690)
* fix ffmpeg install issues for test
Co-authored-by: Jonathan Springer <jonpspri@gmail.com>
2020-01-04 22:01:13 +13:00
Clinton Hall
aeb3e0fd6d
Deluge update to V2 (#1683) Fixes #1680 2019-12-10 12:55:13 +13:00
clinton-hall
2e7d4a5863 Merge branch 'dev' 2019-12-08 14:44:16 +13:00
clinton-hall
9111f815f9 Merge branch 'nightly' into dev 2019-12-08 14:43:41 +13:00
clinton-hall
feb4e36c4c update to v12.1.02 2019-12-08 14:42:59 +13:00
clinton-hall
cbd0c25c88 Merge branch 'nightly' into dev 2019-12-08 14:37:25 +13:00
Clinton Hall
75ecbd4862
Add Submodule checks (#1682) 2019-12-08 14:35:15 +13:00
Clinton Hall
d95e4e56c8
remove redundant json.loads #1671 (#1681) 2019-12-08 12:31:46 +13:00
Clinton Hall
0d7c59f1f0
Remove Encode of directory #1671 (#1672) 2019-11-13 18:32:03 +13:00
Clinton Hall
fdaa007756
Don't write byte code (#1669) 2019-11-10 09:38:48 +13:00
Clinton Hall
5cd449632f
Py3.8 (#1659)
* Add Python3.8 and CI Tests
* Force testing of video in case ffmpeg not working
2019-11-08 14:13:07 +13:00
Clinton Hall
70ab7d3d61
Add Watcher3 Config (#1667)
* Set NZBGet config #1665
2019-11-04 13:17:38 +13:00
Clinton Hall
fde8714862
Update all qBittorrent WebAPI paths for client v4.1.0+ (#1666) 2019-11-04 12:28:35 +13:00
Sergio Cambra
c92588c3be fix downloading subtitles, no provider was registered (#1664) 2019-11-04 12:10:20 +13:00
Sergio Cambra
1814bd5ae1 add watcher3 integration (#1665) 2019-11-04 12:05:00 +13:00
Clinton Hall
80ef0d094e
Fix autofork fallback. #163 2019-09-19 20:47:13 +12:00
clinton-hall
46b2e8998c update to v12.1.01 2019-08-13 18:40:15 +12:00
clinton-hall
96f086bdc1 update to v12.1.01 2019-08-13 18:39:26 +12:00
clinton-hall
77f34261fa update to v12.1.01 2019-08-13 18:38:05 +12:00
Clinton Hall
e738727c52
Force status from SABnzbd to be integer. #1646 #1647 (#1648) 2019-08-10 19:35:50 +12:00
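A sketch of the coercion this commit applies, assuming SABnzbd hands the post-processing status over as a string; variable names are illustrative.

```python
status = '2'  # SABnzbd can pass failure codes such as 2 or 3 as text

# Comparing the raw string misclassifies results ('0' != 0 is True in Python),
# so force an integer before testing.
failed = int(status) != 0
assert failed is True
```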
clinton-hall
e165bbcefc Merge v12.1.00 2019-08-06 13:19:19 +12:00
clinton-hall
ccfc3c1703 Merge V12.1.00 2019-08-06 13:18:15 +12:00
clinton-hall
dc5d43b028 update to version 12.1.00 2019-08-06 13:16:25 +12:00
clinton-hall
35c65254e7 Merge branch 'nightly' into dev 2019-08-06 13:07:50 +12:00
Clinton Hall
bde5a15f66
Fixes for user_script categories (#1645)
Fixes for user_script categories. #1643
2019-08-06 09:04:45 +12:00
Clinton Hall
5714540949
Fix uTorrent with Python3 (#1644)
* Remove temp workaround for Microsoft Azure python issues.
2019-08-02 13:02:46 +12:00
Clinton Hall
9d05d6c914
Merge pull request #1640 from clinton-hall/imdb-boundary-1
Add word boundary to imdb match. #1639
2019-07-23 14:45:06 +12:00
Clinton Hall
d7eab5d2d3
Add word boundary to imdb match. #1639
Prevents matching (and truncating) longer ids.
Thanks @currently-off-my-rocker
2019-07-23 14:24:31 +12:00
Clinton Hall
745bad3823
Merge pull request #1639 from currently-off-my-rocker/imdb-ids-8-digits
identify imdb ids with 8 digits
2019-07-23 09:00:51 +12:00
currently-off-my-rocker
5a18ee9a27
identify imdb ids with 8 digits 2019-07-22 13:07:09 +02:00
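The two commits above combine into a pattern like the following sketch: accept 7- or 8-digit IMDb ids, with word boundaries so a longer id is never matched (and truncated) short. The project's exact regex may differ.

```python
import re

imdb_id = re.compile(r'\btt\d{7,8}\b')

assert imdb_id.search('Movie.tt0111161.1080p').group() == 'tt0111161'
assert imdb_id.search('Movie.tt10111161.mkv').group() == 'tt10111161'  # 8 digits stay whole
```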
Clinton Hall
8ba8caf021
Fix3 (#1637)
* add singular fork detection for multiple runs. Fixes #1637
* Add newly identified fork variants #1630 #1637
* remove encoding of paths. #1637 #1582
2019-07-22 12:35:01 +12:00
Clinton Hall
f21e18b1bf
Fix2 (#1636)
add changed api handling for SickGear. Fixes #1630
2019-07-16 14:36:00 +12:00
Clinton Hall
9a958afac8
don't crash when no optionalParameters. Fixes #1630 (#1632) 2019-07-12 19:39:55 +12:00
Clinton Hall
95e4c70d9a Set theme jekyll-theme-cayman 2019-07-09 15:05:33 +12:00
Clinton Hall
9f6c068cde
Transcode patch 1 (#1627)
* Add Piping of stderr to capture transcoding failures. #1619
* Allow passing absolute nice command. #1619
* Change .cfg description for niceness
* Fix errors due to VM packages out of date (ffmpeg)
* Fix Sqlite import error on tests
* Fix Azure issues

https://developercommunity.visualstudio.com/content/problem/598264/known-issue-azure-pipelines-images-missing-sqlite3.html
2019-06-20 12:56:02 +12:00
Ubuntu
ce50a1c27d Fix already-running handling for Python3. #1626 2019-06-19 21:37:42 +00:00
clinton-hall
f1dc672056 fix deluge client for python3. Fixes #1626 2019-06-19 22:50:36 +12:00
clinton-hall
d39a7dd234 fix to make deluge client py 2 and 3 compatible. Fixes #1626 2019-06-18 20:52:19 +12:00
Clinton Hall
81895afd3f
Merge pull request #1625 from TheHolyRoger/patch-3
Don't replace apostrophes in qBittorrent input_name
2019-06-08 20:08:21 +12:00
TheHolyRoger
3237336775
Don't replace apostrophes in qBittorrent input_name
Don't replace apostrophes in qBittorrent input_name - only trim them if found at the beginning/end of the string.

This stops nzbToMedia from processing the entire download folder when asked to process a folder with apostrophes in the title
2019-06-08 00:20:12 +01:00
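A minimal sketch of the trimming behaviour described above.

```python
input_name = "'It's a Wonderful Life (1946)'"

# Trim apostrophes only at the ends; interior ones stay, so the release name
# still matches its download folder.
cleaned = input_name.strip("'")
assert cleaned == "It's a Wonderful Life (1946)"
```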
clinton-hall
fd1149aea1 add additional options to pass into ffmpeg. #1619 2019-06-06 21:46:56 +12:00
Clinton Hall
8c45e76507
Bluray 1 (#1620)
* added code to extract bluray images and folder structure. #1588

* add Mounting of iso files as fall-back

* add new mkv-bluray default.

* clean up fall-back for ffmpeg not accepting -show_error
2019-05-31 14:06:25 +12:00
clinton-hall
5ff056844c Fix NoExtractFailed usage. Fixes #1618 2019-05-20 21:17:54 +12:00
clinton-hall
5375d46c32 add remote path handling for LazyLibrarian #1223 2019-04-18 21:46:32 +12:00
Clinton Hall
52cae37609
Fix crash of remote_path exception. #1223 2019-04-18 08:40:11 +12:00
Labrys of Knossos
472dd8c2c7
Merge pull request #1608 from clinton-hall/fix/database
Fix IndexError on Python 2.7 when accessing database
2019-04-08 19:48:12 -04:00
Labrys of Knossos
455915907b Fix key access for sqlite3.Row on Python 2.7
Fixes #1607
2019-04-08 19:24:59 -04:00
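A sketch of `sqlite3.Row` key access, the behaviour the two commits above rely on once the custom dict factory is gone; table and column names are illustrative.

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.row_factory = sqlite3.Row  # rows support row['column'] without a dict factory
con.execute('CREATE TABLE downloads (input_name TEXT)')
con.execute("INSERT INTO downloads VALUES ('show.s01e01')")

row = con.execute('SELECT input_name FROM downloads').fetchone()
assert row['input_name'] == 'show.s01e01'  # key access works; Row is not a dict, so no .get()
```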
Labrys of Knossos
d3bbcb6b63 Remove unnecessary dict factory for database. 2019-04-08 19:24:59 -04:00
Labrys of Knossos
713f1a14f3
Merge pull request #1606 from clinton-hall/flake8/future-import
Flake8/future import
2019-04-07 17:53:03 -04:00
Labrys of Knossos
424879e4b6 Add future imports 2019-04-07 17:44:25 -04:00
Labrys of Knossos
f42cc020ea Add flake8-future-import to tox.ini 2019-04-07 17:35:02 -04:00
Labrys of Knossos
e98c29010a
Merge pull request #1605 from clinton-hall/flake8/selective-tests
Add optional flake8 tests to selective testing
2019-04-07 15:38:31 -04:00
Labrys of Knossos
c6e35bd2db Add optional flake8 tests to selective testing
Ignore W505 (doc string length) for now
2019-04-07 14:20:20 -04:00
Labrys of Knossos
e4b03005a1
Merge pull request #1604 from clinton-hall/fix/flake8
Fix/flake8
2019-04-07 13:51:51 -04:00
Labrys of Knossos
9f52406d45 Fix flake8-quotes Q000 Remove bad quotes 2019-04-07 13:44:33 -04:00
Labrys of Knossos
99159acd80 Fix flake8-bugbear B007 Loop control variable not used within the loop body. 2019-04-07 13:39:48 -04:00
Labrys of Knossos
d608000345 Fix flake8-commas C819 trailing comma prohibited 2019-04-07 13:38:27 -04:00
Labrys of Knossos
81c50efcd6 Fix flake8-commas C813 missing trailing comma in Python 3 2019-04-07 13:37:17 -04:00
Labrys of Knossos
eec977d909 Fix flake8-docstrings D403 First word of the first line should be properly capitalized 2019-04-07 13:33:20 -04:00
Labrys of Knossos
093f49d5aa Fix flake8-docstrings D401 First line should be in imperative mood 2019-04-07 13:32:06 -04:00
Labrys of Knossos
73e47466b4 Fix flake8-docstrings D205 1 blank line required between summary line and description 2019-04-07 13:30:40 -04:00
Labrys of Knossos
f98b39cdbb Fix flake8-docstrings D204 1 blank line required after class docstring 2019-04-07 13:27:31 -04:00
Labrys of Knossos
70fa47394e Fix flake8-docstrings D202 No blank lines allowed after function docstring 2019-04-07 13:26:13 -04:00
Labrys of Knossos
181675722d Fix flake8 W291 trailing whitespace 2019-04-07 13:23:24 -04:00
Labrys of Knossos
90602bf154 Fix flake8 W293 blank line contains whitespace 2019-04-07 13:17:55 -04:00
Labrys of Knossos
9527a2bd67 Fix flake8 E402 module level import not at top of file 2019-04-07 13:16:35 -04:00
Labrys of Knossos
98e8fd581a Fix flake8 E303 too many blank lines 2019-04-07 13:08:31 -04:00
Labrys of Knossos
daa9819798 Fix flake8 F401 item imported but unused 2019-04-07 13:06:25 -04:00
Labrys of Knossos
9dd25f96b2 Fix flake8 E266 too many leading '#' for block comment
Ignore for NZBGET scripts
2019-04-07 12:58:31 -04:00
Labrys of Knossos
077f04bc53 Fix flake8 E265 block comment should start with '# '
Ignore for NZBGET scripts
2019-04-07 12:56:50 -04:00
Labrys of Knossos
8736642e78 Fix code quality checks to run on project root and custom libs
Fixes #1600
Fixes #1601
2019-04-07 12:46:47 -04:00
Labrys of Knossos
3a95b433f3
Merge pull request #1603 from clinton-hall/fix/flake8
Fix/flake8
2019-04-07 12:46:16 -04:00
Labrys of Knossos
28ff74d0c8 Revert "Temporarily disable some flake8 ignores for testing"
This reverts commit e7179dde1c.
2019-04-07 12:42:18 -04:00
Labrys of Knossos
e7179dde1c Temporarily disable some flake8 ignores for testing 2019-04-07 12:38:43 -04:00
Labrys of Knossos
0788a754cb Fix code quality checks to run all desired tests
Fixes #1602
2019-04-07 12:15:07 -04:00
Labrys of Knossos
aeed469c5f
Merge pull request #1599 from clinton-hall/flake8/bugbear
Flake8/bugbear
2019-04-06 23:55:00 -04:00
Labrys of Knossos
b8c2b6b073
Merge pull request #1598 from clinton-hall/flake8/docstrings
Flake8/docstrings
2019-04-06 23:51:56 -04:00
Labrys of Knossos
23a450f095
Merge pull request #1597 from clinton-hall/flake8/comprehensions
Flake8/comprehensions
2019-04-06 23:49:48 -04:00
Labrys of Knossos
72140e939c Fix flake8-bugbear B902 Invalid first argument used for instance method. 2019-04-06 23:37:20 -04:00
Labrys of Knossos
10b2eab3c5 Fix flake8-docstrings D401 First line should be in imperative mood 2019-04-06 23:37:20 -04:00
Labrys of Knossos
4c8e896bbb Fix flake8-bugbear B007 Loop control variable not used within the loop body. 2019-04-06 23:37:20 -04:00
Labrys of Knossos
e00b5cc195 Fix flake8-bugbear B010 Do not call setattr with a constant attribute value, it is not any safer than normal property access. 2019-04-06 23:37:20 -04:00
Labrys of Knossos
267d8d1632 Add flake8-bugbear to tox.ini 2019-04-06 23:37:20 -04:00
Labrys of Knossos
6f6c9bcc9d Fix flake8-docstrings D400 First line should end with a period 2019-04-06 23:37:19 -04:00
Labrys of Knossos
1d7dba8aeb Fix flake8-docstrings D205 1 blank line required between summary line and description 2019-04-06 23:37:19 -04:00
Labrys of Knossos
777bc7e35d Fix flake8-docstrings D202 No blank lines allowed after function docstring 2019-04-06 23:37:19 -04:00
Labrys of Knossos
4dd58afaf6 Fix flake8-docstrings D200 One-line docstring should fit on one line with quotes 2019-04-06 23:37:19 -04:00
Labrys of Knossos
a8043d0259 Add flake8-docstrings to tox.ini 2019-04-06 23:37:19 -04:00
Labrys of Knossos
169fcaae4a Fix flake8-comprehensions C407 Unnecessary list comprehension 2019-04-06 23:36:18 -04:00
Labrys of Knossos
b9c7eec834 Fix flake8-comprehensions C403 Unnecessary list comprehension 2019-04-06 23:36:18 -04:00
Labrys of Knossos
f2964296c5 Add flake8-comprehensions to tox.ini 2019-04-05 19:19:11 -04:00
Labrys of Knossos
0ba4b9daab
Merge pull request #1596 from clinton-hall/flake8/quotes
Flake8/quotes
2019-04-05 19:13:14 -04:00
Labrys of Knossos
94c42dbd8a Fix flake8-quotes Q000 Remove bad quotes 2019-04-05 19:04:31 -04:00
Labrys of Knossos
2995c7f391 Add flake8-quotes to tox.ini 2019-04-05 19:04:11 -04:00
Labrys of Knossos
bbcef52eb5
Merge pull request #1595 from clinton-hall/flake8/commas
Flake8/commas
2019-04-05 18:25:38 -04:00
Labrys of Knossos
c5244df510 Fix flake8-commas C819 trailing comma prohibited 2019-04-05 18:14:44 -04:00
Labrys of Knossos
14b2aa6bf4 Fix flake8-commas C812 missing trailing comma 2019-04-05 18:14:44 -04:00
Labrys of Knossos
0bcbabd681 Add flake8-commas to tox.ini 2019-04-05 18:14:44 -04:00
Labrys of Knossos
627b453d3b
Merge pull request #1594 from clinton-hall/quality/flake8
Quality/flake8
2019-04-05 17:52:56 -04:00
Labrys of Knossos
697df555ec Fix flake8 W293 blank line contains whitespace 2019-04-05 17:12:05 -04:00
Labrys of Knossos
0350521b87 Fix flake8 W291 trailing whitespace 2019-04-05 17:12:05 -04:00
Labrys of Knossos
644a11118c Fix flake8 F401 imported but unused 2019-04-05 17:12:05 -04:00
Labrys of Knossos
faa378f787 Fix flake8 E712 comparison to True should be 'if cond is True:' or 'if cond:' 2019-04-05 17:12:04 -04:00
Labrys of Knossos
d208798430 Fix flake8 E402 module level import not at top of file 2019-04-05 17:12:04 -04:00
Labrys of Knossos
8e6e2d1647 Fix flake8 E305 expected 2 blank lines after class or function definition, found 1 2019-04-05 17:12:04 -04:00
Labrys of Knossos
032f7456f9 Fix flake8 E302 expected 2 blank lines, found 1 2019-04-05 17:12:04 -04:00
Labrys of Knossos
a571fc3122 Fix flake8 E265 block comment should start with '# ' 2019-04-05 17:12:04 -04:00
Labrys of Knossos
5f633b931a Fix flake8 E261 at least two spaces before inline comment 2019-04-05 17:12:04 -04:00
Labrys of Knossos
8a22f20a8b Fix flake8 E241 multiple spaces after ':' 2019-04-05 17:12:04 -04:00
Labrys of Knossos
07ad515b50 Fix flake8 E226 missing whitespace around arithmetic operator 2019-04-05 17:12:04 -04:00
Labrys of Knossos
87e813f062 Fix flake8 E126 continuation line over-indented for hanging indent 2019-04-05 17:12:04 -04:00
Labrys of Knossos
90090d7e02 Fix flake8 E117 over-indented 2019-04-05 17:12:03 -04:00
Labrys of Knossos
a8d1cc4fe9 Add flake8 quality checks to tox.ini 2019-04-05 17:11:27 -04:00
Labrys of Knossos
51e520547b
Merge pull request #1593 from clinton-hall/quality/tox
Add tox.ini
2019-04-05 17:03:58 -04:00
Labrys of Knossos
822603d021 Add tox.ini 2019-04-05 16:53:14 -04:00
Clinton Hall
825b48a6c1
add h265 to MKV profile allow. Fixes #1592 2019-04-04 11:34:25 +13:00
Labrys of Knossos
cb3f61f137
Merge pull request #1591 from clinton-hall/tests/cleanup
Tests/cleanup
2019-03-31 12:56:18 -04:00
Labrys of Knossos
f5fdc14577 Revert "Force cleanup errors for confirming CI test"
This reverts commit 16b7c11495.
2019-03-31 12:49:32 -04:00
Labrys of Knossos
16b7c11495 Force cleanup errors for confirming CI test 2019-03-31 12:45:07 -04:00
Labrys of Knossos
02813a6eaf Add source install cleanup test 2019-03-31 12:39:12 -04:00
Labrys of Knossos
a531f4480e Add source install cleanup test 2019-03-31 12:30:27 -04:00
Labrys of Knossos
9a833565aa
Merge pull request #1590 from clinton-hall/libs/pywin32
Add pywin32 to setup.py install_requires on Windows
2019-03-31 12:29:08 -04:00
Labrys of Knossos
f20e1e4f0d Add pywin32 to setup.py install_requires on Windows 2019-03-31 11:45:04 -04:00
clinton-hall
809e642039 fix LL default branch. 2019-03-30 08:47:20 +13:00
clinton-hall
1597763d30 minor fix for LazyLibrarian api. 2019-03-29 10:38:59 +13:00
Clinton Hall
aee3b151c0
Lazylib 1 (#1587)
* add support for LazyLibrarian. Fixes #1223
2019-03-29 09:50:43 +13:00
Clinton Hall
a3db8fb4b6
Test 1 (#1586)
* add transcoder tests
2019-03-27 10:09:47 +13:00
Clinton Hall
bdec673bb9
Merge pull request #1583 from clinton-hall/fix-1
remove .encode which creates byte vs string comparison issues.
2019-03-15 20:52:39 +13:00
clinton-hall
19c3e1fd85 remove .encode which creates byte vs string comparison issues. Fixes #1582 2019-03-15 20:42:21 +13:00
Clinton Hall
0db7c3e10c
Merge pull request #1580 from clinton-hall/dev
12.0.10
2019-03-14 20:40:41 +13:00
Clinton Hall
858206de07
Merge pull request #1579 from clinton-hall/nightly
Nightly
2019-03-14 20:32:49 +13:00
clinton-hall
ac7e0b702a update to 12.0.10 2019-03-14 20:28:53 +13:00
Clinton Hall
15d4289003
Merge pull request #1578 from clinton-hall/fix-1
fix cleanup
2019-03-14 20:12:18 +13:00
clinton-hall
6aee6baf6e fix cleanup 2019-03-14 20:02:40 +13:00
Labrys of Knossos
9b31482ce3
Update PULL_REQUEST_TEMPLATE.md 2019-03-12 16:48:50 -04:00
clinton-hall
8745af2629 update to v12.0.9 2019-03-13 07:54:21 +13:00
Clinton Hall
257eb3d761
Merge pull request #1575 from clinton-hall/clean-1
cleanup supporting files.
2019-03-13 07:45:51 +13:00
clinton-hall
742d482020 cleanup supporting files. 2019-03-13 07:40:35 +13:00
Clinton Hall
410aab4c58
improve tests (#1574)
improve tests
2019-03-12 18:55:37 +13:00
Clinton Hall
f5891459c2
Set up CI with Azure Pipelines (#1573)
* Set up CI with Azure Pipelines

* test all python versions

* rename test file and set to run from subdir.
2019-03-11 22:40:59 +13:00
Labrys of Knossos
3f6b447b3e
Merge pull request #1572 from clinton-hall/refactor/configuration
Fix absolute imports for qbittorrent and utorrent in Python 2.7
2019-03-10 20:52:14 -04:00
Labrys of Knossos
a669c983b7 Fix absolute imports for qbittorrent and utorrent in Python 2.7 2019-03-10 20:45:13 -04:00
Labrys of Knossos
2f5fad7737
Merge pull request #1571 from clinton-hall/refactor/configuration
Fix missed commits during refactor
2019-03-10 20:40:35 -04:00
Labrys of Knossos
9f7f28d54e Fix missed commits during refactor 2019-03-10 20:35:05 -04:00
Clinton Hall
832ef32340
Merge pull request #1569 from clinton-hall/refactor/configuration
Refactor/configuration
2019-03-11 08:20:12 +13:00
Clinton Hall
3b3c7ca2d4
Merge pull request #1566 from clinton-hall/refactor/iso_matching
Refactor/iso matching
2019-03-11 08:17:42 +13:00
Labrys of Knossos
b6db785c92 Refactor utils.subtitles to plugins.subtitles 2019-03-10 11:28:54 -04:00
Labrys of Knossos
76b5c06a33 Refactor utils.notifications.plex_update to plugins.plex.plex_update 2019-03-10 11:25:12 -04:00
Labrys of Knossos
e12f2724e6 Refactor plex configuration to plugins.plex 2019-03-10 11:25:12 -04:00
Labrys of Knossos
1d75439441 Refactor utils.nzb to plugins.downloaders.nzb.utils 2019-03-10 11:25:12 -04:00
Labrys of Knossos
e1aa32aee7 Refactor downloader configuration to plugins.downloaders 2019-03-10 11:25:12 -04:00
Labrys of Knossos
28eed3bc92 Refactor ISO file matching to use regex only once per file. 2019-03-10 11:18:06 -04:00
Labrys of Knossos
cd64014a9d Refactor ISO file matching to decode process output a single time. 2019-03-10 11:18:06 -04:00
clinton-hall
ef950d8024 add Contributing guide 2019-03-10 22:32:55 +13:00
Clinton Hall
bb46bbad27
Merge pull request #1565 from clinton-hall/clinton-hall-patch-1
Update issue templates
2019-03-10 22:24:49 +13:00
Clinton Hall
2fc5101ef0
Update issue templates 2019-03-10 22:22:28 +13:00
Clinton Hall
aeda68fbe4
Merge pull request #1564 from clinton-hall/add-code-of-conduct-1
Add code of conduct 1
2019-03-10 22:03:26 +13:00
Clinton Hall
1c63c9fe39
Create CODE_OF_CONDUCT.md 2019-03-10 22:01:22 +13:00
clinton-hall
cedd0c1a20 Merge branch 'dev' 2019-03-10 20:41:23 +13:00
clinton-hall
a8e2b30666 Merge branch 'nightly' into dev 2019-03-10 20:39:22 +13:00
clinton-hall
d4786e10d7 rev up to 12.0.8 2019-03-10 20:37:57 +13:00
clinton-hall
392967780c don't load torrent clients for nzbs. Fixes #1563 2019-03-10 08:34:56 +13:00
clinton-hall
64862ece10 fix python3 parsing of .iso files. Fixes #1561 2019-03-09 20:30:25 +13:00
clinton-hall
f82fe0ee81 decode 7zip output. Fixes #1561 2019-03-08 23:03:24 +13:00
clinton-hall
3f3e1415c9 change method of writing to system PATH. Fixes #830 2019-03-02 09:03:21 +13:00
clinton-hall
27cfc34577 add sys path config to find executables not in path. Fixes #830 2019-02-25 19:53:54 +13:00
Labrys of Knossos
506ede833e
Merge pull request #1554 from clinton-hall/fix/cleanup
Add exception handling for failure to return to original directory
2019-02-18 06:33:34 -05:00
Labrys of Knossos
fd8452b5c6 Add exception handling for failure to return to original directory
Fixes #1552
2019-02-16 10:17:01 -05:00
clinton-hall
45baf79753 log success when returning a failed download to Radarr. Fixes #1546 2019-02-09 11:08:33 +13:00
clinton-hall
8a637918d6 use list for python3 compatibility. Fixes #1545 2019-02-05 22:15:05 +13:00
clinton-hall
f47f68f699 convert bytes to string from Popen. Fix Sick* failed processing. Fixes #1545 2019-02-05 22:01:20 +13:00
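The decode step this commit adds, in a minimal sketch; the command is a placeholder.

```python
from subprocess import PIPE, Popen

proc = Popen(['echo', 'Success'], stdout=PIPE)
out, _ = proc.communicate()  # bytes under Python 3

# Comparing bytes to str is always False in Python 3, so decode first.
assert out.decode('utf-8').strip() == 'Success'
```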
Labrys of Knossos
f91f40d643
Merge pull request #1543 from clinton-hall/feature/eol
Add Python End-of-Life detection
2019-02-03 19:11:53 -05:00
Labrys of Knossos
f6e620a3fd Add Python End-of-Life detection 2019-02-03 11:30:55 -05:00
clinton-hall
de86259bb0 fix first return parsing from HeadPhones. Fixes #1536 2019-01-27 22:45:04 +13:00
Labrys of Knossos
b7746f1ce5
Merge pull request #1534 from clinton-hall/fork/medusa-api
Add Medusa API
2019-01-20 10:22:00 -05:00
Labrys of Knossos
00877c2d97 Add Medusa API 2019-01-20 10:09:03 -05:00
Labrys of Knossos
80a9576fc3
Merge pull request #1532 from clinton-hall/refactor/configuration
Refactor configuration
2019-01-20 09:43:00 -05:00
Labrys of Knossos
81a6d9c4fa Refactor torrent linking configuration 2019-01-19 14:34:06 -05:00
Labrys of Knossos
9a1be36e8b Refactor torrent deletion configuration 2019-01-19 14:34:05 -05:00
Labrys of Knossos
521d2b7a05 Refactor torrent permission configuration 2019-01-19 14:34:05 -05:00
Labrys of Knossos
f23eccc050 Refactor torrent resuming configuration 2019-01-19 14:34:05 -05:00
Labrys of Knossos
cf0fc1296f Refactor torrent categories configuration 2019-01-19 14:34:05 -05:00
Labrys of Knossos
1906d62664 Refactor flattening configuration 2019-01-19 14:34:05 -05:00
Labrys of Knossos
218e082ec7 Refactor sabnzbd configuration 2019-01-19 14:34:05 -05:00
Labrys of Knossos
9c105061d6 Refactor qbittorrent configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
22dfadd65c Refactor deluge configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
44df360fbe Refactor utorrent configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
f961c476ae Refactor transmission configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
287e3aa17b Fix initializing constant 2019-01-19 14:34:04 -05:00
Labrys of Knossos
3d2070e106 Refactor utility location configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
0a58b6b6a0 Refactor section configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
4f828e0a77 Refactor torrent class configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
e85b92f1db Refactor passwords file configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
819cf7b225 Fix global declarations 2019-01-19 14:34:04 -05:00
Labrys of Knossos
2d0b5e706b Refactor transcoder configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
10710ffd4c Refactor container configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
ddf15247e3 Use context manager instead of assignment 2019-01-19 14:34:04 -05:00
Labrys of Knossos
e0c55c4f84 Refactor niceness configuration 2019-01-19 14:34:04 -05:00
Labrys of Knossos
f67f8a32aa Refactor plex configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
a5d51d6e5a Use generator exp for remote paths 2019-01-19 14:34:03 -05:00
Labrys of Knossos
c587a137a5 Refactor remote paths configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
003d181bb0 Refactor torrents configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
62aca7ed3c Refactor groups configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
b3870e0d07 Refactor nzbs configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
feffa0da41 Refactor wake on lan configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
2c963f1ffe Fix error log 2019-01-19 14:34:03 -05:00
Labrys of Knossos
bd4c830313 Fix version check conditional 2019-01-19 14:34:03 -05:00
Labrys of Knossos
750c203216 Fix CheckVersion instance creation 2019-01-19 14:34:03 -05:00
Labrys of Knossos
c9e9d9748b Refactor updates configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
2512218d4a Refactor general configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
ca17c7a562 Refactor logging configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
a31683f7e5 Refactor migration configuration 2019-01-19 14:34:03 -05:00
Labrys of Knossos
13846db0b6 Refactor locale configuration 2019-01-19 14:34:02 -05:00
Labrys of Knossos
e0de964fda Refactor process configuration 2019-01-19 14:34:02 -05:00
Labrys of Knossos
1404464ef9 Refactor locale configuration 2019-01-19 14:34:02 -05:00
Labrys of Knossos
0c98912b76 Refactor PASSWORDSFILE -> PASSWORDS_FILE
Refactor DOWNLOADINFO -> DOWNLOAD_INFO
2019-01-19 14:34:02 -05:00
Labrys of Knossos
7e52aec4af Refactor *CONTAINER 2019-01-19 14:34:02 -05:00
Labrys of Knossos
2ebe96e049 Refactor REMOTEPATHS -> REMOTE_PATHS 2019-01-19 14:34:02 -05:00
Labrys of Knossos
d973f4955f Refactor TORRENT_CLIENTAGENT -> TORRENT_CLIENT_AGENT 2019-01-19 14:34:02 -05:00
Labrys of Knossos
fafcdb4ed5 Refactor NZB_DEFAULTDIR -> NZB_DEFAULT_DIRECTORY 2019-01-19 14:34:02 -05:00
Labrys of Knossos
a0d8940f70 Refactor NZB_CLIENTAGENT -> NZB_CLIENT_AGENT 2019-01-19 14:34:02 -05:00
Labrys of Knossos
5bea8f121e Refactor SABNZBD* 2019-01-19 14:34:02 -05:00
Labrys of Knossos
4bf842b4f4 Refactor TORRENT_DEFAULTDIR -> TORRENT_DEFAULT_DIRECTORY 2019-01-19 14:34:02 -05:00
Labrys of Knossos
a24367113b Refactor OUTPUTDIRECTORY -> OUTPUT_DIRECTORY 2019-01-19 14:34:02 -05:00
Labrys of Knossos
28f1bc35c5 Refactor USELINK -> USE_LINK 2019-01-19 14:34:01 -05:00
Labrys of Knossos
d2346b0ea6 Refactor PLEX* 2019-01-19 14:34:01 -05:00
Labrys of Knossos
182a542bda Refactor QBITTORENT* 2019-01-19 14:34:01 -05:00
Labrys of Knossos
1aa0ea6e75 Refactor DELUGEPWD -> DELUGE_PASSWORD 2019-01-19 14:34:01 -05:00
Labrys of Knossos
74bc6fb5b4 Refactor DELUGEUSR -> DELUGE_USER 2019-01-19 14:34:01 -05:00
Labrys of Knossos
df5291fd4f Refactor DELUGEPORT -> DELUGE_PORT 2019-01-19 14:34:01 -05:00
Labrys of Knossos
9262ba9cd0 Refactor DELUGEHOST -> DELUGE_HOST 2019-01-19 14:34:01 -05:00
Labrys of Knossos
a62415d711 Refactor TRANSMISSIONPWD -> TRANSMISSION_PASSWORD 2019-01-19 14:34:01 -05:00
Labrys of Knossos
42dfdf73ab Refactor TRANSMISSIONUSR -> TRANSMISSION_USER 2019-01-19 14:34:01 -05:00
Labrys of Knossos
e66ad2b66d Refactor TRANSMISSIONPORT -> TRANSMISSION_PORT 2019-01-19 14:34:00 -05:00
Labrys of Knossos
5d5eb798c9 Refactor TRANSMISSIONHOST -> TRANSMISSION_HOST 2019-01-19 14:34:00 -05:00
Labrys of Knossos
39974e62cc Refactor UTORRENTPWD -> UTORRENT_PASSWORD 2019-01-19 14:34:00 -05:00
Labrys of Knossos
22d2c1b108 Refactor UTORRENTUSR -> UTORRENT_USER 2019-01-19 14:34:00 -05:00
Labrys of Knossos
20bd765a4b Refactor UTORRENTWEBUI -> UTORRENT_WEB_UI 2019-01-19 14:34:00 -05:00
Labrys of Knossos
649febdedd Refactor NZBGET_POSTPROCESS_PARCHECK -> NZBGET_POSTPROCESS_PAR_CHECK 2019-01-19 14:34:00 -05:00
Labrys of Knossos
4896848099
Merge pull request #1526 from clinton-hall/dev
Merge dev to master
2019-01-15 18:44:43 -05:00
Labrys of Knossos
66604416b4
Merge pull request #1525 from clinton-hall/nightly
Merge nightly to dev
2019-01-15 18:41:56 -05:00
Labrys of Knossos
4220b15232
Merge pull request #1524 from clinton-hall/refactor/utils
Refactor utils
2019-01-15 18:40:04 -05:00
Labrys of Knossos
30872db797 Update changelog 2019-01-15 18:37:45 -05:00
Labrys of Knossos
d9436603ab Bump version: 12.0.6 → 12.0.7 2019-01-15 18:11:31 -05:00
Labrys of Knossos
b6672ccf09 Refactor restart to utils.processes.restart 2019-01-15 18:06:05 -05:00
Labrys of Knossos
d960c432eb Refactor rchmod to utils.paths.rchmod 2019-01-15 18:02:36 -05:00
Labrys of Knossos
aa057e65d5 Refactor common utils to utils.common 2019-01-15 17:55:43 -05:00
Labrys of Knossos
844c1d15e9 Fix cleanup script output 2019-01-15 17:48:17 -05:00
Labrys of Knossos
7185e0b31b Add docstring 2019-01-15 17:48:17 -05:00
Labrys of Knossos
c4be677a62 Fix git subprocess 2019-01-15 17:32:03 -05:00
clinton-hall
243cf52c47 Merge branch 'nightly' into dev 2019-01-13 20:11:56 +13:00
clinton-hall
b5b4808293 update version details for next release. 2019-01-13 20:10:45 +13:00
clinton-hall
247da9c6cc Merge branch 'nightly' into dev 2019-01-13 19:45:58 +13:00
clinton-hall
3a2ed4bc57 fixed manual Torrent run result parsing. Fixes #1520 2019-01-13 19:41:04 +13:00
Labrys of Knossos
6f5e3ca0c0
Merge pull request #1519 from TheHolyRoger/Missed-ProcessResult
Missed ProcessResult
2019-01-11 17:54:20 -05:00
TheHolyRoger
e89bbcf9be
hotfix/processresult bug 2019-01-11 14:38:20 +00:00
Labrys of Knossos
df280c4bc3
Merge pull request #1515 from clinton-hall/refactor/utils
Refactor core.utils into a package
2019-01-06 12:12:14 -05:00
Labrys of Knossos
bd5b970bc7 Refactor network utils to utils.network 2019-01-06 12:10:50 -05:00
Labrys of Knossos
383eb5eaf2 Refactor identification utils from utils to utils.identification 2019-01-06 12:10:50 -05:00
Labrys of Knossos
648ecd4048 Refactor process_dir to use generators 2019-01-06 12:10:50 -05:00
Labrys of Knossos
a888d741d3 Refactor file type detection to utils.files 2019-01-06 12:10:50 -05:00
Labrys of Knossos
cb422a0cea Flatten process_dir 2019-01-06 12:10:50 -05:00
Labrys of Knossos
8d458f10ac Refactor get_dirs 2019-01-06 12:10:50 -05:00
Labrys of Knossos
e67f29cb7b Flatten get_dirs function 2019-01-06 12:10:50 -05:00
Labrys of Knossos
03cb11dae3 Refactor identification utils from utils to utils.identification 2019-01-06 12:10:49 -05:00
Labrys of Knossos
e44c0bb56a Refactor path functions from utils to utils.paths 2019-01-06 12:10:49 -05:00
Labrys of Knossos
36932e25c6 Fix clean_dir for Python 3
TypeError when testing str > int
2019-01-06 12:10:49 -05:00
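The Python 3 failure mode named in the commit, in a few lines; the values are illustrative.

```python
size = '3'  # a value that arrives as text
try:
    oversized = size > 0       # Python 2 allowed str > int; Python 3 raises TypeError
except TypeError:
    oversized = int(size) > 0  # the fix: coerce before comparing
assert oversized is True
```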
Labrys of Knossos
6cc3df73b3 Refactor path functions from utils to utils.paths 2019-01-06 12:10:49 -05:00
Labrys of Knossos
dade3f6698 Refactor network utils to utils.network 2019-01-06 12:10:49 -05:00
Labrys of Knossos
0f7c74dd78 Refactor file type detection to utils.files 2019-01-06 12:10:49 -05:00
Labrys of Knossos
4424e21786 Streamline is_media_file 2019-01-06 12:10:49 -05:00
Labrys of Knossos
0cccecb435 Refactor file type detection to utils.files 2019-01-06 12:10:49 -05:00
Labrys of Knossos
a074e56629 Refactor naming utils to utils.naming 2019-01-06 12:10:49 -05:00
Labrys of Knossos
9b0d539423 Make replace_links more DRY and add max_depth for following links 2019-01-06 12:10:49 -05:00
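An illustrative sketch of a `max_depth` guard when following chained links, as the commit describes; the real `replace_links` signature and behaviour may differ.

```python
import os

def resolve_link(path, max_depth=10):
    # Follow symlinks at most max_depth times so a link cycle cannot loop forever.
    depth = 0
    while os.path.islink(path) and depth < max_depth:
        path = os.path.join(os.path.dirname(path), os.readlink(path))
        depth += 1
    return path
```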
Labrys of Knossos
d1f5211e78 Refactor nzbs from utils to utils.nzbs 2019-01-06 12:10:48 -05:00
Labrys of Knossos
2d4951267b Refactor subtitle utils to utils.subtitles 2019-01-06 12:10:48 -05:00
Labrys of Knossos
6a9ff96e8c Refactor encoding utils to utils.encoding 2019-01-06 12:10:48 -05:00
Labrys of Knossos
9d43e0d60b Refactor notification utils to utils.notifications 2019-01-06 12:10:48 -05:00
Labrys of Knossos
f042e014b1 Refactor naming utils to utils.naming 2019-01-06 12:10:48 -05:00
Labrys of Knossos
c4d9faeb23 Refactor network utils to utils.network 2019-01-06 12:10:48 -05:00
Labrys of Knossos
a14a286a8e Refactor links to utils.links 2019-01-06 12:10:48 -05:00
Labrys of Knossos
5c644890e8 Fix shutil.copyfileobj monkey patching 2019-01-06 12:10:48 -05:00
Labrys of Knossos
a6d2c6e96f Refactor path functions from utils to utils.paths 2019-01-06 12:10:48 -05:00
Labrys of Knossos
094fe555b8 Clean up network utils 2019-01-06 12:10:48 -05:00
Labrys of Knossos
42553df5cb Refactor network utils to utils.network 2019-01-06 12:10:48 -05:00
Labrys of Knossos
84061fea2f Fix PEP8 line length 2019-01-06 12:10:48 -05:00
Labrys of Knossos
7b8721b277 Refactor my_db -> database 2019-01-06 12:10:48 -05:00
Labrys of Knossos
542893b30b Refactor download_info db connection to module variable 2019-01-06 12:10:48 -05:00
Labrys of Knossos
04942bf6ad Refactor download info to utils.download_info 2019-01-06 12:10:48 -05:00
Labrys of Knossos
a50a5edbf7 Refactor path functions from utils to utils.paths 2019-01-06 12:10:47 -05:00
Labrys of Knossos
2d6e8034e2 Refactor parsers from utils to utils.parsers 2019-01-06 12:10:47 -05:00
Labrys of Knossos
bd16f11485 Refactor nzbs from utils to utils.nzbs 2019-01-06 12:10:47 -05:00
Labrys of Knossos
4143aa77f8 Refactor torrents from utils to utils.torrents 2019-01-06 12:10:47 -05:00
Labrys of Knossos
21fa4e3896 Refactor utils.*Process -> utils.processes.*Process 2019-01-06 12:10:47 -05:00
Labrys of Knossos
3b670b895b Refactor utils module to package 2019-01-06 12:10:47 -05:00
Labrys of Knossos
22b9a484ae
Merge pull request #1514 from clinton-hall/feature/cleanup
Code cleanup
2019-01-06 12:10:24 -05:00
Labrys of Knossos
a289eef88e Remove unused variable 2019-01-06 12:09:07 -05:00
Labrys of Knossos
93ec74f1c7 Fix conditional assignment 2019-01-06 12:09:07 -05:00
Labrys of Knossos
6d0d2d3f7e Use dict literal or comprehension for dict creation 2019-01-06 12:09:07 -05:00
Labrys of Knossos
c99b497bd8 Merge branch 'hotfix/sourcecleaner' into nightly 2019-01-06 12:08:47 -05:00
Labrys of Knossos
f9f3fafb1b
Merge pull request #1513 from clinton-hall/dev
Dev
2019-01-06 00:51:02 -05:00
Labrys of Knossos
2855ef4ccb
Merge pull request #1512 from clinton-hall/hotfix/sourcecleaner
Proper fix for source cleaner
2019-01-06 00:49:13 -05:00
Labrys of Knossos
0fd570cd85 Bump version: 12.0.4 → 12.0.5 2019-01-06 00:48:08 -05:00
Labrys of Knossos
ada78a14f8 hotfix 2019-01-06 00:47:40 -05:00
Labrys of Knossos
0f1595d29c
Merge pull request #1511 from clinton-hall/hotfix/sourcecleaner
Hotfix/sourcecleaner
2019-01-05 23:08:01 -05:00
Labrys of Knossos
c6b4405aff
Merge pull request #1510 from clinton-hall/dev
Hotfix missed commit for source cleaner
2019-01-05 23:03:13 -05:00
Labrys of Knossos
b9cab56fa5
Merge pull request #1509 from clinton-hall/hotfix/sourcecleaner
Fix missed commit for source cleaner
2019-01-05 23:00:47 -05:00
Labrys of Knossos
84a7011973 Bump version: 12.0.3 → 12.0.4 2019-01-05 22:59:31 -05:00
Labrys of Knossos
f83b37d80b Fix missed commit for source cleaner 2019-01-05 22:59:20 -05:00
Labrys of Knossos
50b743ad30
Merge pull request #1508 from clinton-hall/fix/forkdetection
Fix fork detection, part 1
2019-01-05 22:50:50 -05:00
Labrys of Knossos
656957f1fc Fix excess parameter detection 2019-01-05 22:48:18 -05:00
Labrys of Knossos
f514eecf6c Fix excess parameter detection 2019-01-05 22:48:16 -05:00
Labrys of Knossos
29171baaa3 Add extra logging for fork detection. 2019-01-05 22:48:11 -05:00
Labrys of Knossos
7b2833e5f5
Merge pull request #1506 from clinton-hall/dev
Merge dev back into nightly
2019-01-05 21:44:16 -05:00
Labrys of Knossos
14300d12fd
Merge pull request #1505 from clinton-hall/dev
Merge develop into master
2019-01-05 21:39:38 -05:00
Labrys of Knossos
b86693ea8c
Merge pull request #1504 from clinton-hall/hotfix/sourcecleaner
Hotfix/sourcecleaner
2019-01-05 21:36:26 -05:00
Labrys of Knossos
6616801c38 Bump version: 12.0.2 → 12.0.3 2019-01-05 21:27:14 -05:00
Labrys of Knossos
d250e45c7b Hotfix cleaning for source installs 2019-01-05 21:26:56 -05:00
clinton-hall
f1c4c6e840 and that is why we don't make changes using vi while on holiday! 2019-01-05 23:33:30 +13:00
clinton-hall
6d7dacf114 update the Readme to reflect recent changes. 2019-01-05 23:30:11 +13:00
Labrys of Knossos
f14ab17dd5 Merge tag '12.0.2' into nightly 2019-01-05 01:02:45 -05:00
Labrys of Knossos
e386eaaec2 Bump version: 12.0.1 → 12.0.2 2019-01-05 00:53:32 -05:00
Labrys of Knossos
58e57c238d
Merge pull request #1502 from clinton-hall/dev
Hotfix missed process result
2019-01-05 00:44:59 -05:00
Labrys of Knossos
d2c98dc738
Merge pull request #1501 from clinton-hall/hotfix/processresult
Fix missed ProcessResult
2019-01-05 00:32:16 -05:00
Labrys of Knossos
111330510f Fix missed ProcessResult 2019-01-05 00:28:12 -05:00
Labrys of Knossos
e1fe63328c Bump version: 12.0.0 → 12.0.1 2019-01-04 05:30:17 -05:00
Labrys of Knossos
f4eb19ab78
Merge pull request #1497 from clinton-hall/nightly
Hotfix NZBGet comments
2019-01-04 05:28:32 -05:00
Clinton Hall
da8eb0016d
Merge pull request #1496 from SerhatG/nightly
Fix NzbGet not detecting scripts
2019-01-04 22:21:00 +13:00
Serhat
1dd33be9e3 Fix NzbGet not detecting scripts
Revert "Remove comment"
This reverts commit f895446547.

The comments are required by NzbGet to consider the files as proper scripts.
2019-01-04 09:54:43 +01:00
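For context, NZBGet recognizes a file in its scripts directory as a post-processing script only when the file header carries its marker comment block; stripping those comments (as the reverted commit did) makes the script disappear from NZBGet. A minimal sketch of such a header (illustrative, not the exact nzbToMedia header):

    #!/usr/bin/env python
    #
    ### NZBGET POST-PROCESSING SCRIPT ###
    #
    # Post-process downloads with nzbToMedia.
    # Option descriptions placed between the markers appear in the NZBGet web UI.
    #
    ### NZBGET POST-PROCESSING SCRIPT ###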
Labrys of Knossos
eed7ef485c
Merge pull request #1494 from clinton-hall/master
Merge master back to Nightly
2019-01-03 22:51:36 -05:00
Labrys of Knossos
89b041f3a9
Merge pull request #1493 from clinton-hall/dev
Merge Dev to master
2019-01-03 22:49:00 -05:00
Labrys of Knossos
1d15a0a7b0
Merge pull request #1492 from clinton-hall/release-12.0.0
Fix hash-bang
2019-01-03 22:47:28 -05:00
Labrys of Knossos
fbaa57e3fe
Merge pull request #1491 from clinton-hall/release-12.0.0
Release 12.0.0
2019-01-03 22:43:09 -05:00
Labrys of Knossos
4d9ecbcb21
Merge pull request #1490 from clinton-hall/release-12.0.0
Change hash-bangs to system python and not specifically python 2
2019-01-01 00:35:57 -05:00
Labrys of Knossos
325a6a03d5 Fix hash-bang 2019-01-01 00:07:21 -05:00
Labrys of Knossos
fde6c430ae
Merge pull request #1489 from clinton-hall/release-12.0.0
Release 12.0.0
2018-12-31 23:26:53 -05:00
Labrys of Knossos
519b2d1c4c
Merge pull request #1488 from clinton-hall/release-12.0.0
Fix typo
2018-12-31 23:23:24 -05:00
Labrys of Knossos
c3e2cf35a4 Fix typo 2018-12-31 23:22:17 -05:00
Labrys of Knossos
152223b648
Merge pull request #1487 from clinton-hall/release-12.0.0
Fix setup.py
2018-12-31 14:45:58 -05:00
Labrys of Knossos
7ac4d8d762 Fix setup.py 2018-12-31 14:45:15 -05:00
Labrys of Knossos
0c57061f04
Merge pull request #1486 from clinton-hall/release-12.0.0
v12.0.0
2018-12-31 11:44:20 -05:00
Labrys of Knossos
095810ea79 Bump version: 11.8.1 → 12.0.0 2018-12-31 11:38:51 -05:00
Labrys of Knossos
3d8353f9f8
Merge pull request #1485 from clinton-hall/feature/stheno
Add `Stheno` fork
2018-12-31 11:31:29 -05:00
Labrys of Knossos
4dadd905c8 Add Stheno fork 2018-12-31 11:30:16 -05:00
Labrys of Knossos
d196ec6f7d
Merge pull request #1483 from clinton-hall/fix/processresult
Add ProcessResult to auto_process.common
2018-12-29 14:26:45 -05:00
Labrys of Knossos
3f455daca6
Merge pull request #1482 from clinton-hall/fix/strings
Fix quotes - standardize to single-quoted strings
2018-12-29 14:26:05 -05:00
Labrys of Knossos
047ce56a3b Add ProcessResult to auto_process.common 2018-12-29 14:19:20 -05:00
Labrys of Knossos
c5343889fb Fix quotes - standardize to single-quoted strings 2018-12-29 14:19:20 -05:00
Labrys of Knossos
4fed4d8f51 Merge branch 'release-11.8.1' 2018-12-29 12:22:16 -05:00
Labrys of Knossos
85b4e22046 Merge branch 'release-11.8.1' into nightly 2018-12-29 12:21:41 -05:00
Labrys of Knossos
7244f8796c
Merge pull request #1480 from clinton-hall/hotfix/submodule
Fix cleaning bytecode for nzbToMedia as submodule
2018-12-29 10:58:45 -05:00
Labrys of Knossos
cc06cb8548 Fix cleaning bytecode for nzbToMedia as submodule 2018-12-29 10:47:47 -05:00
Labrys of Knossos
02d71c1f34 Merge branch 'hotfix/submodule' into nightly
# Conflicts:
#	changelog.txt
2018-12-29 10:22:16 -05:00
Labrys of Knossos
0f495a2cd3
Merge pull request #1478 from clinton-hall/hotfix/submodule
Hotfix cleanup for nzbToMedia installed as a git submodule
2018-12-29 10:15:18 -05:00
Labrys of Knossos
a490595c5b Update changelog
Fixes #1473
2018-12-29 10:12:03 -05:00
Labrys of Knossos
1f61b9b60e Bump version: 11.8.0 → 11.8.1 2018-12-29 10:10:40 -05:00
Labrys of Knossos
b7081da7fa Fix cleanup for nzbToMedia installed as a git submodule
Fixes #1473
2018-12-29 10:10:07 -05:00
Labrys of Knossos
003ce5a781
Merge pull request #1475 from clinton-hall/fix/processresult
Add ProcessResult to auto_process.common
2018-12-29 07:23:54 -05:00
Labrys of Knossos
69446930c3 Add ProcessResult to auto_process.common 2018-12-29 07:20:45 -05:00
Labrys of Knossos
01a0e2e2a9 Merge branch 'fix/changelog' into nightly
# Conflicts:
#	changelog.txt
2018-12-29 06:45:38 -05:00
Labrys of Knossos
c5dad74763
Merge pull request #1470 from clinton-hall/fix/changelog
Update changelog
2018-12-28 15:53:30 -05:00
Labrys of Knossos
75537db1b7 Update changelog 2018-12-28 15:52:03 -05:00
Labrys of Knossos
8dd44bd5f2
Merge pull request #1469 from clinton-hall/feature/version
Add version support
2018-12-28 15:34:55 -05:00
Labrys of Knossos
806e64d78f
Merge pull request #1468 from clinton-hall/feature/version
Feature/version
2018-12-28 15:33:14 -05:00
Labrys of Knossos
493bfe3a32 Bump version: 11.7.4 → 11.8.0 2018-12-28 15:27:16 -05:00
Labrys of Knossos
6ae1aee1b3 Add bumpversion config 2018-12-28 15:25:11 -05:00
Labrys of Knossos
3b0b7baa97 Add version 2018-12-28 13:47:58 -05:00
Labrys of Knossos
f19e9790c8 Add setup.py 2018-12-28 13:47:56 -05:00
Labrys of Knossos
e49589efeb Add editor config 2018-12-28 13:47:53 -05:00
Labrys of Knossos
feb441985d
Merge pull request #1465 from clinton-hall/hotix/cwd
Change current working directory prior to cleanup
2018-12-28 11:49:57 -05:00
Labrys of Knossos
c290064d34
Merge pull request #1466 from clinton-hall/hotix/cwd
Change current working directory prior to cleanup
2018-12-28 11:49:34 -05:00
Labrys of Knossos
ce1c1b8d42 Add WorkingDirectory context manager 2018-12-28 11:40:56 -05:00
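A context manager for this is a small, standard pattern; a minimal sketch under assumed names (the project's actual implementation may differ):

    import os

    class WorkingDirectory(object):
        """Temporarily change the current working directory."""

        def __init__(self, path):
            self.path = path
            self.original = None

        def __enter__(self):
            self.original = os.getcwd()
            os.chdir(self.path)
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # Restore the original directory even if the block raised.
            os.chdir(self.original)

The PR then amounts to wrapping cleanup in `with WorkingDirectory(...):` so it always runs from a known location.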
Labrys of Knossos
13ab9cd7bc
Merge pull request #1464 from clinton-hall/fix/classes
Remove superfluous classes
2018-12-28 11:17:08 -05:00
Labrys of Knossos
d9e8f720be Remove superfluous classes 2018-12-28 11:15:57 -05:00
Labrys of Knossos
fbf98457c4
Merge pull request #1463 from clinton-hall/refactor/standardize
Remove superfluous classes and work towards standardizing processing
2018-12-27 11:35:26 -05:00
Labrys of Knossos
8c4353bc90 Add ProcessResult to auto_process.common 2018-12-27 11:32:52 -05:00
Labrys of Knossos
1d46f716e1 Refactor common functions from auto_process to auto_process.common 2018-12-27 11:32:51 -05:00
Labrys of Knossos
d95fc59b45 Remove superfluous classes 2018-12-27 11:32:51 -05:00
Labrys of Knossos
5a04a7b4d9
Merge pull request #1462 from clinton-hall/hotfix/gitless
Fix IOError on cleanup when git not found
2018-12-27 11:31:40 -05:00
Labrys of Knossos
366197a3ce
Merge pull request #1459 from clinton-hall/hotfix/gitless
Hotfix/gitless
2018-12-26 20:44:19 -05:00
Labrys of Knossos
1bf404fa2a
Merge pull request #1456 from clinton-hall/fix/names
Fix mutable arguments and shadowing built-ins
2018-12-26 12:33:21 -05:00
Labrys of Knossos
80b5a8c253 Fix mutable default argument 2018-12-26 12:29:57 -05:00
Labrys of Knossos
ad3fb4519d Fix name shadows builtin 2018-12-26 12:28:18 -05:00
Labrys of Knossos
857c47e8c7
Merge pull request #1455 from clinton-hall/fix/pep8
Fix various PEP8 violations
2018-12-26 11:42:42 -05:00
Labrys of Knossos
6f7b85f711 Remove unused import 2018-12-26 11:39:41 -05:00
Labrys of Knossos
bd385176e7 Fix print statement 2018-12-26 11:39:21 -05:00
Labrys of Knossos
f895446547 Remove comment 2018-12-26 11:38:38 -05:00
Labrys of Knossos
c2f0f3affc Fix PEP8 variable and argument names should be snake_case 2018-12-26 11:38:01 -05:00
Labrys of Knossos
985f004e32 Fix PEP8 invalid escape sequences 2018-12-26 11:31:17 -05:00
Labrys of Knossos
5ef4bdc1f4 Fix PEP8 assigning lambda expression 2018-12-26 11:30:33 -05:00
Labrys of Knossos
018ded07d6 Fix PEP8 for bare exceptions 2018-12-26 11:29:38 -05:00
Labrys
52c6096b6a Fix PEP8 whitespace violations 2018-12-26 11:22:11 -05:00
Labrys of Knossos
e490f97a05
Merge pull request #1453 from clinton-hall/hotfix/gitless
Merge back to nightly: Hotfix: not a git repository
2018-12-25 19:52:01 -05:00
Lizband
d1edf9f2a2 Merge branch 'release-11.7' into nightly
# Conflicts:
#	changelog.txt
#	core/versionCheck.py
#	nzbToMedia.py
2018-12-25 14:39:31 -05:00
Lizband
2d2cdd9ccf Update changelog 2018-12-25 12:48:27 -05:00
Labrys of Knossos
f78eac0367
Merge pull request #1447 from clinton-hall/fix/libs
Fix .pth files not loading
2018-12-23 09:04:36 -05:00
Labrys
7e3b53f608 Fix .pth files not loading 2018-12-23 08:48:02 -05:00
Labrys of Knossos
6c5b469c4b
Merge pull request #1446 from clinton-hall/readme/pywin32
Update readme
2018-12-22 19:37:10 -05:00
Labrys
6c76203406 Update readme
pywin32 needs to be installed manually
python 3 is now supported
2018-12-22 19:33:05 -05:00
Labrys of Knossos
d4372ffa60
Merge pull request #1445 from clinton-hall/fix/exit
Fix typo in sys.exit
2018-12-22 19:07:19 -05:00
Labrys
bf74606bfb Fix typo in sys.exit 2018-12-22 18:42:12 -05:00
Labrys of Knossos
cbc8655b9a
Merge pull request #1440 from clinton-hall/fix/bytecode
Clean up bytecode left over after update
2018-12-19 19:48:16 -05:00
Labrys of Knossos
2fce0e40c3 Add logging for cleanup 2018-12-19 19:40:13 -05:00
Labrys of Knossos
70acfc22e7 Add cleanup upon update 2018-12-19 19:27:52 -05:00
Labrys of Knossos
7aff860390 Add git clean functionality 2018-12-19 19:22:31 -05:00
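Taken together, these commits remove stale bytecode left behind after an update: in a git checkout the untracked leftovers can be dropped via `git clean`, with a manual tree walk as the fallback. A hedged sketch of that fallback (function name assumed):

    import os

    def clean_bytecode(root):
        """Delete leftover .pyc/.pyo files and empty __pycache__ directories."""
        for dirpath, dirnames, filenames in os.walk(root, topdown=False):
            for name in filenames:
                if name.endswith(('.pyc', '.pyo')):
                    os.remove(os.path.join(dirpath, name))
            if os.path.basename(dirpath) == '__pycache__' and not os.listdir(dirpath):
                os.rmdir(dirpath)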
Labrys of Knossos
339d2878b4 Add path parent option to module path and default to using local path 2018-12-19 19:22:28 -05:00
Labrys of Knossos
200a8d8827 Refactor LIB_DIR -> LIB_ROOT 2018-12-19 19:21:27 -05:00
Labrys of Knossos
32e0d7dba2 Refactor PROGRAM_DIR -> APP_DIR 2018-12-19 19:20:26 -05:00
Labrys of Knossos
9d0097fa68 Refactor module_root -> module_path 2018-12-19 19:05:40 -05:00
Labrys of Knossos
aea6e12639
Merge pull request #1439 from clinton-hall/quality/refactor
Major code refactoring
2018-12-19 17:43:30 -05:00
Labrys of Knossos
d546a7dcee Refactor method order 2018-12-19 17:33:43 -05:00
Labrys of Knossos
6b52bb68d0 Refactor process_episode -> process 2018-12-19 17:26:49 -05:00
Labrys of Knossos
7a46bfa55a Remove RunningProcess class and replace with platform alias 2018-12-19 17:20:56 -05:00
Labrys of Knossos
ad6f0b7bb6 Optimize imports 2018-12-19 17:20:56 -05:00
Labrys of Knossos
f61a17b8a0 Refactor nzbToMediaUtil -> utils 2018-12-19 17:20:56 -05:00
Labrys of Knossos
34904ee892 Refactor extractor.extractor -> extractor 2018-12-19 17:20:55 -05:00
Labrys of Knossos
094ae574a2 Refactor gh_api -> github_api 2018-12-19 17:20:55 -05:00
Labrys of Knossos
c624029846 Refactor versionCheck -> version_check 2018-12-19 17:20:55 -05:00
Labrys of Knossos
8176c2f007 Refactor nzbToMediaUserScripts -> user_scripts 2018-12-19 17:20:55 -05:00
Labrys of Knossos
c7398b9550 Refactor nzbToMediaSceneExceptions -> scene_exceptions 2018-12-19 17:20:54 -05:00
Labrys of Knossos
9d9abffdb6 Refactor nzbToMediaDB -> main_db 2018-12-19 17:20:54 -05:00
Labrys of Knossos
5d423b0f38 Refactor nzbToMediaConfig -> configuration 2018-12-19 17:20:54 -05:00
Labrys of Knossos
23cd600a6c Refactor nzbToMediaAutoFork -> forks 2018-12-19 17:20:53 -05:00
Labrys of Knossos
d3ce2a10f5 Refactor transcoder.transcoder -> transcoder 2018-12-19 17:20:53 -05:00
Labrys of Knossos
7d1f2fb2d5 Refactor databases.mainDB -> databases 2018-12-19 17:20:53 -05:00
Labrys of Knossos
3b429dcc7e Add Comic, Game, Movie, Music, and TV to core.auto_process 2018-12-19 17:20:53 -05:00
Labrys of Knossos
5839257f9b Refactor autoProcessGames -> games 2018-12-19 17:20:52 -05:00
Labrys of Knossos
4bd6e6251a Refactor autoProcessMovie -> movies 2018-12-19 17:20:52 -05:00
Labrys of Knossos
cbff62a08c Refactor autoProcessMusic -> music 2018-12-19 17:20:52 -05:00
Labrys of Knossos
95c67978ca Refactor autoProcessTV -> tv 2018-12-19 17:20:51 -05:00
Labrys of Knossos
088f23ad3a Refactor autoProcessComics -> comics 2018-12-19 17:20:51 -05:00
Labrys of Knossos
214ad21ea1 Refactor autoProcess -> auto_process 2018-12-19 17:20:51 -05:00
Labrys of Knossos
88b5f04f5c
Merge pull request #1438 from clinton-hall/libs/pywin32
Remove vendored pywin32
2018-12-19 17:15:19 -05:00
Labrys of Knossos
54422992a4 Remove vendored pywin32 2018-12-19 17:08:36 -05:00
clinton-hall
0700aa0400 fix six import in TorrentToMedia. #1433 2018-12-18 19:38:18 +13:00
Labrys of Knossos
a827c8700b
Merge pull request #1435 from clinton-hall/libs/fix-py2
Enforce lib import order
2018-12-17 22:28:23 -05:00
Labrys of Knossos
5f0a89f01c Enforce lib import order 2018-12-17 22:26:23 -05:00
Labrys of Knossos
af93f25f5b
Merge pull request #1434 from clinton-hall/libs/fix-py2
Fix python 2 libs
2018-12-17 22:19:30 -05:00
Labrys of Knossos
0b03e27032 Add pyyaml version 3.13 to Python 2 requirements 2018-12-17 22:15:35 -05:00
Labrys of Knossos
eab3dabc94 Add BeautifulSoup4 version 4.6.3 to Python 2 requirements 2018-12-17 22:15:35 -05:00
Labrys of Knossos
2f01d12755
Merge pull request #1432 from clinton-hall/libs/requirements
Fix `.gitignore` for `dist-info` and `egg-info`
2018-12-17 00:10:00 -05:00
Labrys of Knossos
f0451bc31a Fix .gitignore for dist-info and egg-info 2018-12-17 00:09:00 -05:00
Labrys of Knossos
7798a71448
Merge pull request #1431 from clinton-hall/quality/pep8
Various PEP8 fixes
2018-12-16 23:40:11 -05:00
Labrys of Knossos
41fa636fc2 PEP8 Argument should be lowercase 2018-12-16 23:33:31 -05:00
Labrys of Knossos
7f2a4d2605 PEP8 Class name should be CamelCase 2018-12-16 22:05:08 -05:00
Labrys of Knossos
d8cbf422dd PEP8 Function name should be lowercase 2018-12-16 21:59:24 -05:00
Labrys of Knossos
97e1ed71b3 PEP8 Variable in function should be lowercase 2018-12-16 21:59:24 -05:00
Labrys of Knossos
39f8949ede
Merge pull request #1429 from clinton-hall/libs/refactor
Refactor libs
2018-12-16 19:03:59 -05:00
Labrys of Knossos
248dd8609b Fix six not available before core import 2018-12-16 18:40:13 -05:00
Labrys of Knossos
26008b3607 Add feature to auto-update libs 2018-12-16 18:40:13 -05:00
Labrys of Knossos
43ffbc7c34 Add feature to make libs importable 2018-12-16 18:40:13 -05:00
Labrys of Knossos
b115ecc1fe Add requirements file 2018-12-16 18:33:01 -05:00
Labrys of Knossos
3a692c94a5 Move Windows libs to libs/windows 2018-12-16 13:50:35 -05:00
Labrys of Knossos
3975aaceb2 Move Python 2 libs to libs/py2 2018-12-16 13:50:28 -05:00
Labrys of Knossos
f3db9af8cf Move custom libs to libs/custom 2018-12-16 13:50:28 -05:00
Labrys of Knossos
1f4bd41bcc Move common libs to libs/common 2018-12-16 13:50:27 -05:00
Labrys of Knossos
8dbb1a2451
Merge pull request #1428 from clinton-hall/libs/requirements
Update requirements
2018-12-16 11:52:04 -05:00
Labrys of Knossos
4f3738fab5 Remove unused rarfile1 import 2018-12-16 11:50:58 -05:00
Labrys of Knossos
30a1789809 Update transmissionrpc to 0.11
Also updates:
- six-1.12.0
2018-12-16 11:50:58 -05:00
Labrys of Knossos
79011dbbc1 Update pyxdg to 0.26 2018-12-16 11:50:57 -05:00
Labrys of Knossos
41ccbfdede Update backports.functools-lru-cache to 1.5 2018-12-16 11:50:57 -05:00
Labrys of Knossos
49c9ea1350 Update futures to 3.2.0 2018-12-16 11:50:57 -05:00
Labrys of Knossos
8a897fed98 Add Python 2 specific requirements file 2018-12-16 11:50:46 -05:00
Labrys of Knossos
ad4ca05b64 Add pyxdg to requirements file 2018-12-16 11:42:06 -05:00
Labrys of Knossos
cd761c5ba9
Merge pull request #1427 from clinton-hall/fix/py3
Fix Github in Python 3
2018-12-16 09:35:29 -05:00
Labrys of Knossos
c9c16c230d Fix gh_api 2018-12-16 09:32:57 -05:00
Labrys of Knossos
3b289ba8e8 Fix str expected instead of bytes 2018-12-16 09:20:58 -05:00
Labrys of Knossos
563a6e1ecb
Merge pull request #1426 from clinton-hall/feature/Python3
Add Python 3 compatibility
2018-12-15 22:06:15 -05:00
Labrys of Knossos
8d3150cfc6 Fix fork detection in Python 3 2018-12-15 22:04:26 -05:00
Labrys of Knossos
959e2c317e Fix near operational error in upsert 2018-12-15 22:04:26 -05:00
Labrys of Knossos
84338c76c6 Fix TypeError: expected str instance, int found 2018-12-15 22:04:26 -05:00
Labrys of Knossos
8cd4b56891 Fix imports for Python 3 2018-12-15 22:04:26 -05:00
Labrys of Knossos
f5f6562fe9 Fix strings for Python 3
`basestring` not available in Python 3
`unicode` not available in Python 3
`str` expected instead of `bytes`
2018-12-15 22:04:26 -05:00
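These three fixes follow the usual 2/3 straddling pattern, visible below in the TorrentToMedia.py diff as the `text_type` fallback; the general shape:

    import sys

    if sys.version_info[0] >= 3:
        text_type = str                   # `unicode` does not exist in Python 3
        string_types = (str,)             # `basestring` does not exist in Python 3
    else:
        text_type = unicode               # noqa: F821 -- Python 2 only
        string_types = (basestring,)      # noqa: F821 -- Python 2 only

    def ensure_text(value, encoding='utf-8'):
        # `str` expected instead of `bytes`: decode at the boundary.
        if isinstance(value, bytes):
            return value.decode(encoding)
        return value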
Labrys of Knossos
4ee656f22c Fix next in Python 3
In Python 3 `obj.next` is renamed to `obj.__next__` and should be
called with the `next` builtin.
2018-12-15 22:04:25 -05:00
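In practice an iterator class defines `__next__`, keeps a `next` alias while Python 2 is still supported, and call sites use the builtin; for illustration:

    class TakeTwo(object):
        """Tiny 2/3-compatible iterator."""

        def __init__(self, items):
            self._it = iter(items)

        def __iter__(self):
            return self

        def __next__(self):      # Python 3 protocol name
            return next(self._it)

        next = __next__          # alias kept for Python 2

    first = next(TakeTwo('ab'))  # use the `next` builtin, not obj.next()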
Labrys of Knossos
943bdc9320 Fix dict usage in Python 3
`KeysView` does not support indexing
`dict_values` does not support operand type `+`
2018-12-15 22:04:25 -05:00
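In Python 3, `dict.keys()` and `dict.values()` return view objects rather than lists, so code that indexed or concatenated the results needs an explicit `list()`:

    d = {'a': 1, 'b': 2}

    first_key = list(d.keys())[0]     # a KeysView cannot be indexed
    merged = list(d.values()) + [3]   # dict_values does not support `+`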
Labrys of Knossos
f744e6ea97
Merge pull request #1418 from clinton-hall/libs/pywin32
Add pywin32 version 224
2018-12-15 22:03:59 -05:00
Labrys of Knossos
b0f0fed8f3 Update pywin32 to 224 2018-12-15 22:00:11 -05:00
Labrys of Knossos
ca1753341b
Merge pull request #1414 from clinton-hall/libs/beets
Update beets to 1.4.7
2018-12-15 21:58:38 -05:00
Labrys of Knossos
014852c683
Merge pull request #1408 from clinton-hall/fix/unvendor
Move vendored packages in `core` to `libs`
2018-12-15 21:58:23 -05:00
Labrys of Knossos
5744e4ab04
Merge pull request #1416 from clinton-hall/libs/subliminal
Update subliminal to 2.0.5
2018-12-15 21:57:26 -05:00
Labrys of Knossos
8b08774ab1
Merge pull request #1415 from clinton-hall/libs/guessit
Update guessit to 3.0.3
2018-12-15 21:56:44 -05:00
Labrys of Knossos
05c3de0f36 Merge branch 'nightly' into fix/unvendor 2018-12-15 16:19:55 -05:00
Labrys of Knossos
e854005ae1 Update beets to 1.4.7
Also updates:
- colorama-0.4.1
- jellyfish-0.6.1
- munkres-1.0.12
- musicbrainzngs-0.6
- mutagen-1.41.1
- pyyaml-3.13
- six-1.12.0
- unidecode-1.0.23
2018-12-15 16:09:52 -05:00
Labrys of Knossos
f3fcb47427 Update subliminal to 2.0.5
Also updates:
- appdirs-1.4.3
- babelfish-0.5.5
- beautifulsoup4-4.6.3
- certifi-2018.11.29
- chardet-3.0.4
- click-7.0
- decorator-4.3.0
- dogpile.cache-0.7.1
- enzyme-0.4.1
- guessit-3.0.3
- idna-2.8
- pbr-5.1.1
- pysrt-1.1.1
- python-dateutil-2.7.5
- pytz-2018.7
- rarfile-3.0
- rebulk-1.0.0
- requests-2.21.0
- six-1.12.0
- stevedore-1.30.0
- urllib3-1.24.1
2018-12-15 16:09:00 -05:00
Labrys of Knossos
2eb9d9dc7c Update guessit to 3.0.3
Also updates:
- babelfish-0.5.5
- python-dateutil-2.7.5
- rebulk-1.0.0
- six-1.12.0
2018-12-15 16:08:03 -05:00
Labrys of Knossos
05b0fb498f
Merge pull request #1425 from clinton-hall/libs/requirements
Libs/requirements
2018-12-15 16:05:52 -05:00
Labrys of Knossos
9e2ca807e4 Add .dist-info and .egg-info for libs to .gitignore 2018-12-15 16:01:58 -05:00
Labrys of Knossos
bda96e248e Add Windows-specific requirements file 2018-12-15 16:01:57 -05:00
Labrys of Knossos
39aeedc14a Add requirements file 2018-12-15 16:01:56 -05:00
Labrys of Knossos
8bda5e64cd
Merge pull request #1409 from clinton-hall/fix/imports
Clean up imports
2018-12-15 15:59:35 -05:00
Labrys of Knossos
76763e4b76
Merge pull request #1417 from clinton-hall/libs/jaraco
Update jaraco-windows to 3.9.2
2018-12-15 15:58:46 -05:00
Labrys of Knossos
0cfdfe34c7
Merge pull request #1424 from clinton-hall/libs/pkg_resources
Update pkg_resources to package from setuptools 40.6.3
2018-12-15 15:57:10 -05:00
Labrys of Knossos
7b7313c1d5 Update pkg_resources to package from setuptools 40.6.3 2018-12-15 15:12:49 -05:00
Labrys of Knossos
8d43b8ea39 Update jaraco-windows to 3.9.2
Also updates:
- importlib-metadata-0.7
- jaraco-windows
- jaraco.classes-1.5
- jaraco.collections-1.6.0
- jaraco.functools-1.20
- jaraco.structures-1.1.2
- jaraco.text-1.10.1
- jaraco.ui-1.6
- more-itertools-4.3.0
- path.py-11.5.0
- six-1.12.0
2018-12-15 15:06:37 -05:00
Labrys of Knossos
cd28996dad Remove superfluous __all__ 2018-12-15 15:01:06 -05:00
Labrys of Knossos
1fdd9c1017 Fix relative import 2018-12-15 15:01:06 -05:00
Labrys of Knossos
c5a3137627 Remove unused imports 2018-12-15 15:01:06 -05:00
Labrys of Knossos
5bc789bca3 Optimize imports 2018-12-15 15:01:06 -05:00
Labrys of Knossos
aa769627bd
Merge pull request #1423 from clinton-hall/py3/encoding
Fix encoding for Python 3
2018-12-15 14:59:05 -05:00
Labrys of Knossos
417a9e5e63 Decode output during versionCheck 2018-12-15 14:56:40 -05:00
Labrys of Knossos
02ae99b117 Fix sys.setdefaultencoding in Python 3
This does away with the setdefaultencoding hack in Python 3
2018-12-15 14:56:36 -05:00
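The hack in question is the well-known Python 2 idiom below; Python 3 removes `sys.setdefaultencoding` outright (`str` is already Unicode), so the code now simply skips it there:

    import sys

    if sys.version_info[0] == 2:
        # Python 2 only: reload restores the function that site.py deletes.
        reload(sys)                       # noqa: F821 -- builtin in Python 2
        sys.setdefaultencoding('utf8')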
Labrys of Knossos
6992a9c66e
Merge pull request #1422 from clinton-hall/fix/pywin32
Final fix for pywin32
2018-12-15 14:52:51 -05:00
Labrys of Knossos
a2ce48c52e Final fix for pywin32 2018-12-15 14:52:02 -05:00
Labrys of Knossos
dc6fb6d54c
Merge pull request #1421 from clinton-hall/fix/pywin32
Proper fix for pywin32 imports
2018-12-15 14:32:43 -05:00
Labrys of Knossos
0a37651ae1 Fix pywin32 imports 2018-12-15 14:27:17 -05:00
Labrys of Knossos
c84cc98ba4
Merge pull request #1420 from clinton-hall/fix/pywin32
Fix pywin32 imports
2018-12-15 14:18:11 -05:00
Labrys of Knossos
345ff1de93 Fix pywin32 imports 2018-12-15 14:17:10 -05:00
Labrys of Knossos
3f5f97877a
Merge pull request #1419 from clinton-hall/fix/configobj
Fix configobj import
2018-12-15 14:16:42 -05:00
Labrys of Knossos
f210693102 Fix configobj import 2018-12-15 14:14:28 -05:00
Labrys of Knossos
fc16e7c374 Update rencode to 1.0.6 2018-12-15 13:43:34 -05:00
Labrys of Knossos
9fb4cc1986 Fix utorrent client import 2018-12-15 13:43:33 -05:00
Labrys of Knossos
d17ceb11f9 Move vendored package rencode from synchronousdeluge to libs 2018-12-15 13:43:33 -05:00
Labrys of Knossos
6bba210fd0 Fix imap is map in python 3 2018-12-15 13:40:07 -05:00
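`itertools.imap` was dropped in Python 3 because the builtin `map` is already lazy there; the usual shim:

    try:
        from itertools import imap    # Python 2: lazy map
    except ImportError:
        imap = map                    # Python 3: builtin map is lazy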
Labrys of Knossos
22cbec26e0 Fix linktastic import 2018-12-15 13:23:07 -05:00
Labrys of Knossos
d8f7c4eb7b Fix imports for Python 3 2018-12-15 13:20:01 -05:00
Labrys of Knossos
87d6378768 Remove superfluous try..except 2018-12-15 13:19:57 -05:00
Labrys of Knossos
e3d282d0d4 Remove superfluous __all__ 2018-12-15 12:25:27 -05:00
Labrys of Knossos
7d6ef5b7c6 Remove unused imports 2018-12-15 12:25:15 -05:00
Labrys of Knossos
ec95829a37 Fix relative import 2018-12-15 12:25:05 -05:00
Labrys of Knossos
f26f017cc8 Merge remote-tracking branch 'origin/libs/transmission' into fix/unvendor
# Conflicts:
#	libs/transmissionrpc/__init__.py
2018-12-15 12:23:48 -05:00
Labrys of Knossos
e7eb7c8085 Merge remote-tracking branch 'origin/libs/qbittorrent' into fix/unvendor 2018-12-15 12:21:48 -05:00
Labrys of Knossos
9f6e2f0978 Merge remote-tracking branch 'origin/libs/linktastic' into fix/unvendor 2018-12-15 12:07:24 -05:00
Labrys of Knossos
3ed574a49f Move vendored packages in core to libs
Remove `core` references from vendored packages in `core` and optimize imports
2018-12-15 03:13:00 -05:00
Labrys of Knossos
27a294bfdc
Merge pull request #1413 from clinton-hall/libs/configobj
Update configobj to 5.0.6
2018-12-15 03:05:38 -05:00
Labrys of Knossos
8c8f58eae8
Merge pull request #1412 from clinton-hall/libs/babelfish
Update babelfish to 0.5.5
2018-12-15 02:52:17 -05:00
Labrys of Knossos
93c6c24225
Merge pull request #1411 from clinton-hall/libs/requests
Update requests to 2.21.0
2018-12-15 02:48:57 -05:00
Labrys of Knossos
c7ecec3e7c
Merge pull request #1410 from clinton-hall/libs/six
Update six to 1.12.0
2018-12-15 02:46:02 -05:00
Labrys of Knossos
ba758c1551 Update configobj to 5.0.6
Also updates:
- six-1.12.0
2018-12-15 01:58:50 -05:00
Labrys of Knossos
2e674ae169 Update transmissionrpc to 0.11
Also updates:
- six-1.12.0
2018-12-15 01:57:39 -05:00
Labrys of Knossos
ee7c75c994 Update python-qBittorrent to 0.3.1
Also updates:
- certifi-2018.11.29
- chardet-3.0.4
- idna-2.8
- requests-2.21.0
- urllib3-1.24.1
2018-12-15 01:56:43 -05:00
Labrys of Knossos
72226bffd8 Update requests to 2.21.0
Also updates:
- certifi-2018.11.29
- chardet-3.0.4
- idna-2.8
- urllib3-1.24.1
2018-12-15 01:48:06 -05:00
Labrys of Knossos
07629f9c47 Update six to 1.12.0 2018-12-15 01:35:09 -05:00
Labrys of Knossos
992a73d3f7 Update linktastic to 0.1.0 2018-12-15 01:32:46 -05:00
Labrys of Knossos
b4dd6afa41 Update babelfish to 0.5.5 2018-12-15 01:26:59 -05:00
Labrys of Knossos
367f7f3a61
Merge pull request #1407 from clinton-hall/fix/six
Remove vendored lib `six` from `core/transmissionrpc`
2018-12-14 19:09:31 -05:00
Labrys of Knossos
ef192b2c2c Remove vendored lib six from core/transmissionrpc 2018-12-14 19:05:47 -05:00
Labrys of Knossos
98d495503e
Merge pull request #1406 from clinton-hall/py3/print
Fix print statements
2018-12-14 18:51:49 -05:00
Labrys of Knossos
a3281d888d Fix print statements 2018-12-14 16:47:45 -05:00
Clinton Hall
c81d8bc7a5
Merge pull request #1403 from nikagl/nightly
Start vbs directly from extractor and use args instead of Wscript.Arguments
2018-12-08 13:51:30 +13:00
Nika Gerson Lohman
c2a591c8bb
Use args instead of Wscript.Arguments 2018-12-07 22:46:31 +01:00
Nika Gerson Lohman
1817ac349d
Delete invisible.cmd 2018-12-07 22:45:22 +01:00
Nika Gerson Lohman
5272c8b31d
Start vbs directly from extractor 2018-12-07 22:45:01 +01:00
Clinton Hall
23c4edb50c
Merge pull request #1401 from nikagl/patch-1
Fix execution of extraction
2018-12-07 11:55:58 +13:00
Nika Gerson Lohman
9769596d24
Fix execution of extraction
Change core.SHOWEXTRACT to string in cmd_7zip
2018-12-06 14:35:32 +01:00
clinton-hall
de869391b1 remove surplus debug, fix handling of None Password file, and fix invisible windows extraction.
added option for windows extraction debugging. Fixes #1399 #759
2018-12-06 19:24:04 +13:00
clinton-hall
e2accb9ec2 added debugging to extractor. #1399 2018-12-05 21:28:59 +13:00
clinton-hall
1dfa9092ef Merge branch 'nightly' of https://github.com/clinton-hall/nzbToMedia into nightly 2018-12-05 20:16:00 +13:00
clinton-hall
20e821d0cc remove .r00 extraction. Fixes error introduced at #1399 2018-12-05 20:14:29 +13:00
Clinton Hall
580259407b
Merge pull request #1400 from nikagl/nightly
Update windows extraction method to allow return values
2018-12-05 19:19:29 +13:00
Nika Gerson Lohman
9aed1c29f5
Update extractor.py for correct return code 2018-12-04 11:52:06 +01:00
Nika Gerson Lohman
13f503e796
Update invisible.vbs to return exit code of 7zip 2018-12-04 11:50:06 +01:00
Nika Gerson Lohman
933732731f
Update invisible.cmd to return errorlevel 2018-12-04 11:48:36 +01:00
Nika Gerson Lohman
49eabb2ede
Updated 7z x86 files to 18.05 version 2018-12-04 11:47:24 +01:00
Nika Gerson Lohman
2cd1b62f69
Updated 7z x64 files to 18.05 version 2018-12-04 11:46:22 +01:00
Nika Gerson Lohman
300363946d
Merge pull request #2 from clinton-hall/nightly
Merge Nightly
2018-12-04 11:43:35 +01:00
clinton-hall
7cba0fa16b add extraction of .r00 file types. Fixes #1399 2018-12-03 23:07:34 +13:00
clinton-hall
e306935c28 add test for "None" releases. Fixes #1396 2018-11-27 20:37:06 +13:00
clinton-hall
de442852c7 don't check failed media when no_extract_failed set. Fixes #1091 2018-11-10 15:28:12 +13:00
2521 changed files with 296350 additions and 84673 deletions

.bumpversion.cfg (new file, 12 additions)

@@ -0,0 +1,12 @@
[bumpversion]
current_version = 12.1.13
commit = True
tag = False
[bumpversion:file:setup.py]
search = version='{current_version}'
replace = version='{new_version}'
[bumpversion:file:core/__init__.py]
search = __version__ = '{current_version}'
replace = __version__ = '{new_version}'
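With this configuration, a run such as `bumpversion patch` rewrites the version string in setup.py and core/__init__.py via the search/replace pairs above and commits the result (commit = True) without creating a tag (tag = False).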

.editorconfig (new file, 13 additions)

@@ -0,0 +1,13 @@
# see http://editorconfig.org
root = true
[*]
end_of_line = lf
trim_trailing_whitespace = true
insert_final_newline = true
indent_style = space
indent_size = 4
charset = utf-8
[*.{bat,cmd,ps1}]
end_of_line = crlf

.github/CODE_OF_CONDUCT.md (new file, 76 additions)

@@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at fock_wulf@hotmail.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

.github/CONTRIBUTING.md (new file, 14 additions)

@@ -0,0 +1,14 @@
# Contributing
When contributing to this repository, please first check the issues list, current pull requests, and FAQ pages.
While it is preferred that all interactions be made through GitHub, the author can be contacted directly at fock_wulf@hotmail.com
Please note we have a code of conduct; please follow it in all your interactions with the project.
## Pull Request Process
1. Please base all pull requests on the current nightly branch.
2. Include a description to explain what is achieved with a pull request.
3. Link any relevant issues that are closed or impacted by the pull request.
4. Please update the FAQ to reflect any new parameters, changed behaviour, or suggested configurations relevant to the changes.

.github/ISSUE_TEMPLATE.md (new file, 23 additions)

@@ -0,0 +1,23 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**Technical Specs**
1. Running on (Windows, Linux, NAS Model etc) '....'
2. Python version '....'
3. Download Client (NZBGet, SABnzbd, Transmission) '....'
4. Intended Media Management (SickChill, CouchPotato, Radarr, Sonarr) '....'
**Expected behavior**
A clear and concise description of what you expected to happen.
**Log**
Please provide an extract, or full debug log that indicates the issue.

.github/PULL_REQUEST_TEMPLATE.md (new file, 28 additions)

@@ -0,0 +1,28 @@
# Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
## Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
**Test Configuration**:
# Checklist:
- [ ] I have based this change on the nightly branch
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes

README.md

@@ -1,11 +1,13 @@
nzbToMedia
================
==========
Provides an [efficient](https://github.com/clinton-hall/nzbToMedia/wiki/Efficient-on-demand-post-processing) way to handle postprocessing for [CouchPotatoServer](https://couchpota.to/ "CouchPotatoServer") and [SickBeard](http://sickbeard.com/ "SickBeard") (and its [forks](https://github.com/clinton-hall/nzbToMedia/wiki/Failed-Download-Handling-%28FDH%29#sick-beard-and-its-forks))
when using one of the popular NZB download clients like [SABnzbd](http://sabnzbd.org/ "SABnzbd") and [NZBGet](http://nzbget.sourceforge.net/ "NZBGet") on low performance systems like a NAS.
when using one of the popular NZB download clients like [SABnzbd](http://sabnzbd.org/ "SABnzbd") and [NZBGet](https://nzbget.com/ "NZBGet") on low performance systems like a NAS.
This script is based on sabToSickBeard (written by Nic Wolfe and supplied with SickBeard), with the support for NZBGet being added by [thorli](https://github.com/thorli "thorli") and further contributions by [schumi2004](https://github.com/schumi2004 "schumi2004") and [hugbug](https://sourceforge.net/apps/phpbb/nzbget/memberlist.php?mode=viewprofile&u=67 "hugbug").
Torrent support added by [jkaberg](https://github.com/jkaberg "jkaberg") and [berkona](https://github.com/berkona "berkona")
Corrupt video checking, auto SickBeard fork determination and a whole lot of code improvement was done by [echel0n](https://github.com/echel0n "echel0n")
Python3 compatibility and a much cleaner code base have been contributed by [Labrys of Knossos](https://github.com/labrys "Labrys of Knossos")
Introduction
------------
@@ -17,7 +19,7 @@ Failed download handling for SickBeard is available by using Tolstyak's fork [Si
To use this feature, in autoProcessTV.cfg set the parameter "fork=failed". Default is "fork=default" and will work with the standard version of SickBeard and just ignores failed downloads.
Development of Tolstyak's fork ended in 2013, but newer forks exist with significant feature updates such as [Mr-Orange TPB](https://github.com/coach0742/Sick-Beard) (discontinued), [SickRageTV](https://github.com/SiCKRAGETV/SickRage) and [SickRage](https://github.com/SickRage/SickRage) (active). See [SickBeard Forks](https://github.com/clinton-hall/nzbToMedia/wiki/Failed-Download-Handling-%28FDH%29#sick-beard-and-its-forks "SickBeard Forks") for a list of known forks.
Full support is provided for [SickRageTV](https://github.com/SiCKRAGETV/SickRage), [SickRage](https://github.com/SickRage/SickRage), and [SickGear](https://github.com/SickGear/SickGear).
Full support is provided for [SickChill](https://github.com/SickChill/SickChill), [SiCKRAGE](https://github.com/SiCKRAGE/SiCKRAGE), [Medusa](https://github.com/pymedusa/Medusa), and [SickGear](https://github.com/SickGear/SickGear).
Torrent support has been added with the assistance of jkaberg and berkona. Currently supports uTorrent, Transmission, Deluge and possibly more.
To enable Torrent extraction, on Windows, you need to install [7-zip](http://www.7-zip.org/ "7-zip") or on *nix you need to install the following packages/commands.
@@ -30,7 +32,7 @@ Installation instructions for this are available in the [wiki](https://github.co
Contribution
------------
We who have developed nzbToMedia believe in the openness of open-source, and as such we hope that any modifications will lead back to the [orignal repo](https://github.com/clinton-hall/nzbToMedia "orignal repo") via pull requests.
We who have developed nzbToMedia believe in the openness of open-source, and as such we hope that any modifications will lead back to the [original repo](https://github.com/clinton-hall/nzbToMedia "orignal repo") via pull requests.
Founder: [clinton-hall](https://github.com/clinton-hall "clinton-hall")
@@ -50,9 +52,11 @@ Sorry for any inconvenience caused here.
### General
1. Install python 2.7.
1. Install Python
2. Clone or copy all files into a directory wherever you want to keep them (eg. /scripts/ in the home directory of your download client)
1. Install `pywin32`
1. Clone or copy all files into a directory wherever you want to keep them (eg. /scripts/ in the home directory of your download client)
and change the permission accordingly so the download client can access these files.
`git clone git://github.com/clinton-hall/nzbToMedia.git`

.gitignore (8 changed lines)

@@ -1,7 +1,7 @@
*.cfg
!.bumpversion.cfg
*.cfg.old
*.pyc
*.pyo
*.py[cod]
*.log
*.pid
*.db
@@ -9,3 +9,7 @@
/userscripts/
/logs/
/.idea/
/venv/
*.dist-info
*.egg-info
/.vscode

TorrentToMedia.py

@@ -1,181 +1,184 @@
#!/usr/bin/env python2
#!/usr/bin/env python
# coding=utf-8
import cleanup
cleanup.clean('core', 'libs')
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import datetime
import os
import sys
import eol
import cleanup
eol.check()
cleanup.clean(cleanup.FOLDER_STRUCTURE)
import core
from core import logger, main_db
from core.auto_process import comics, games, movies, music, tv, books
from core.auto_process.common import ProcessResult
from core.plugins.plex import plex_update
from core.user_scripts import external_script
from core.utils import char_replace, convert_to_ascii, replace_links
from libs.six import text_type
from core import logger, nzbToMediaDB
from core.nzbToMediaUtil import convert_to_ascii, CharReplace, plex_update, replace_links
from core.nzbToMediaUserScript import external_script
try:
text_type = unicode
except NameError:
text_type = str
def processTorrent(inputDirectory, inputName, inputCategory, inputHash, inputID, clientAgent):
def process_torrent(input_directory, input_name, input_category, input_hash, input_id, client_agent):
status = 1 # 1 = failed | 0 = success
root = 0
foundFile = 0
found_file = 0
if clientAgent != 'manual' and not core.DOWNLOADINFO:
logger.debug('Adding TORRENT download info for directory {0} to database'.format(inputDirectory))
if client_agent != 'manual' and not core.DOWNLOAD_INFO:
logger.debug('Adding TORRENT download info for directory {0} to database'.format(input_directory))
myDB = nzbToMediaDB.DBConnection()
my_db = main_db.DBConnection()
inputDirectory1 = inputDirectory
inputName1 = inputName
input_directory1 = input_directory
input_name1 = input_name
try:
encoded, inputDirectory1 = CharReplace(inputDirectory)
encoded, inputName1 = CharReplace(inputName)
except:
encoded, input_directory1 = char_replace(input_directory)
encoded, input_name1 = char_replace(input_name)
except Exception:
pass
controlValueDict = {"input_directory": text_type(inputDirectory1)}
newValueDict = {"input_name": text_type(inputName1),
"input_hash": text_type(inputHash),
"input_id": text_type(inputID),
"client_agent": text_type(clientAgent),
"status": 0,
"last_update": datetime.date.today().toordinal()
}
myDB.upsert("downloads", newValueDict, controlValueDict)
control_value_dict = {'input_directory': text_type(input_directory1)}
new_value_dict = {
'input_name': text_type(input_name1),
'input_hash': text_type(input_hash),
'input_id': text_type(input_id),
'client_agent': text_type(client_agent),
'status': 0,
'last_update': datetime.date.today().toordinal(),
}
my_db.upsert('downloads', new_value_dict, control_value_dict)
logger.debug("Received Directory: {0} | Name: {1} | Category: {2}".format(inputDirectory, inputName, inputCategory))
logger.debug('Received Directory: {0} | Name: {1} | Category: {2}'.format(input_directory, input_name, input_category))
# Confirm the category by parsing directory structure
inputDirectory, inputName, inputCategory, root = core.category_search(inputDirectory, inputName, inputCategory,
root, core.CATEGORIES)
if inputCategory == "":
inputCategory = "UNCAT"
input_directory, input_name, input_category, root = core.category_search(input_directory, input_name, input_category,
root, core.CATEGORIES)
if input_category == '':
input_category = 'UNCAT'
usercat = inputCategory
try:
inputName = inputName.encode(core.SYS_ENCODING)
except UnicodeError:
pass
try:
inputDirectory = inputDirectory.encode(core.SYS_ENCODING)
except UnicodeError:
pass
usercat = input_category
logger.debug("Determined Directory: {0} | Name: {1} | Category: {2}".format
(inputDirectory, inputName, inputCategory))
logger.debug('Determined Directory: {0} | Name: {1} | Category: {2}'.format
(input_directory, input_name, input_category))
# auto-detect section
section = core.CFG.findsection(inputCategory).isenabled()
if section is None:
section = core.CFG.findsection("ALL").isenabled()
if section is None:
logger.error('Category:[{0}] is not defined or is not enabled. '
'Please rename it or ensure it is enabled for the appropriate section '
'in your autoProcessMedia.cfg and try again.'.format
(inputCategory))
return [-1, ""]
section = core.CFG.findsection(input_category).isenabled()
if section is None: #Check for user_scripts for 'ALL' and 'UNCAT'
if usercat in core.CATEGORIES:
section = core.CFG.findsection('ALL').isenabled()
usercat = 'ALL'
else:
usercat = "ALL"
section = core.CFG.findsection('UNCAT').isenabled()
usercat = 'UNCAT'
if section is None: # We haven't found any categories to process.
logger.error('Category:[{0}] is not defined or is not enabled. '
'Please rename it or ensure it is enabled for the appropriate section '
'in your autoProcessMedia.cfg and try again.'.format
(input_category))
return [-1, '']
if len(section) > 1:
logger.error('Category:[{0}] is not unique, {1} are using it. '
'Please rename it or disable all other sections using the same category name '
'in your autoProcessMedia.cfg and try again.'.format
(usercat, section.keys()))
return [-1, ""]
return [-1, '']
if section:
sectionName = section.keys()[0]
logger.info('Auto-detected SECTION:{0}'.format(sectionName))
section_name = section.keys()[0]
logger.info('Auto-detected SECTION:{0}'.format(section_name))
else:
logger.error("Unable to locate a section with subsection:{0} "
"enabled in your autoProcessMedia.cfg, exiting!".format
(inputCategory))
return [-1, ""]
logger.error('Unable to locate a section with subsection:{0} '
'enabled in your autoProcessMedia.cfg, exiting!'.format
(input_category))
return [-1, '']
section = dict(section[sectionName][usercat]) # Type cast to dict() to allow effective usage of .get()
section = dict(section[section_name][usercat]) # Type cast to dict() to allow effective usage of .get()
Torrent_NoLink = int(section.get("Torrent_NoLink", 0))
keep_archive = int(section.get("keep_archive", 0))
torrent_no_link = int(section.get('Torrent_NoLink', 0))
keep_archive = int(section.get('keep_archive', 0))
extract = int(section.get('extract', 0))
extensions = section.get('user_script_mediaExtensions', "").lower().split(',')
uniquePath = int(section.get("unique_path", 1))
extensions = section.get('user_script_mediaExtensions', '')
unique_path = int(section.get('unique_path', 1))
if clientAgent != 'manual':
core.pause_torrent(clientAgent, inputHash, inputID, inputName)
if client_agent != 'manual':
core.pause_torrent(client_agent, input_hash, input_id, input_name)
# In case input is not directory, make sure to create one.
# This way Processing is isolated.
if not os.path.isdir(os.path.join(inputDirectory, inputName)):
basename = os.path.basename(inputDirectory)
basename = core.sanitizeName(inputName) \
if inputName == basename else os.path.splitext(core.sanitizeName(inputName))[0]
outputDestination = os.path.join(core.OUTPUTDIRECTORY, inputCategory, basename)
elif uniquePath:
outputDestination = os.path.normpath(
core.os.path.join(core.OUTPUTDIRECTORY, inputCategory, core.sanitizeName(inputName).replace(" ",".")))
if not os.path.isdir(os.path.join(input_directory, input_name)):
basename = os.path.basename(input_directory)
basename = core.sanitize_name(input_name) \
if input_name == basename else os.path.splitext(core.sanitize_name(input_name))[0]
output_destination = os.path.join(core.OUTPUT_DIRECTORY, input_category, basename)
elif unique_path:
output_destination = os.path.normpath(
core.os.path.join(core.OUTPUT_DIRECTORY, input_category, core.sanitize_name(input_name).replace(' ', '.')))
else:
outputDestination = os.path.normpath(
core.os.path.join(core.OUTPUTDIRECTORY, inputCategory))
try:
outputDestination = outputDestination.encode(core.SYS_ENCODING)
except UnicodeError:
pass
output_destination = os.path.normpath(
core.os.path.join(core.OUTPUT_DIRECTORY, input_category))
if outputDestination in inputDirectory:
outputDestination = inputDirectory
if output_destination in input_directory:
output_destination = input_directory
logger.info("Output directory set to: {0}".format(outputDestination))
logger.info('Output directory set to: {0}'.format(output_destination))
if core.SAFE_MODE and outputDestination == core.TORRENT_DEFAULTDIR:
if core.SAFE_MODE and output_destination == core.TORRENT_DEFAULT_DIRECTORY:
logger.error('The output directory:[{0}] is the Download Directory. '
'Edit outputDirectory in autoProcessMedia.cfg. Exiting'.format
(inputDirectory))
return [-1, ""]
(input_directory))
return [-1, '']
logger.debug("Scanning files in directory: {0}".format(inputDirectory))
logger.debug('Scanning files in directory: {0}'.format(input_directory))
if sectionName in ['HeadPhones', 'Lidarr']:
if section_name in ['HeadPhones', 'Lidarr']:
core.NOFLATTEN.extend(
inputCategory) # Make sure we preserve folder structure for HeadPhones.
input_category) # Make sure we preserve folder structure for HeadPhones.
now = datetime.datetime.now()
if extract == 1:
inputFiles = core.listMediaFiles(inputDirectory, archives=False, other=True, otherext=extensions)
input_files = core.list_media_files(input_directory, archives=False, other=True, otherext=extensions)
else:
inputFiles = core.listMediaFiles(inputDirectory, other=True, otherext=extensions)
if len(inputFiles) == 0 and os.path.isfile(inputDirectory):
inputFiles = [inputDirectory]
logger.debug("Found 1 file to process: {0}".format(inputDirectory))
input_files = core.list_media_files(input_directory, other=True, otherext=extensions)
if len(input_files) == 0 and os.path.isfile(input_directory):
input_files = [input_directory]
logger.debug('Found 1 file to process: {0}'.format(input_directory))
else:
logger.debug("Found {0} files in {1}".format(len(inputFiles), inputDirectory))
for inputFile in inputFiles:
filePath = os.path.dirname(inputFile)
fileName, fileExt = os.path.splitext(os.path.basename(inputFile))
fullFileName = os.path.basename(inputFile)
logger.debug('Found {0} files in {1}'.format(len(input_files), input_directory))
for inputFile in input_files:
file_path = os.path.dirname(inputFile)
file_name, file_ext = os.path.splitext(os.path.basename(inputFile))
full_file_name = os.path.basename(inputFile)
targetFile = core.os.path.join(outputDestination, fullFileName)
if inputCategory in core.NOFLATTEN:
if not os.path.basename(filePath) in outputDestination:
targetFile = core.os.path.join(
core.os.path.join(outputDestination, os.path.basename(filePath)), fullFileName)
logger.debug("Setting outputDestination to {0} to preserve folder structure".format
(os.path.dirname(targetFile)))
try:
targetFile = targetFile.encode(core.SYS_ENCODING)
except UnicodeError:
pass
target_file = core.os.path.join(output_destination, full_file_name)
if input_category in core.NOFLATTEN:
if not os.path.basename(file_path) in output_destination:
target_file = core.os.path.join(
core.os.path.join(output_destination, os.path.basename(file_path)), full_file_name)
logger.debug('Setting outputDestination to {0} to preserve folder structure'.format
(os.path.dirname(target_file)))
if root == 1:
if not foundFile:
logger.debug("Looking for {0} in: {1}".format(inputName, inputFile))
if any([core.sanitizeName(inputName) in core.sanitizeName(inputFile),
core.sanitizeName(fileName) in core.sanitizeName(inputName)]):
foundFile = True
logger.debug("Found file {0} that matches Torrent Name {1}".format
(fullFileName, inputName))
if not found_file:
logger.debug('Looking for {0} in: {1}'.format(input_name, inputFile))
if any([core.sanitize_name(input_name) in core.sanitize_name(inputFile),
core.sanitize_name(file_name) in core.sanitize_name(input_name)]):
found_file = True
logger.debug('Found file {0} that matches Torrent Name {1}'.format
(full_file_name, input_name))
else:
continue
@@ -183,106 +186,105 @@ def processTorrent(inputDirectory, inputName, inputCategory, inputHash, inputID,
mtime_lapse = now - datetime.datetime.fromtimestamp(os.path.getmtime(inputFile))
ctime_lapse = now - datetime.datetime.fromtimestamp(os.path.getctime(inputFile))
if not foundFile:
logger.debug("Looking for files with modified/created dates less than 5 minutes old.")
if not found_file:
logger.debug('Looking for files with modified/created dates less than 5 minutes old.')
if (mtime_lapse < datetime.timedelta(minutes=5)) or (ctime_lapse < datetime.timedelta(minutes=5)):
foundFile = True
logger.debug("Found file {0} with date modified/created less than 5 minutes ago.".format
(fullFileName))
found_file = True
logger.debug('Found file {0} with date modified/created less than 5 minutes ago.'.format
(full_file_name))
else:
continue # This file has not been recently moved or created, skip it
if Torrent_NoLink == 0:
if torrent_no_link == 0:
try:
core.copy_link(inputFile, targetFile, core.USELINK)
core.rmReadOnly(targetFile)
except:
logger.error("Failed to link: {0} to {1}".format(inputFile, targetFile))
core.copy_link(inputFile, target_file, core.USE_LINK)
core.remove_read_only(target_file)
except Exception:
logger.error('Failed to link: {0} to {1}'.format(inputFile, target_file))
inputName, outputDestination = convert_to_ascii(inputName, outputDestination)
input_name, output_destination = convert_to_ascii(input_name, output_destination)
if extract == 1:
logger.debug('Checking for archives to extract in directory: {0}'.format(inputDirectory))
core.extractFiles(inputDirectory, outputDestination, keep_archive)
logger.debug('Checking for archives to extract in directory: {0}'.format(input_directory))
core.extract_files(input_directory, output_destination, keep_archive)
if inputCategory not in core.NOFLATTEN:
if input_category not in core.NOFLATTEN:
# don't flatten hp in case multi cd albums, and we need to copy this back later.
core.flatten(outputDestination)
core.flatten(output_destination)
# Now check if video files exist in destination:
if sectionName in ["SickBeard", "NzbDrone", "Sonarr", "CouchPotato", "Radarr"]:
numVideos = len(
core.listMediaFiles(outputDestination, media=True, audio=False, meta=False, archives=False))
if numVideos > 0:
logger.info("Found {0} media files in {1}".format(numVideos, outputDestination))
if section_name in ['SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr', 'CouchPotato', 'Radarr', 'Watcher3']:
num_videos = len(
core.list_media_files(output_destination, media=True, audio=False, meta=False, archives=False))
if num_videos > 0:
logger.info('Found {0} media files in {1}'.format(num_videos, output_destination))
status = 0
elif extract != 1:
logger.info("Found no media files in {0}. Sending to {1} to process".format(outputDestination, sectionName))
logger.info('Found no media files in {0}. Sending to {1} to process'.format(output_destination, section_name))
status = 0
else:
logger.warning("Found no media files in {0}".format(outputDestination))
logger.warning('Found no media files in {0}'.format(output_destination))
# Only these sections can handling failed downloads
# so make sure everything else gets through without the check for failed
if sectionName not in ['CouchPotato', 'Radarr', 'SickBeard', 'NzbDrone', 'Sonarr']:
if section_name not in ['CouchPotato', 'Radarr', 'SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr', 'Watcher3']:
status = 0
logger.info("Calling {0}:{1} to post-process:{2}".format(sectionName, usercat, inputName))
logger.info('Calling {0}:{1} to post-process:{2}'.format(section_name, usercat, input_name))
if core.TORRENT_CHMOD_DIRECTORY:
core.rchmod(outputDestination, core.TORRENT_CHMOD_DIRECTORY)
core.rchmod(output_destination, core.TORRENT_CHMOD_DIRECTORY)
result = [0, ""]
if sectionName == 'UserScript':
result = external_script(outputDestination, inputName, inputCategory, section)
result = ProcessResult(
message='',
status_code=0,
)
if section_name == 'UserScript':
result = external_script(output_destination, input_name, input_category, section)
elif section_name in ['CouchPotato', 'Radarr', 'Watcher3']:
result = movies.process(section_name, output_destination, input_name, status, client_agent, input_hash, input_category)
elif section_name in ['SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr']:
if input_hash:
input_hash = input_hash.upper()
result = tv.process(section_name, output_destination, input_name, status, client_agent, input_hash, input_category)
elif section_name in ['HeadPhones', 'Lidarr']:
result = music.process(section_name, output_destination, input_name, status, client_agent, input_category)
elif section_name == 'Mylar':
result = comics.process(section_name, output_destination, input_name, status, client_agent, input_category)
elif section_name == 'Gamez':
result = games.process(section_name, output_destination, input_name, status, client_agent, input_category)
elif section_name == 'LazyLibrarian':
result = books.process(section_name, output_destination, input_name, status, client_agent, input_category)
elif sectionName in ['CouchPotato', 'Radarr']:
result = core.autoProcessMovie().process(sectionName, outputDestination, inputName,
status, clientAgent, inputHash, inputCategory)
elif sectionName in ['SickBeard', 'NzbDrone', 'Sonarr']:
if inputHash:
inputHash = inputHash.upper()
result = core.autoProcessTV().processEpisode(sectionName, outputDestination, inputName,
status, clientAgent, inputHash, inputCategory)
elif sectionName in ['HeadPhones', 'Lidarr']:
result = core.autoProcessMusic().process(sectionName, outputDestination, inputName,
status, clientAgent, inputCategory)
elif sectionName == 'Mylar':
result = core.autoProcessComics().processEpisode(sectionName, outputDestination, inputName,
status, clientAgent, inputCategory)
elif sectionName == 'Gamez':
result = core.autoProcessGames().process(sectionName, outputDestination, inputName,
status, clientAgent, inputCategory)
plex_update(input_category)
plex_update(inputCategory)
if result[0] != 0:
if result.status_code != 0:
if not core.TORRENT_RESUME_ON_FAILURE:
logger.error("A problem was reported in the autoProcess* script. "
"Torrent won't resume seeding (settings)")
elif clientAgent != 'manual':
logger.error("A problem was reported in the autoProcess* script. "
"If torrent was paused we will resume seeding")
core.resume_torrent(clientAgent, inputHash, inputID, inputName)
logger.error('A problem was reported in the autoProcess* script. '
'Torrent won\'t resume seeding (settings)')
elif client_agent != 'manual':
logger.error('A problem was reported in the autoProcess* script. '
'If torrent was paused we will resume seeding')
core.resume_torrent(client_agent, input_hash, input_id, input_name)
else:
if clientAgent != 'manual':
if client_agent != 'manual':
# update download status in our DB
core.update_downloadInfoStatus(inputName, 1)
core.update_download_info_status(input_name, 1)
# remove torrent
if core.USELINK == 'move-sym' and not core.DELETE_ORIGINAL == 1:
logger.debug('Checking for sym-links to re-direct in: {0}'.format(inputDirectory))
for dirpath, dirs, files in os.walk(inputDirectory):
if core.USE_LINK == 'move-sym' and not core.DELETE_ORIGINAL == 1:
logger.debug('Checking for sym-links to re-direct in: {0}'.format(input_directory))
for dirpath, _, files in os.walk(input_directory):
for file in files:
logger.debug('Checking symlink: {0}'.format(os.path.join(dirpath, file)))
replace_links(os.path.join(dirpath, file))
core.remove_torrent(client_agent, input_hash, input_id, input_name)
if section_name != 'UserScript':
# for user script, we assume this is cleaned by the script or option USER_SCRIPT_CLEAN
# cleanup our processing folders of any misc unwanted files and empty directories
core.clean_dir(output_destination, section_name, input_category)
return result
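For reference while reading the dispatch above: a minimal sketch of the ProcessResult container it returns. The real class ships with nzbToMedia's core package; this namedtuple stand-in is an assumption, for illustration only.
# Hypothetical stand-in for core's ProcessResult (illustrative, not the real class).
from collections import namedtuple

ProcessResult = namedtuple('ProcessResult', ['message', 'status_code'])

result = ProcessResult(message='', status_code=0)
if result.status_code != 0:  # any non-zero status_code marks a failed run
    print('post-processing failed:', result.message)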
@ -292,82 +294,79 @@ def main(args):
core.initialize()
# client_agent for Torrents
client_agent = core.TORRENT_CLIENT_AGENT
logger.info("#########################################################")
logger.info("## ..::[{0}]::.. ##".format(os.path.basename(__file__)))
logger.info("#########################################################")
logger.info('#########################################################')
logger.info('## ..::[{0}]::.. ##'.format(os.path.basename(__file__)))
logger.info('#########################################################')
# debug command line options
logger.debug("Options passed into TorrentToMedia: {0}".format(args))
logger.debug('Options passed into TorrentToMedia: {0}'.format(args))
# Post-Processing Result
result = ProcessResult(
message='',
status_code=0,
)
try:
input_directory, input_name, input_category, input_hash, input_id = core.parse_args(client_agent, args)
except Exception:
logger.error('There was a problem loading variables')
return -1
if input_directory and input_name and input_hash and input_id:
result = process_torrent(input_directory, input_name, input_category, input_hash, input_id, client_agent)
elif core.TORRENT_NO_MANUAL:
logger.warning('Invalid number of arguments received from client, and no_manual set')
else:
# Perform Manual Post-Processing
logger.warning("Invalid number of arguments received from client, Switching to manual run mode ...")
logger.warning('Invalid number of arguments received from client, Switching to manual run mode ...')
for section, subsections in core.SECTIONS.items():
for subsection in subsections:
if not core.CFG[section][subsection].isenabled():
continue
for dir_name in core.get_dirs(section, subsection, link='hard'):
logger.info('Starting manual run for {0}:{1} - Folder:{2}'.format
(section, subsection, dir_name))
logger.info("Checking database for download info for {0} ...".format
(os.path.basename(dirName)))
core.DOWNLOADINFO = core.get_downloadInfo(os.path.basename(dirName), 0)
if core.DOWNLOADINFO:
clientAgent = text_type(core.DOWNLOADINFO[0].get('client_agent', 'manual'))
inputHash = text_type(core.DOWNLOADINFO[0].get('input_hash', ''))
inputID = text_type(core.DOWNLOADINFO[0].get('input_id', ''))
logger.info("Found download info for {0}, "
"setting variables now ...".format(os.path.basename(dirName)))
logger.info('Checking database for download info for {0} ...'.format
(os.path.basename(dir_name)))
core.DOWNLOAD_INFO = core.get_download_info(os.path.basename(dir_name), 0)
if core.DOWNLOAD_INFO:
client_agent = text_type(core.DOWNLOAD_INFO[0]['client_agent']) or 'manual'
input_hash = text_type(core.DOWNLOAD_INFO[0]['input_hash']) or ''
input_id = text_type(core.DOWNLOAD_INFO[0]['input_id']) or ''
logger.info('Found download info for {0}, '
'setting variables now ...'.format(os.path.basename(dir_name)))
else:
logger.info('Unable to locate download info for {0}, '
'continuing to try and process this release ...'.format
(os.path.basename(dir_name)))
client_agent = 'manual'
input_hash = ''
input_id = ''
if client_agent.lower() not in core.TORRENT_CLIENTS:
continue
input_name = os.path.basename(dir_name)
results = process_torrent(dir_name, input_name, subsection, input_hash or None, input_id or None,
client_agent)
if results.status_code != 0:
logger.error('A problem was reported when trying to perform a manual run for {0}:{1}.'.format
(section, subsection))
result = results
if result.status_code == 0:
logger.info('The {0} script completed successfully.'.format(args[0]))
else:
logger.error("A problem was reported in the {0} script.".format(args[0]))
logger.error('A problem was reported in the {0} script.'.format(args[0]))
del core.MYAPP
return result.status_code
if __name__ == "__main__":
if __name__ == '__main__':
exit(main(sys.argv))

_config.yml Normal file

@ -0,0 +1 @@
theme: jekyll-theme-cayman


@ -12,7 +12,7 @@
git_user =
# GitHub branch for repo
git_branch =
# Enable/Disable forceful cleaning of leftover files following postprocess
force_clean = 0
# Enable/Disable logging debug messages to nzbtomedia.log
log_debug = 0
@ -22,10 +22,14 @@
log_env = 0
# Enable/Disable logging git output to debug nzbtomedia.log (helpful to track down update failures.)
log_git = 0
# Set to the directory where your ffmpeg/ffprobe executables are located
ffmpeg_path =
# Enable/Disable media file checking using ffprobe.
check_media = 1
# Required media audio language for media to be deemed valid. Leave blank to disregard media audio language check.
require_lan =
# Enable/Disable a safety check to ensure we don't process all downloads in the default_downloadDirectories by mistake.
safe_mode = 1
# Turn this on to disable additional extraction attempts for failed downloads. Default = 0 will attempt to extract and verify if media is present.
@ -34,12 +38,19 @@
[Posix]
### Process priority setting for External commands (Extractor and Transcoder) on Posix (Unix/Linux/OSX) systems.
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness = 4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness = nice,4' this will be passed as 'nice 4' (Safer).
niceness = nice,-n0
# Set the ionice scheduling class. 0 for none, 1 for real time, 2 for best-effort, 3 for idle.
ionice_class = 0
# Set the ionice scheduling class data. This defines the class data, if the class accepts an argument. For real time and best-effort, 0-7 is valid data.
ionice_classdata = 0
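To make the two accepted niceness forms above concrete, here is a hedged sketch of how such a value could be split into a command prefix; parse_niceness is an illustrative name, not nzbToMedia's real helper.
def parse_niceness(value):
    # list form: 'nice,-n0' -> ['nice', '-n0'], passed through as-is
    if ',' in value:
        return value.split(',')
    # integer form: '4' -> ['nice', '-n4']
    return ['nice', '-n{0}'.format(int(value))]

assert parse_niceness('nice,-n0') == ['nice', '-n0']
assert parse_niceness('4') == ['nice', '-n4']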
[Windows]
### Set specific settings for Windows systems
# Set this to 1 to allow extraction (7zip) windows to be launched visible (for debugging) otherwise 0 to have this run in background.
show_extraction = 0
[CouchPotato]
#### autoProcessing for Movies
#### movie - category that gets called for post-processing with CPS
@ -59,6 +70,8 @@
method = renamer
delete_failed = 0
wait_for = 2
# Set this to suppress error if no status change after rename called
no_status_check = 0
extract = 1
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
@ -102,6 +115,36 @@
##### Set to define import behavior Move or Copy
importMode = Copy
[Watcher3]
#### autoProcessing for Movies
#### movie - category that gets called for post-processing with CPS
[[movie]]
enabled = 0
apikey =
host = localhost
port = 9090
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
ssl = 0
web_root =
# api key for www.omdbapi.com (used as alternative to imdb)
omdbapikey =
# Enable/Disable linking for Torrents
Torrent_NoLink = 0
keep_archive = 1
delete_failed = 0
wait_for = 0
extract = 1
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
# Enable/Disable deleting ignored files (samples and invalid media files)
delete_ignored = 0
##### Enable if Watcher3 is on a remote server for this category
remote_path = 0
##### Set to path where download client places completed downloads locally for this category
watch_dir =
##### Set the recursive directory permissions to the following (0 to disable)
chmodDirectory = 0
[SickBeard]
#### autoProcessing for TV Series
#### tv - category that gets called for post-processing with SB
@ -123,6 +166,52 @@
process_method =
# force processing of already processed content when running a manual scan.
force = 0
# In addition to force, handle the download as a priority download.
# The processed files will always replace existing qualities, also if this is a lower quality.
is_priority = 0
# tell SickRage/Medusa to delete all source files after processing.
delete_on = 0
# tell Medusa to ignore the check for associated subtitles when postponing a release
ignore_subs = 0
extract = 1
nzbExtractionBy = Downloader
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
# Enable/Disable deleting ignored files (samples and invalid media files)
delete_ignored = 0
##### Enable if SickBeard is on a remote server for this category
remote_path = 0
##### Set to path where download client places completed downloads locally for this category
watch_dir =
##### Set the recursive directory permissions to the following (0 to disable)
chmodDirectory = 0
##### pyMedusa (fork=medusa-apiv2) uses async postprocessing. Wait a maximum of x minutes for a pp result
wait_for = 10
[SiCKRAGE]
#### autoProcessing for TV Series
#### tv - category that gets called for post-processing with SR
[[tv]]
enabled = 0
host = localhost
port = 8081
apikey =
# api version 1 uses api keys
# api version 2 uses SSO user/pass
api_version = 2
# SSO login requires API v2 to be set
sso_username =
sso_password =
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
web_root =
ssl = 0
delete_failed = 0
# Enable/Disable linking for Torrents
Torrent_NoLink = 0
keep_archive = 1
process_method =
# force processing of already processed content when running a manual scan.
force = 0
# tell SickRage/Medusa to delete all source files after processing.
delete_on = 0
# tell Medusa to ignore the check for associated subtitles when postponing a release
@ -257,7 +346,7 @@
apikey =
host = localhost
port = 8085
######
library = Set to path where you want the processed games to be moved to.
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
ssl = 0
@ -275,10 +364,35 @@
##### Set to path where download client places completed downloads locally for this category
watch_dir =
[LazyLibrarian]
#### autoProcessing for LazyLibrarian
#### books - category that gets called for post-processing with LazyLibrarian
[[books]]
enabled = 0
apikey =
host = localhost
port = 5299
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
ssl = 0
web_root =
# Enable/Disable linking for Torrents
Torrent_NoLink = 0
keep_archive = 1
extract = 1
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
# Enable/Disable deleting ignored files (samples and invalid media files)
delete_ignored = 0
##### Enable if LazyLibrarian is on a remote server for this category
remote_path = 0
##### Set to path where download client places completed downloads locally for this category
watch_dir =
[Network]
# Enter Mount points as LocalPath,RemotePath and separate each pair with '|'
# e.g. MountPoints = /volume1/Public/,E:\|/volume2/share/,\\NAS\
mount_points =
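As a sketch of the pair syntax above (parse_mount_points is an illustrative name, not nzbToMedia's actual function):
def parse_mount_points(raw):
    # 'LocalPath,RemotePath|LocalPath,RemotePath' -> {local: remote}
    pairs = {}
    for item in raw.split('|'):
        if item.strip():
            local, remote = item.split(',', 1)
            pairs[local.strip()] = remote.strip()
    return pairs

# Mirrors the example above: maps '/volume1/Public/' to 'E:\' and '/volume2/share/' to '\\NAS\'
print(parse_mount_points('/volume1/Public/,E:\\|/volume2/share/,\\\\NAS\\'))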
[Nzb]
###### clientAgent - Supported clients: sabnzbd, nzbget
@ -289,15 +403,17 @@
sabnzbd_apikey =
###### Enter the default path to your default download directory (non-category downloads). this directory is protected by safe_mode.
default_downloadDirectory =
# enable this option to prevent nzbToMedia from running in manual mode and scanning an entire directory.
no_manual = 0
[Torrent]
###### clientAgent - Supported clients: utorrent, transmission, deluge, rtorrent, vuze, qbittorrent, synods, other
clientAgent = other
###### useLink - Set to hard for physical links, sym for symbolic links, move to move, move-sym to move and link back, and no to not use links (copy)
useLink = hard
###### outputDirectory - Default output directory (categories will be appended as sub directory to outputDirectory)
outputDirectory = /abs/path/to/complete/
###### Enter the default path to your default download directory (non-category downloads). this directory is protected by safe_mode.
default_downloadDirectory =
###### Other categories/labels defined for your downloader. Does not include CouchPotato, SickBeard, HeadPhones, Mylar categories.
categories = music_videos,pictures,software,manual
@ -318,15 +434,22 @@
DelugeUSR = your username
DelugePWD = your password
###### qBittorrent (You must edit this if you're using TorrentToMedia.py with qBittorrent)
qBittorrentHost = localhost
qBittorrentPort = 8080
qBittorrentUSR = your username
qBittorrentPWD = your password
###### Synology Download Station (You must edit this if you're using TorrentToMedia.py with Synology DS)
synoHost = localhost
synoPort = 5000
synoUSR = your username
synoPWD = your password
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
deleteOriginal = 0
chmodDirectory = 0
resume = 1
resumeOnFailure = 1
# enable this option to prevent TorrentToMedia from running in manual mode and scanning an entire directory.
no_manual = 0
[Extensions]
compressedExtensions = .zip,.rar,.7z,.gz,.bz,.tar,.arj,.1,.01,.001
@ -340,15 +463,15 @@
plex_host = localhost
plex_port = 32400
plex_token =
plex_ssl = 0
# Enter Plex category to section mapping as Category,section and separate each pair with '|'
# e.g. plex_sections = movie,3|tv,4
plex_sections =
[Transcoder]
# getsubs. enable to download subtitles.
getSubs = 0
# subLanguages. create a list of languages in the order you want them in your subtitles.
subLanguages = eng,spa,fra
# transcode. enable to use transcoder
transcode = 0
@ -363,7 +486,7 @@
# outputQualityPercent. used as -q:a value. 0 will disable this from being used.
outputQualityPercent = 0
# outputVideoPath. Set path you want transcoded videos moved to. Leave blank to disable.
outputVideoPath =
# processOutput. 1 will send the outputVideoPath to SickBeard/CouchPotato. 0 will send original files.
processOutput = 0
# audioLanguage. set the 3 letter language code you want as your primary audio track.
@ -382,16 +505,18 @@
externalSubDir =
# hwAccel. 1 will set ffmpeg to enable hardware acceleration (this requires a recent ffmpeg)
hwAccel = 0
# generalOptions. Enter your additional ffmpeg options (these insert before the '-i' input files) here with commas to separate each option/value (i.e replace spaces with commas).
generalOptions =
# otherOptions. Enter your additional ffmpeg options (these insert after the '-i' input files and before the output file) here with commas to separate each option/value (i.e replace spaces with commas).
otherOptions =
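Read together, the two comments above describe a simple splice around ffmpeg's input arguments. A hedged sketch (build_ffmpeg_cmd is illustrative, not the transcoder's real code):
def build_ffmpeg_cmd(general_options, other_options, infile, outfile):
    cmd = ['ffmpeg']
    cmd += [opt for opt in general_options.split(',') if opt]  # inserted before '-i'
    cmd += ['-i', infile]
    cmd += [opt for opt in other_options.split(',') if opt]  # after '-i', before the output file
    cmd.append(outfile)
    return cmd

print(build_ffmpeg_cmd('-hide_banner,-y', '-c:v,libx264', 'in.mkv', 'out.mp4'))
# ['ffmpeg', '-hide_banner', '-y', '-i', 'in.mkv', '-c:v', 'libx264', 'out.mp4']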
# outputDefault. Loads default configs for the selected device. The remaining options below are ignored.
# If you want to use your own profile, leave this blank and set the remaining options below.
# outputDefault profiles allowed: iPad, iPad-1080p, iPad-720p, Apple-TV2, iPod, iPhone, PS3, xbox, Roku-1080p, Roku-720p, Roku-480p, mkv, mkv-bluray, mp4-scene-release
outputDefault =
#### Define custom settings below.
outputVideoExtension = .mp4
outputVideoCodec = libx264
VideoCodecAllow =
outputVideoPreset = medium
outputVideoResolution = 1920:1080
outputVideoFramerate = 24
@ -399,15 +524,15 @@
outputVideoCRF = 19
outputVideoLevel = 3.1
outputAudioCodec = ac3
AudioCodecAllow =
outputAudioChannels = 6
outputAudioBitrate = 640k
outputAudioTrack2Codec = libfaac
AudioCodec2Allow =
outputAudioTrack2Channels = 2
outputAudioTrack2Bitrate = 128000
outputAudioOtherCodec = libmp3lame
AudioOtherCodecAllow =
outputAudioOtherChannels =
outputAudioOtherBitrate = 128000
outputSubtitleCodec =
@ -464,4 +589,4 @@
# enter a list (comma separated) of Group Tags you want removed from filenames to help with subtitle matching.
# e.g. remove_group = [rarbag],-NZBgeek
# be careful if your "group" is a common "real" word. Please report if you have any group replacements that would fall in this category.
remove_group =

azure-pipelines.yml Normal file

@ -0,0 +1,74 @@
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- master
jobs:
- job: 'Test'
pool:
vmImage: 'Ubuntu-latest'
strategy:
matrix:
Python39:
python.version: '3.9'
Python310:
python.version: '3.10'
Python311:
python.version: '3.11'
Python312:
python.version: '3.12'
Python313:
python.version: '3.13'
maxParallel: 3
steps:
- script: |
sudo apt-get update
sudo apt-get install ffmpeg
displayName: 'Install ffmpeg'
- task: UsePythonVersion@0
inputs:
versionSpec: '$(python.version)'
architecture: 'x64'
- script: python -m pip install --upgrade pip
displayName: 'Install dependencies'
- script: |
pip install pytest
pytest tests --doctest-modules --junitxml=junit/test-results.xml
displayName: 'pytest'
- script: |
rm -rf .git
python cleanup.py
python TorrentToMedia.py
python nzbToMedia.py
displayName: 'Test source install cleanup'
- task: PublishTestResults@2
inputs:
testResultsFiles: '**/test-results.xml'
testRunTitle: 'Python $(python.version)'
condition: succeededOrFailed()
- job: 'Publish'
dependsOn: 'Test'
pool:
vmImage: 'Ubuntu-latest'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.x'
architecture: 'x64'
- script: |
python -m pip install setuptools
python setup.py sdist
displayName: 'Build sdist'


@ -1,676 +0,0 @@
Change_LOG / History
V11.7 12/25/2018
Merry Christmas and Happy Holidays!
Add cleanup script to clean up bytecode
Add automatic cleanup on update
NOTE: Cleanup will force-run every time during a transitional period to minimize issues with upcoming refactoring
V11.06 11/03/2018
updates to incorporate importMode for NzbDrone/Sonarr and Radarr.
Correct typo(s) for "Lidarr" category.
only pass id to CP if release id found.
fix issue with no release id and no imdbid.
Fixed NZBGet save of Lidarr config.
improve logging for imdb id lookup.
fix minor description error.
add better logging of movie name when added to CP.
attempt to clean up Lidarr api commands.
update to use Mylar api.
set Torrent move-sym option to force SickRage process_method.
add rmDir import for HeadPhones processing.
change sickrage and sickchill names and modify api process to work with multiple sick* forks.
add NZBGet WebUI set of delete failed for HP.
fix qbittorrent to delete permanently (remove files on delete).
V11.05 27/06/2018
Add qBittorrent support.
Add SickGear support.
Add SiCKRAGE api support.
Fix for single file download.
Disable media check for failed HeadPhones downloads.
Added Lidarr flow. Still awaiting confirmation of api interface commands and return.
V11.04 30/12/2017
do not embed .sub.
add proper check of sub streams #1150 and filter out commentary.
traverse audiostreams in reverse.
add catch for OMDB api errors.
convert all listdir functions to unicode.
perform extraction, corruption checks, and transcoding when no server.
fix list indices errors when no fork set.
fix CP server responding test. Add trailing /.
use basestring to match unicode path in transcoder.
attempt autofork even if no username set.
allow long paths in Cleandir.
add Radarr handling.
minor fix for transcoder.
fix non-iterable type.
fix logging error.
DownloadedMovieScan updated to DownloadedMoviesScan.
add check to exception rename to not overwrite existing.
don't try and process when no api/user.
Added omdbapikey functionality
force sonarr processing to "move".
already extracted archive not skipped.
fix text for keep_archive.
try to avoid spaces in outputdir.
change subtitle logging level.
Increase shutil copy buffer length from 4KB to 512KB.
improve user script media extension handling
add par2 rename/repair (linux only).
V11.03 15/01/2017
Add -o to output path for 7zip.
Try album directory then parent directory for HeadPhones variants.
Prevent duplication of audio tracks in Transcoder.
Update uTorrent Client interface.
Updated to use force_next for SickRage to prevent postprocessing in queue.
V11.02 30/11/2016
Added default "MKV-SD"
Added VideoResolution in nzbGet.
Fix Headphones directory parsing.
Remove proc_type when failed.
Added option "no_extract_failed"
Updated beautifulsoup 4 module.
Check for existence of codec_type key when counting streams.
Added default fallback for sabnzbd port = 8080.
V11.01 30/10/2016
Updated external modules and changed config to dict.
Started making code python 3 compatible.
Fixed auto-fork detection for new Sick* branches.
Fixed invalid indexing scope for TorrentToMedia.
Add Medusa fork and new param "ignore_subs".
Added check for language tag size, convert 3 letter language codes.
Fixed guessit call to allow guessit to work off the full file path.
Add the ability to set octal permissions on the processed files prior to handing it off to Sickrage/Couchpotato.
Catch errors if not audio codec name.
Allow manual scans to continue.
Revert to 7zip if others missing.
Fixed int conversion base 8 from string or int.
Added more logging to server tests.
Added MKV-SD Profile.
Check for preferred codec even if not preferred language.
Don't convert VobSub to mov_text.
V10.15 29/05/2016
Don't copy archives when set to extract.
Specifically check for failed download handing regardless of fork.
sort Media file results by pathlength.
Synchronize changed SickRage directory param.
Don't remove release group information from base folder.
Don't add imdb id to file name when move-sym in use.
Fix string and integer concat error.
V10.14 13/03/2016
Add option move-sym to create symlink to renamed files.
Transmission comment fix.
Prevent int errors in chmod.
Fix urllib warnings.
Create unique directory in output in case of rename error in sick/couch.
Add -strict -2 to dts codec.
Added support to handle archives in SickRage.
Report Downloader failures to SickRage.
Continue on encoding detection failure.
Strip trailing and leading whitespaces from `mount_points`.
Also check sabnzbd history for nzoid.
Add generic run mode (manually enter parameters for execution).
V10.13 11/12/2015
Always add -strict -2 to aac codec.
Add "delete_on" for SickRage.
Add https handling for SABnzbd.
Added the ability to chmod Torrent directory before processing.
Add option to not resume failed torrent.
Add Option to not resume successful torrent.
Add process name to final SABnzbd message.
Fix SSL warnings for comic processing.
Add .ts to mediaExtensions.
Don't update plex on failed.
Add option to preserve archive files after extraction.
Force_Clean doesn't over-ride delete_failed.
Added support for SickRageTV and SickRage branches.
V10.12 21/09/2015
Updated Requests Module to Latest Version. Works with Python 2.7.10
Add .img files to transcoder extraction routines.
V10.11 28/05/2015
Use socket to verify if running on Linux. Prevents issues with stale pid.
Add timeouts and improve single instance handling.
Prevent Scale Up.
Improve regex for rename script.
Improve safe rename functionality.
Ignore .bts extensions.
Don't process output when no transcoding needed.
Ignore Thumbs.db on manual run.
Rename nzbtomedia to core. To prevent errors on non-case sensitive file systems.
Mark as bad if no media files found.
Increase server responding timeout.
Don't use last modified entry for CP renamer when no imdb id found.
Add plex library update.
V10.10 29/01/2015
Fix error when extracting on windows. (added import of subprocess)
Fix subtitles download and embedding.
V10.9 19/01/2015
Prevent Errors when trying next release from CouchPotato (CouchPotato failed handling)
Prevent check for status change when using Manage scan (CouchPotato)
Better Tooltip for "host" in NZBGet settings.
Continue if failed to connect to Torrent Client.
Fixed resolution settings in Transcoder.
Make Windows Linking and extraction invisible.
V10.8 15/12/2014
Impacts All
Removed "stand alone" scripts DeleteSamples and ResetDateTimes. These are now in https://github.com/clinton-hall/GetScripts
Removed chp.exe and replaced with vb script.
Improved Sonarr(NZBDrone) CDH support.
Use folder Permissions to set permissions for sub directories and files following extract.
Added support for new SickRage Login.
Impacts NZBs
Get NZOID from SABnzbd for better release matching.
Impacts Torrents
Now gets Label from Deluge.
Changed SSL version for updated Deluge (0.3.11+)
Impacts Transcoding
Fixed reported bugs.
Fix Audio mapping.
Fix Subtitle mapping from external files.
Fixed scaling errors.
V10.7 06/10/2014
Impacts All
Add Transcoding of iso/images and VIDEO_TS structures.
Improved multiple session handling.
Improve NZBDrone handling (including Torrent Branch).
Multiple bug-fixes.
Impacts NZBs
Add custom "group" replacements to allow better subtitle searching.
Impacts Torrents
Add Vuze Torrent Client support.
V10.6 26/08/2014
Impacts All
Bug Fixes.
Impacts NZBs
Added FailureLink style feedback to dognzb for failed and corrupt downloads.
V10.5 05/08/2014
Impacts All
Bug Fixes for Transcoder.
Support for lib-av as well as ffmpeg.
Fixed SickBeard auto-fork detection.
V10.4 30/07/2014
Impacts All
Suppress printed messages from extractor.
Allow no sub languages to be specified.
Ignore hdmv_pgs_subtitle codecs in transcoder.
Fix remote directory use with HeadPhones.
Only use nice and ionice when available.
Impacts NZBs
Cleaner exit logging for SABnzbd.
Impacts Torrents
Improved manual run handling.
V10.3 15/07/2014
Impacts All
Fix auto-fork to identify default fork.
V10.2 15/07/2014
Impacts All
Bug Fixes.
If extracting files and extraction not successful, return Failure and Don't delete archives.
V10.1 11/07/2014
Impacts All
Improved Transcoder
Minor Bug Fixes
Now accepts Number of Audio Channels for Transcoder options.
Userscript can perform video corruption check first.
Improved extraction. Extract all subdirs and multiple "unique" archives in a directory.
Check if already running and wait for complete before continuing.
Impacts NZBs
Allow UserScript for NZBs
Impacts Torrents
Do Extraction Before Flatten
V10.0 03/07/2014
Impacts All
Changed to python2 (some systems now come with python = python3 as default).
Major changes to Transcoder. Only copy streams where possible.
Pre-defined Transcode options for some devices.
Added log_env option to capture environment variables.
Improved remote directory handling.
Various fixes.
V9.3 09/06/2014
Impacts Torrents
Allow Headphones to remove torrents and data after processing.
Delete torrent if uselink = move
Added forceClean for outputDir. Works if file permissions prevent CP/SB from moving files.
Ignore .x264 from archive "part" checks.
Changed handling of TPB/Pistachitos SB forks. Default is to link/extract here. Disabled by Torrent_NoLink = 1.
Changed handling for HeadPhones Now that HeadPhones allows process directory to be defined.
Restructured flow and streamlined process
Impacts NZBs
Fix setting of Mylar config from NZBGet.
Created shell scripts for nzbTo{App}. All now call the common nzbToMedia.py
Impacts All
Changes to Couchpotato API for [nosql] added. Keeps aligned with current CouchPotato develop branch.
Add Auto Detection of SickBeard Fork. Thanks @echel0n
Added config class, re-coded migratecfg, misc bugfixes and code cleanup.
Added dynamic timeout based on directory size.
Added process_Method for SickBeard.
Changed configuration migrate process.
Major structure and process re-format.
Improved Manual Call Handling
Now prints github version into log when available.
Changed log location and format.
Added autoUpdate option via git.
All calls now use requests, not urllib.
All details now saved into Database. Can be used for more features later ;)
Improved status checking to ensure we only cleanup when successfully processed.
Huge Thanks @echel0n
V9.2 05/03/2014
Impacts All
Change default "wait_for" to 5 mins. CouchPotato can take more than 2 minutes to return on renamer.scan request.
Added SickBeard "wait_for" to bw customizable to prevent unwanted timeouts.
Fixed ascii conversion of directory name.
Added list of common sample ids and a way to set deletion of All media files less than the sample file size limit.
Added urlquote to dirName for CouchPotato (allows special characters in directory name)
Impacts NZBs
Fix Error with manual run of nzbToMedia
Make sure SickBeard receives the individual download dir.
Added option to set SickBeard extraction as either Downloader or Destination (SickBeard).
Fixed Health Check handling for NZBGet.
Impacts Torrents
Added option to run userscript once only (on directory).
Added Option to not flatten specific categories.
Added rtorrent integration.
Fixes for HeadPhones use (no flatten), no move/sym, and fix move back to original.
V9.1 24/01/2014
Impacts All
Don't wait to verify status change in CouchPotato when no initial status (manual run)
Now use "wait_for" timing as socket timeout on the renamer.scan. It appears to now be delayed in confirming success.
V9.0 19/01/2014
Impacts NZBs
SABnzbd 0.7.17+ now uses 8 arguments, not 7. These scripts now support the extra argument.
Impacts Torrents
Always pause before processing.
Moved delete to end of routine, only when successful process occurs.
Don't flatten hp category (in case multi cd album)
Added UserScript to be called for un-categorized downloads and other defined categories.
Added Torrent Hash to Deluge to assist with movie ID.
Added passwords option to attempt extraction of passworded archives.
Impacts All
Added default socket timeout to prevent script hanging when the destination servers don't respond to http requests.
Made processing Category Centric as an option for people running multiple versions of SickBeard and CouchPotato etc.
Added TPB version of SickBeard processing. This now uses a fork pass-in instead of failed_fork.
Added new option to convert files, directories, and parameters to ASCII. To be used if you regularly download "foreign" titles and have problems with CP/SB.
Now only parse results from CouchPotato 50 at a time to prevent error with large wanted list.
V8.5 05/10/2013
Impacts Torrents
Added Transmission RPC client.
Now pauses and resumes or removes from transmission.
Added debugging of input arguments from torrent clients.
Impacts NZBs
Removed obsolete NZBget (pre V11) code.
Impacts All.
Fixed HeadPhones processing.
Fixed movie parsing in CPS api.
V8.4 14/09/2013
Impacts Torrents
Don't include 720p or 1080p as parts for extracting.
Extracts all sub-folders.
Added option to Move files.
Fix for single file torrents linked to subfolder of same name.
Impacts All
Added option for SickBeard delay (for forks that use 1 minute check).
Updated to new api call in CouchPotato (movie.searcher.try_next)
V8.3 11/07/2013
Impacts All
Allow use of experimental AAC codec in transcoder.
Remove username and password when api key is used.
Add .m4v as media
Added ResetDateTime.py
Manual Opion for Mylar script.
Fixes for Gamez script.
Impacts NZBs
Added option to remove folder path when CouchPotato is on a different system to the downloader.
NZBGet v11.0 stable now current.
V8.2 26/05/2013
Impacts All
Add option to set the "wait_for" period. This is how long the script waits to see if the movie changes status in CouchPotato.
minSampleSize now moved to [extensions] section and available for nzbs and torrents.
New option in transcoder to use "niceness" on Linux.
Remove excess logging from transcoder.
Impacts NZBs
Added Flatten of input directory and test for media files (including sample deletion) in autoProcessTV
Impacts Torrents
Fixed Delete_Original option
Fix typo which caused crash if not sickbeard or couchpotato.
V8.1 04/05/2013
Impacts All
Improved exception logging for error conditions
Impacts Torrents
Fixed an import error when extracting
Impacts NZBs
Fixed passthrough of inputName from NZBGet to pass the .nzb extension (required for SickBeard's failed fork)
V8.0 28/04/2013
Impacts All
Added download_id pass through for CouchPotato release matching
Uses single directory scanning for CouchPotato renamer
Matches imdb_id, download_id, clientAgent with CPS database
Impacts NZB
Added direct configuration support via nzbget webUI (nzbget v11+)
All nzb scripts are now directly callable in nzbget v11
Settings made in nzbget webUI will be applied to the autoProcessMedia.cfg when the scripts are run from nzbget.
Fixed TLS support for NZBGet email notifications (for V10 support)
V7.1 28/03/2013
Impacts Torrents
Added test for chp.exe. If not found, calls 7zip directly
Added test for multi-part archives. Will only extract part1
Impacts NZB
Fixed failed download handling from nzbget (won't delete or move root!!!)
Fixed sendEmail for nzbget to use html with <br> line breaks
V7.0 21/03/2013
Impacts Torrents
Added option to delete torrent and original files after processing (utorrent)
Impacts NZB
Added nzbget windows script (to be compiled)
Changed nzbget folders to previous X.X, current-stable, testing X.X format
Fix nzbget change directory failure problem
Improved nzbget logging
Add logging to nzbget email notification
Synchronised v10 to latest nzbget testing scripts
Added failed download folder for failed downloads in nzbget
Added option to delete failed in nzbget
Created a single nzbToMedia.py script for all categories (will be the only nzb script compiled for windows)
Impacts All
Added rotating log file handler
Added ffmpeg transcoder
Added CouchPotato status check to provide confirmation of renamer complete
CouchPotato status check will timeout after 2 minutes in case something goes wrong
Improved logging.
Improved scene exception handling.
Major changes to code layout
Better efficiency
Added support for Mylar, Gamez, and HeadPhones
Moved many of the "support" files to the autoProcess directory so that they aren't visible (looks neater)
Added migration tool to update .cfg file on first run following update.
V6.0 03/03/2013
Impacts Torrents
Bundled 7zip binaries and created extraction functions.
Now pauses uTorrent seeding before calling renamer in SickBeard/CouchPotatoServer
uTorrent Resumes seeding after files (hardlinks) have been renamed
Impacts NZB
Added local file logging.
Impacts All
Added scene exception handling. Currently for "QoQ"
Improved code layout.
V5.1 22/02/2013
Improved category search to loop through directory structure.
Added support for deluge and potentially other Torrent clients.
uTorrent now must pass "utorrent" before "%D" "%N" "%L"
added test for date modified (less than 5 mins ago) if root directory and no torrent name.
".cp(ttxxxxxx)" tag preserved in directory name for CPS renaming.
All changes affect Torrent handling. Should not impact NZB handling.
V5.0 20/02/2013
Fixed Extraction and Hard-Linking support in TorrentToMedia
Added new config options for movie file extensions, metadata extensions, compressed file extensions.
Added braid to sync linktastic.
Windows Builds now run without console displaying.
All changes affect Torrent handling. Should not impact NZB handling.
V4.3 17/02/2013
Added Logger in TorrentToMedia.py
Added nzbget V10.0 script.
Delete sample files in nzbget postprocessing
Single Version for all files.
V4.2 12/02/2013
Fixes to TorrentToMedia
V4.1 02/02/2013
Added Torrent Support (µTorrent and Transmission).
Added manual run option for nzbToSickBeard.
Changed nzbGet script to use move not copy and remove.
Merged all .cfg scripts into one (autoProcessMedia.cfg).
Made all scripts executable (755) on github.
Added category limits for email support in nzbget.
Fixed issue with replacements (of paths) in email messages in nzbget.
V4.0 21/12/2012
Changed name from nzbToCouchPotato to nzbToMedia; Now supports multiple post-processing from two nzb download clients.
Added email support for nzbget.
Version printing now for each of the nzbTo* scripts.
Added "custom" post-process support in nzbget.
Added post-process script output logging in nzbget.
V3.2 11/12/2012
Added failed handling from NZBGet. Thanks to schumi2004.
Also added support for the "failed download" development branch of SickBeard from https://github.com/Tolstyak/Sick-Beard.git
V3.1 02/12/2012
Added conversion to ensure the status passed to the autoProcessTV and autoProcessMovie is always handled as an integer.
V3.0 30/11/2012
Changed name from sabToCouchPotato to nzbToCouchPotato as this now included NZBGet support.
Packaged the NZBGet postprocess files as well as modified version of nzbToSickBeard (from sabToSickBeard).
V2.2 05/10/2012
Re-wrote the failed download handling to just search for the imdb ttXXXX identifier (as received from the nzb name)
Now issues only two api calls. movie.list and searcher.try_next
Should be more robust with regard to changes to CPS and also utilises less resources (i.e. fewer api calls and less processing).
V2.1 04/10/2012
detected a change in the movie release info format. Fixed the script to work with new format.
V2.0 04/10/2012
Fixed an issue with the failed download handling in that the status id for "snatched" can be different on each installation. now performs a status.list via api to verify the status.
Also including a version print (currently 2.0... yeah original I know) so you know if you are current.
removed the multiple versions. The former _recue version will perform the standard renamer only if "postprocess only verified downloads" (default) is enabled in SABnzbd. Also, the "unix" version works fine in Windows, only the "dos" version gave issue in Linux. In other words, this one version should work for all systems.
For historical reasons, the former download stats apply to the old versions:
sabToCouchPotato-dos - downloaded 143 times
sabToCouchPotato-unix - downloaded 205 times
sabToCouchPotato_recue - downloaded 105 times
Also updated the Windows Build to include the same changes. I have removed the link to the linux build as this didn't work on all systems and it really shouldn't be necessary. Let me know if you need this updated.
V1.9 18/09/2012
compiled (build) versions of sabToSickBeard and sabToCouchPotato added for both Linux and Windows. links at top of post.
V1.9 16/09/2012
Added a compiled .exe version for windows. Should prevent the "python not recognised" issue and allow this to be used in conjunction with the windows build on systems that do not have python installed.
This is the full (_recue) version; if sabnzbd is set to post process only verified jobs, this will not recue and will function as a standard renamer.
V1.9 27/08/2012
Following the latest CPS update on the master branch, this script is not really needed as CPS actually polls the SABnzbd api and does the same as this script (internally).
However, if you have any issues with CPS constantly downloading the same movies, or filling the log with polling SABnzbd for completed movies, or otherwise prefer to use this method, then you can still use this script and make the following changes in CPS:
Settings, renamer, run every (advanced) = set to 1440 (or some longer interval)
Settings, renamer, next On_failed = off
Settings, downloaders, SABnzbd, Delete failed = off.
V1.9 06/08/2012
Also added the integer handling of status in the sabToSickBeard.py script to prevent SickBeard trying to postprocess a failed TV download. Only impacts the _recue version
V1.8 05/08/2012
Modified the _recue version as SABnzbd 0.7.3 now appears to pass the "status" variable as a string not an integer!!! (or i had it wrong on first attempt :~)
This causes the old script to identify completed downloads as failed and recues the next download!
The fix here should work with any conceivable subsequent updates in that I now make the sys.argv[7] an integer before passing it. if the variable already is an integer, this shouldn't cause any issues.
status = int(sys.argv[7])
autoProcessMovie.process(sys.argv[1], sys.argv[2], status)
V1.7 02/08/2012
Added a new version sabToCouchPotato_recue
This works the same as the other versions, but includes support for recuing failed downloads.
This is new, and only tested once (with success ) at my end.
To get this to run you will need to uncheck the "post-process only verified jobs" option in SABnzbd. Also, to avoid issues with SickBeard postprocessing, I have included a modified postprocessing for SickBeard that just checks for failed status and then exits (the SickBeard Team are currently working on failed download handling and I will hopefully make this script work with that in the future)
This re-cue works as follows:
Performs an api call to CPS to get a list of all wanted movies (with all data including the releases and status etc)
It finds the nzbname (from SABnzbd) in the json list returned from the api call (movie.list) and identifies the movie id and release id.
It performs an api call to make the release as "ignore" and then performs another api call to refresh the movie.
If another (next best) release that meets your criteria is already available it will send that to SABnzbd, otherwise it will wait until a new release becomes available.
I have left the old versions here for now for those who don't want to try this. Also, if you don't uncheck the "post-process only verified jobs" in SABnzbd this code will perform the same as the previous versions.
The next issue to tackle (if this works) is automating the deletion of failed download files in SABnzbd.... but I figured this was a start.
V1.6 22/07/2012
no functionality change, but providing scripts in both unix and dos format to prevent exit(127) errors.
if you are using windows, use the dos format. if you are using linux, use the unix format and unzip the files in linux.
V1.5 17/07/2012
add back the web_root parameter to set the URL base.
V1.4 17/07/2012
Have uploaded the latest version.
changes
Removed support for a movie.downloaded api call that was only used in a separate branch and is not expected to be merged.
Modified the passthrough to allow a manual call to this script (i.e. does not need to be called from SABnzbd).
Have added a helpfile that explains the setup options in a bit more detail.
Modified the .cfg.sample file to use 60 as a default delay and now specify that 60 should be your minimum to ensure the renamer.scan finds newly extracted movies.
V1.3 and earlier were not fully tracked, as the script itself (not files) was posted on the QNAP forums.


@ -1,7 +1,79 @@
#!/usr/bin/env python
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import subprocess
import sys
import shutil
sys.dont_write_bytecode = True
FOLDER_STRUCTURE = {
'libs': [
'common',
'custom',
'py2',
'win',
],
'core': [
'auto_process',
'extractor',
'plugins',
'processor',
'utils',
],
}
class WorkingDirectory(object):
"""Context manager for changing current working directory."""
def __init__(self, new, original=None):
self.working_directory = new
self.original_directory = os.getcwd() if original is None else original
def __enter__(self):
os.chdir(self.working_directory)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
try:
os.chdir(self.original_directory)
except OSError as error:
print(
'Unable to return to {original_directory}: {error}\n'
'Continuing in {working_directory}'.format(
original_directory=self.original_directory,
error=error,
working_directory=self.working_directory,
),
)
def module_path(module=__file__, parent=False):
"""
Detect path for a module.
:param module: The module whose path is being detected. Defaults to current module.
:param parent: True to return the parent folder of the current module.
:return: The absolute normalized path to the module or its parent.
"""
try:
path = module.__file__
except AttributeError:
path = module
directory = os.path.dirname(path)
if parent:
directory = os.path.join(directory, os.pardir)
absolute = os.path.abspath(directory)
normalized = os.path.normpath(absolute)
return normalized
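Taken together, the two helpers above are enough to run a block of work from the script's own folder and then return; a minimal usage sketch:
# chdir into this module's folder, restore the caller's directory on exit
with WorkingDirectory(module_path()) as cwd:
    print('Working in:', cwd.working_directory)
# back in the original directory here (unless the chdir back failed and was reported)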
def git_clean(remove_directories=False, force=False, dry_run=False, interactive=False, quiet=False, exclude=None,
@ -44,6 +116,7 @@ def clean_bytecode():
result = git_clean(
remove_directories=True,
force=True,
ignore_rules=True,
exclude=[
'*.*', # exclude everything
'!*.py[co]', # except bytecode
@ -76,24 +149,70 @@ def clean_folders(*paths):
return result
def force_clean_folder(path, required):
"""
Force clean a folder and exclude any required subfolders.
:param path: Target folder to remove subfolders
:param required: Keep only the required subfolders
"""
root, dirs, files = next(os.walk(path))
required = sorted(required)
if required:
print('Skipping required subfolders', required)
remove = sorted(set(dirs).difference(required))
missing = sorted(set(required).difference(dirs))
for path in remove:
pathname = os.path.join(root, path)
print('Removing', pathname)
shutil.rmtree(pathname)
if missing:
raise Exception('Required subfolders missing:', missing)
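For example, given the FOLDER_STRUCTURE mapping at the top of the script, a call like the following would prune every subfolder of libs/ except the four bundled variants, and raise if one of them is missing:
# Illustrative call matching FOLDER_STRUCTURE['libs'] above.
force_clean_folder('libs', ['common', 'custom', 'py2', 'win'])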
def clean(paths):
"""Clean up bytecode and obsolete folders."""
def _report_error(msg):
print('WARNING: Automatic cleanup could not be executed.')
print(' If errors occur, manual cleanup may be required.')
print('REASON : {}'.format(msg))
with WorkingDirectory(module_path()) as cwd:
if cwd.working_directory != cwd.original_directory:
print('Changing to directory:', cwd.working_directory)
print('\n-- Cleaning bytecode --')
try:
result = clean_bytecode()
except SystemExit as error:
_report_error(error)
else:
print(result or 'No bytecode to clean')
if paths and os.path.exists('.git'):
print('\n-- Cleaning folders: {} --'.format(list(paths)))
try:
result = clean_folders(*paths)
except SystemExit as error:
_report_error(error)
else:
print(result or 'No folders to clean\n')
else:
print('\nDirectory is not a git repository')
try:
items = paths.items()
except AttributeError:
_report_error('Failed to clean, no subfolder structure given')
else:
for folder, subfolders in items:
print('\nForce cleaning folder:', folder)
force_clean_folder(folder, subfolders)
if cwd.working_directory != cwd.original_directory:
print('Returning to directory: ', cwd.original_directory)
print('\n-- Cleanup finished --\n')
if __name__ == '__main__':
clean(FOLDER_STRUCTURE)

File diff suppressed because it is too large


@ -1,77 +0,0 @@
# coding=utf-8
import os
import core
import requests
from core.nzbToMediaUtil import convert_to_ascii, remoteDir, server_responding
from core import logger
requests.packages.urllib3.disable_warnings()
class autoProcessComics(object):
def processEpisode(self, section, dirName, inputName=None, status=0, clientAgent='manual', inputCategory=None):
apc_version = "2.04"
comicrn_version = "1.01"
cfg = dict(core.CFG[section][inputCategory])
host = cfg["host"]
port = cfg["port"]
apikey = cfg["apikey"]
ssl = int(cfg.get("ssl", 0))
web_root = cfg.get("web_root", "")
remote_path = int(cfg.get("remote_path", 0))
protocol = "https://" if ssl else "http://"
url = "{0}{1}:{2}{3}/api".format(protocol, host, port, web_root)
if not server_responding(url):
logger.error("Server did not respond. Exiting", section)
return [1, "{0}: Failed to post-process - {1} did not respond.".format(section, section)]
inputName, dirName = convert_to_ascii(inputName, dirName)
clean_name, ext = os.path.splitext(inputName)
if len(ext) == 4: # we assume this was a standard extension.
inputName = clean_name
params = {
'cmd': 'forceProcess',
'apikey': apikey,
'nzb_folder': remoteDir(dirName) if remote_path else dirName,
}
if inputName is not None:
params['nzb_name'] = inputName
params['failed'] = int(status)
params['apc_version'] = apc_version
params['comicrn_version'] = comicrn_version
success = False
logger.debug("Opening URL: {0}".format(url), section)
try:
r = requests.post(url, params=params, stream=True, verify=False, timeout=(30, 300))
except requests.ConnectionError:
logger.error("Unable to open URL", section)
return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error("Server returned status {0}".format(r.status_code), section)
return [1, "{0}: Failed to post-process - Server returned status {1}".format(section, r.status_code)]
result = r.content
if not type(result) == list:
result = result.split('\n')
for line in result:
if line:
logger.postprocess("{0}".format(line), section)
if "Post Processing SUCCESSFUL" in line:
success = True
if success:
logger.postprocess("SUCCESS: This issue has been processed successfully", section)
return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
else:
logger.warning("The issue does not appear to have successfully processed. Please check your Logs", section)
return [1, "{0}: Failed to post-process - Returned log from {1} was not as expected.".format(section, section)]


@ -1,77 +0,0 @@
# coding=utf-8
import os
import core
import requests
import shutil
from core.nzbToMediaUtil import convert_to_ascii, server_responding
from core import logger
requests.packages.urllib3.disable_warnings()
class autoProcessGames(object):
def process(self, section, dirName, inputName=None, status=0, clientAgent='manual', inputCategory=None):
status = int(status)
cfg = dict(core.CFG[section][inputCategory])
host = cfg["host"]
port = cfg["port"]
apikey = cfg["apikey"]
library = cfg.get("library")
ssl = int(cfg.get("ssl", 0))
web_root = cfg.get("web_root", "")
protocol = "https://" if ssl else "http://"
url = "{0}{1}:{2}{3}/api".format(protocol, host, port, web_root)
if not server_responding(url):
logger.error("Server did not respond. Exiting", section)
return [1, "{0}: Failed to post-process - {1} did not respond.".format(section, section)]
inputName, dirName = convert_to_ascii(inputName, dirName)
fields = inputName.split("-")
gamezID = fields[0].replace("[", "").replace("]", "").replace(" ", "")
downloadStatus = 'Downloaded' if status == 0 else 'Wanted'
params = {
'api_key': apikey,
'mode': 'UPDATEREQUESTEDSTATUS',
'db_id': gamezID,
'status': downloadStatus
}
logger.debug("Opening URL: {0}".format(url), section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 300))
except requests.ConnectionError:
logger.error("Unable to open URL")
return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]
result = r.json()
logger.postprocess("{0}".format(result), section)
if library:
logger.postprocess("moving files to library: {0}".format(library), section)
try:
shutil.move(dirName, os.path.join(library, inputName))
except:
logger.error("Unable to move {0} to {1}".format(dirName, os.path.join(library, inputName)), section)
return [1, "{0}: Failed to post-process - Unable to move files".format(section)]
else:
logger.error("No library specified to move files to. Please edit your configuration.", section)
return [1, "{0}: Failed to post-process - No library defined in {1}".format(section, section)]
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error("Server returned status {0}".format(r.status_code), section)
return [1, "{0}: Failed to post-process - Server returned status {1}".format(section, r.status_code)]
elif result['success']:
logger.postprocess("SUCCESS: Status for {0} has been set to {1} in Gamez".format(gamezID, downloadStatus), section)
return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
else:
logger.error("FAILED: Status for {0} has NOT been updated in Gamez".format(gamezID), section)
return [1, "{0}: Failed to post-process - Returned log from {1} was not as expected.".format(section, section)]


@ -1,464 +0,0 @@
# coding=utf-8

import os
import time
import requests
import json

import core
from core.nzbToMediaSceneExceptions import process_all_exceptions
from core.nzbToMediaUtil import convert_to_ascii, rmDir, find_imdbid, find_download, listMediaFiles, remoteDir, import_subs, server_responding, reportNzb
from core import logger
from core.transcoder import transcoder

requests.packages.urllib3.disable_warnings()


class autoProcessMovie(object):
    def get_release(self, baseURL, imdbid=None, download_id=None, release_id=None):
        results = {}
        params = {}

        # determine cmd and params to send to CouchPotato to get our results
        section = 'movies'
        cmd = "media.list"
        if release_id or imdbid:
            section = 'media'
            cmd = "media.get"
            params['id'] = release_id or imdbid

        if not (release_id or imdbid or download_id):
            logger.debug("No information available to filter CP results")
            return results

        url = "{0}{1}".format(baseURL, cmd)
        logger.debug("Opening URL: {0} with PARAMS: {1}".format(url, params))

        try:
            r = requests.get(url, params=params, verify=False, timeout=(30, 60))
        except requests.ConnectionError:
            logger.error("Unable to open URL {0}".format(url))
            return results

        try:
            result = r.json()
        except ValueError:
            # ValueError catches simplejson's JSONDecodeError and json's ValueError
            logger.error("CouchPotato returned the following non-json data")
            for line in r.iter_lines():
                logger.error("{0}".format(line))
            return results

        if not result['success']:
            if 'error' in result:
                logger.error('{0}'.format(result['error']))
            else:
                logger.error("no media found for id {0}".format(params['id']))
            return results

        # Gather release info and return it back, no need to narrow results
        if release_id:
            try:
                id = result[section]['_id']
                results[id] = result[section]
                return results
            except Exception:
                pass

        # Gather release info and proceed with trying to narrow results to one release choice
        movies = result[section]
        if not isinstance(movies, list):
            movies = [movies]
        for movie in movies:
            if movie['status'] not in ['active', 'done']:
                continue
            releases = movie['releases']
            for release in releases:
                try:
                    if release['status'] not in ['snatched', 'downloaded', 'done']:
                        continue
                    if download_id:
                        if download_id.lower() != release['download_info']['id'].lower():
                            continue

                    id = release['_id']
                    results[id] = release
                    results[id]['title'] = movie['title']
                except Exception:
                    continue

        # Narrow results by removing old releases by comparing their last_edit field
        # (iterate over list copies so entries can be removed while looping)
        if len(results) > 1:
            for id1, x1 in list(results.items()):
                for id2, x2 in list(results.items()):
                    try:
                        if x2["last_edit"] > x1["last_edit"]:
                            results.pop(id1)
                    except Exception:
                        continue

        # Search downloads on clients for a match to try and narrow our results down to 1
        if len(results) > 1:
            for id, x in list(results.items()):
                try:
                    if not find_download(str(x['download_info']['downloader']).lower(), x['download_info']['id']):
                        results.pop(id)
                except Exception:
                    continue

        return results

    def command_complete(self, url, params, headers, section):
        try:
            r = requests.get(url, params=params, headers=headers, stream=True, verify=False, timeout=(30, 60))
        except requests.ConnectionError:
            logger.error("Unable to open URL: {0}".format(url), section)
            return None
        if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error("Server returned status {0}".format(r.status_code), section)
            return None
        else:
            try:
                return r.json()['state']
            except (ValueError, KeyError):
                # ValueError catches simplejson's JSONDecodeError and json's ValueError
                logger.error("{0} did not return expected json data.".format(section), section)
                return None

    def CDH(self, url2, headers, section="MAIN"):
        try:
            r = requests.get(url2, params={}, headers=headers, stream=True, verify=False, timeout=(30, 60))
        except requests.ConnectionError:
            logger.error("Unable to open URL: {0}".format(url2), section)
            return False
        if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error("Server returned status {0}".format(r.status_code), section)
            return False
        else:
            try:
                return r.json().get("enableCompletedDownloadHandling", False)
            except ValueError:
                # ValueError catches simplejson's JSONDecodeError and json's ValueError
                return False

    def process(self, section, dirName, inputName=None, status=0, clientAgent="manual", download_id="", inputCategory=None, failureLink=None):
        cfg = dict(core.CFG[section][inputCategory])

        host = cfg["host"]
        port = cfg["port"]
        apikey = cfg["apikey"]
        if section == "CouchPotato":
            method = cfg["method"]
        else:
            method = None
        # added importMode for Radarr config
        if section == "Radarr":
            importMode = cfg.get("importMode", "Move")
        else:
            importMode = None
        delete_failed = int(cfg["delete_failed"])
        wait_for = int(cfg["wait_for"])
        ssl = int(cfg.get("ssl", 0))
        web_root = cfg.get("web_root", "")
        remote_path = int(cfg.get("remote_path", 0))
        protocol = "https://" if ssl else "http://"
        omdbapikey = cfg.get("omdbapikey", "")
        status = int(status)
        if status > 0 and core.NOEXTRACTFAILED:
            extract = 0
        else:
            extract = int(cfg.get("extract", 0))

        imdbid = find_imdbid(dirName, inputName, omdbapikey)
        if section == "CouchPotato":
            baseURL = "{0}{1}:{2}{3}/api/{4}/".format(protocol, host, port, web_root, apikey)
        if section == "Radarr":
            baseURL = "{0}{1}:{2}{3}/api/command".format(protocol, host, port, web_root)
            url2 = "{0}{1}:{2}{3}/api/config/downloadClient".format(protocol, host, port, web_root)
            headers = {'X-Api-Key': apikey}
        if not apikey:
            logger.info('No CouchPotato or Radarr apikey entered. Performing transcoder functions only')
            release = None
        elif server_responding(baseURL):
            if section == "CouchPotato":
                release = self.get_release(baseURL, imdbid, download_id)
            else:
                release = None
        else:
            logger.error("Server did not respond. Exiting", section)
            return [1, "{0}: Failed to post-process - {1} did not respond.".format(section, section)]

        # pull info from release found if available
        release_id = None
        media_id = None
        downloader = None
        release_status_old = None
        if release:
            try:
                release_id = list(release.keys())[0]  # dict views are not indexable, so take a list copy
                media_id = release[release_id]['media_id']
                download_id = release[release_id]['download_info']['id']
                downloader = release[release_id]['download_info']['downloader']
                release_status_old = release[release_id]['status']
            except Exception:
                pass

        if not os.path.isdir(dirName) and os.path.isfile(dirName):  # If the input directory is a file, assume single file download and split dir/name.
            dirName = os.path.split(os.path.normpath(dirName))[0]

        SpecificPath = os.path.join(dirName, str(inputName))
        cleanName = os.path.splitext(SpecificPath)
        if cleanName[1] == ".nzb":
            SpecificPath = cleanName[0]
        if os.path.isdir(SpecificPath):
            dirName = SpecificPath

        process_all_exceptions(inputName, dirName)
        inputName, dirName = convert_to_ascii(inputName, dirName)

        if not listMediaFiles(dirName, media=True, audio=False, meta=False, archives=False) and listMediaFiles(dirName, media=False, audio=False, meta=False, archives=True) and extract:
            logger.debug('Checking for archives to extract in directory: {0}'.format(dirName))
            core.extractFiles(dirName)
            inputName, dirName = convert_to_ascii(inputName, dirName)

        good_files = 0
        num_files = 0
        # Check video files for corruption
        for video in listMediaFiles(dirName, media=True, audio=False, meta=False, archives=False):
            num_files += 1
            if transcoder.isVideoGood(video, status):
                import_subs(video)
                good_files += 1
        if num_files and good_files == num_files:
            if status:
                logger.info("Status shown as failed from Downloader, but {0} valid video files found. Setting as success.".format(good_files), section)
                status = 0
        elif num_files and good_files < num_files:
            logger.info("Status shown as success from Downloader, but corrupt video files found. Setting as failed.", section)
            if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
                print('[NZB] MARK=BAD')
            if failureLink:
                failureLink += '&corrupt=true'
            status = 1
        elif clientAgent == "manual":
            logger.warning("No media files found in directory {0} to manually process.".format(dirName), section)
            return [0, ""]  # Success (as far as this script is concerned)
        else:
            logger.warning("No media files found in directory {0}. Processing this as a failed download".format(dirName), section)
            status = 1
            if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
                print('[NZB] MARK=BAD')

        if status == 0:
            if core.TRANSCODE == 1:
                result, newDirName = transcoder.Transcode_directory(dirName)
                if result == 0:
                    logger.debug("Transcoding succeeded for files in {0}".format(dirName), section)
                    dirName = newDirName

                    chmod_directory = int(str(cfg.get("chmodDirectory", "0")), 8)
                    logger.debug("Config setting 'chmodDirectory' currently set to {0}".format(oct(chmod_directory)), section)
                    if chmod_directory:
                        logger.info("Attempting to set the octal permission of '{0}' on directory '{1}'".format(oct(chmod_directory), dirName), section)
                        core.rchmod(dirName, chmod_directory)
                else:
                    logger.error("Transcoding failed for files in {0}".format(dirName), section)
                    return [1, "{0}: Failed to post-process - Transcoding failed".format(section)]
            for video in listMediaFiles(dirName, media=True, audio=False, meta=False, archives=False):
                if not release and ".cp(tt" not in video and imdbid:
                    videoName, videoExt = os.path.splitext(video)
                    video2 = "{0}.cp({1}){2}".format(videoName, imdbid, videoExt)
                    if not (clientAgent in [core.TORRENT_CLIENTAGENT, 'manual'] and core.USELINK == 'move-sym'):
                        logger.debug('Renaming: {0} to: {1}'.format(video, video2))
                        os.rename(video, video2)

            if not apikey:  # If only using Transcoder functions, exit here.
                logger.info('No CouchPotato or Radarr apikey entered. Processing completed.')
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]

            params = {}
            if download_id and release_id:
                params['downloader'] = downloader or clientAgent
                params['download_id'] = download_id

            params['media_folder'] = remoteDir(dirName) if remote_path else dirName

            if section == "CouchPotato":
                if method == "manage":
                    command = "manage.update"
                    params = {}
                else:
                    command = "renamer.scan"

                url = "{0}{1}".format(baseURL, command)
                logger.debug("Opening URL: {0} with PARAMS: {1}".format(url, params), section)
                logger.postprocess("Starting {0} scan for {1}".format(method, inputName), section)

            if section == "Radarr":
                payload = {'name': 'DownloadedMoviesScan', 'path': params['media_folder'], 'downloadClientId': download_id, 'importMode': importMode}
                if not download_id:
                    payload.pop("downloadClientId")
                logger.debug("Opening URL: {0} with PARAMS: {1}".format(baseURL, payload), section)
                logger.postprocess("Starting DownloadedMoviesScan scan for {0}".format(inputName), section)

            try:
                if section == "CouchPotato":
                    r = requests.get(url, params=params, verify=False, timeout=(30, 1800))
                else:
                    r = requests.post(baseURL, data=json.dumps(payload), headers=headers, stream=True, verify=False, timeout=(30, 1800))
            except requests.ConnectionError:
                logger.error("Unable to open URL", section)
                return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]

            result = r.json()
            if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
                logger.error("Server returned status {0}".format(r.status_code), section)
                return [1, "{0}: Failed to post-process - Server returned status {1}".format(section, r.status_code)]
            elif section == "CouchPotato" and result['success']:
                logger.postprocess("SUCCESS: Finished {0} scan for folder {1}".format(method, dirName), section)
                if method == "manage":
                    return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            elif section == "Radarr":
                logger.postprocess("Radarr response: {0}".format(result['state']))
                try:
                    res = json.loads(r.content)
                    scan_id = int(res['id'])
                    logger.debug("Scan started with id: {0}".format(scan_id), section)
                    Started = True
                except Exception as e:
                    logger.warning("No scan id was returned due to: {0}".format(e), section)
                    scan_id = None
            else:
                logger.error("FAILED: {0} scan was unable to finish for folder {1}. exiting!".format(method, dirName),
                             section)
                return [1, "{0}: Failed to post-process - Server did not return success".format(section)]
        else:
            core.FAILED = True
            logger.postprocess("FAILED DOWNLOAD DETECTED FOR {0}".format(inputName), section)
            if failureLink:
                reportNzb(failureLink, clientAgent)

            if section == "Radarr":
                logger.postprocess("FAILED: The download failed. Sending failed download to {0} for CDH processing".format(section), section)
                return [1, "{0}: Download Failed. Sending back to {1}".format(section, section)]  # Return as failed to flag this in the downloader.

            if delete_failed and os.path.isdir(dirName) and not os.path.dirname(dirName) == dirName:
                logger.postprocess("Deleting failed files and folder {0}".format(dirName), section)
                rmDir(dirName)

            if not release_id and not media_id:
                logger.error("Could not find a downloaded movie in the database matching {0}, exiting!".format(inputName),
                             section)
                return [1, "{0}: Failed to post-process - Failed download not found in {1}".format(section, section)]

            if release_id:
                logger.postprocess("Setting failed release {0} to ignored ...".format(inputName), section)

                url = "{url}release.ignore".format(url=baseURL)
                params = {'id': release_id}

                logger.debug("Opening URL: {0} with PARAMS: {1}".format(url, params), section)

                try:
                    r = requests.get(url, params=params, verify=False, timeout=(30, 120))
                except requests.ConnectionError:
                    logger.error("Unable to open URL {0}".format(url), section)
                    return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]

                result = r.json()
                if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
                    logger.error("Server returned status {0}".format(r.status_code), section)
                    return [1, "{0}: Failed to post-process - Server returned status {1}".format(section, r.status_code)]
                elif result['success']:
                    logger.postprocess("SUCCESS: {0} has been set to ignored ...".format(inputName), section)
                else:
                    logger.warning("FAILED: Unable to set {0} to ignored!".format(inputName), section)
                    return [1, "{0}: Failed to post-process - Unable to set {1} to ignored".format(section, inputName)]

            logger.postprocess("Trying to snatch the next highest ranked release.", section)

            url = "{0}movie.searcher.try_next".format(baseURL)
            logger.debug("Opening URL: {0}".format(url), section)

            try:
                r = requests.get(url, params={'media_id': media_id}, verify=False, timeout=(30, 600))
            except requests.ConnectionError:
                logger.error("Unable to open URL {0}".format(url), section)
                return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]

            result = r.json()
            if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
                logger.error("Server returned status {0}".format(r.status_code), section)
                return [1, "{0}: Failed to post-process - Server returned status {1}".format(section, r.status_code)]
            elif result['success']:
                logger.postprocess("SUCCESS: Snatched the next highest release ...", section)
                return [0, "{0}: Successfully snatched next highest release".format(section)]
            else:
                logger.postprocess("SUCCESS: Unable to find a new release to snatch now. CP will keep searching!", section)
                return [0, "{0}: No new release found now. {1} will keep searching".format(section, section)]

        # Added a release that was not in the wanted list so confirm rename successful by finding this movie media.list.
        if not release:
            download_id = None  # we don't want to filter new releases based on this.

        # we will now check to see if CPS has finished renaming before returning to TorrentToMedia and unpausing.
        timeout = time.time() + 60 * wait_for
        while time.time() < timeout:  # only wait 2 (default) minutes, then return.
            logger.postprocess("Checking for status change, please stand by ...", section)
            if section == "CouchPotato":
                release = self.get_release(baseURL, imdbid, download_id, release_id)
                scan_id = None
            else:
                release = None
            if release:
                try:
                    release_id = list(release.keys())[0]
                    title = release[release_id]['title']
                    release_status_new = release[release_id]['status']
                    if release_status_old is None:  # we didn't have a release before, but now we do.
                        logger.postprocess("SUCCESS: Movie {0} has now been added to CouchPotato with release status of [{1}]".format(
                            title, str(release_status_new).upper()), section)
                        return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]

                    if release_status_new != release_status_old:
                        logger.postprocess("SUCCESS: Release for {0} has now been marked with a status of [{1}]".format(
                            title, str(release_status_new).upper()), section)
                        return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
                except Exception:
                    pass
            elif scan_id:
                url = "{0}/{1}".format(baseURL, scan_id)
                command_status = self.command_complete(url, params, headers, section)
                if command_status:
                    logger.debug("The Scan command return status: {0}".format(command_status), section)
                    if command_status in ['completed']:
                        logger.debug("The Scan command has completed successfully. Renaming was successful.", section)
                        return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
                    elif command_status in ['failed']:
                        logger.debug("The Scan command has failed. Renaming was not successful.", section)
                        # return [1, "%s: Failed to post-process %s" % (section, inputName) ]

            if not os.path.isdir(dirName):
                logger.postprocess("SUCCESS: Input Directory [{0}] has been processed and removed".format(
                    dirName), section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]

            elif not listMediaFiles(dirName, media=True, audio=False, meta=False, archives=True):
                logger.postprocess("SUCCESS: Input Directory [{0}] has no remaining media files. This has been fully processed.".format(
                    dirName), section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]

            # pause and let CouchPotatoServer/Radarr catch its breath
            time.sleep(10 * wait_for)

        # The status hasn't changed. We have waited wait_for minutes, which is more than enough. uTorrent can resume seeding now.
        if section == "Radarr" and self.CDH(url2, headers, section=section):
            logger.debug("The Scan command did not return status completed, but Completed Download Handling is enabled. Passing back to {0}.".format(section), section)
            return [status, "{0}: Complete Download Handling is enabled. Passing back to {1}".format(section, section)]
        logger.warning(
            "{0} does not appear to have changed status after {1} minutes. Please check your logs.".format(inputName, wait_for),
            section)
        return [1, "{0}: Failed to post-process - No change in status".format(section)]

@@ -1,238 +0,0 @@
# coding=utf-8

import os
import time
import requests
import core
import json

from core.nzbToMediaUtil import convert_to_ascii, rmDir, remoteDir, listMediaFiles, server_responding
from core.nzbToMediaSceneExceptions import process_all_exceptions
from core import logger

requests.packages.urllib3.disable_warnings()


class autoProcessMusic(object):
    def command_complete(self, url, params, headers, section):
        try:
            r = requests.get(url, params=params, headers=headers, stream=True, verify=False, timeout=(30, 60))
        except requests.ConnectionError:
            logger.error("Unable to open URL: {0}".format(url), section)
            return None
        if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error("Server returned status {0}".format(r.status_code), section)
            return None
        else:
            try:
                return r.json()['state']
            except (ValueError, KeyError):
                # ValueError catches simplejson's JSONDecodeError and json's ValueError
                logger.error("{0} did not return expected json data.".format(section), section)
                return None

    def get_status(self, url, apikey, dirName):
        logger.debug("Attempting to get current status for release:{0}".format(os.path.basename(dirName)))

        params = {
            'apikey': apikey,
            'cmd': "getHistory"
        }

        logger.debug("Opening URL: {0} with PARAMS: {1}".format(url, params))

        try:
            r = requests.get(url, params=params, verify=False, timeout=(30, 120))
        except requests.RequestException:
            logger.error("Unable to open URL")
            return None

        try:
            result = r.json()
        except ValueError:
            # ValueError catches simplejson's JSONDecodeError and json's ValueError
            return None

        for album in result:
            if os.path.basename(dirName) == album['FolderName']:
                return album["Status"].lower()

    def forceProcess(self, params, url, apikey, inputName, dirName, section, wait_for):
        release_status = self.get_status(url, apikey, dirName)
        if not release_status:
            logger.error("Could not find a status for {0}. Is it in the wanted list?".format(inputName), section)

        logger.debug("Opening URL: {0} with PARAMS: {1}".format(url, params), section)

        try:
            r = requests.get(url, params=params, verify=False, timeout=(30, 300))
        except requests.ConnectionError:
            logger.error("Unable to open URL {0}".format(url), section)
            return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]

        logger.debug("Result: {0}".format(r.text), section)

        if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error("Server returned status {0}".format(r.status_code), section)
            return [1, "{0}: Failed to post-process - Server returned status {1}".format(section, r.status_code)]
        elif r.text == "OK":
            logger.postprocess("SUCCESS: Post-Processing started for {0} in folder {1} ...".format(inputName, dirName), section)
        else:
            logger.error("FAILED: Post-Processing has NOT started for {0} in folder {1}. exiting!".format(inputName, dirName), section)
            return [1, "{0}: Failed to post-process - Returned log from {1} was not as expected.".format(section, section)]

        # we will now wait for this album to be processed before returning to TorrentToMedia and unpausing.
        timeout = time.time() + 60 * wait_for
        while time.time() < timeout:
            current_status = self.get_status(url, apikey, dirName)
            if current_status is not None and current_status != release_status:  # Something has changed. CPS must have processed this movie.
                logger.postprocess("SUCCESS: This release is now marked as status [{0}]".format(current_status), section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            if not os.path.isdir(dirName):
                logger.postprocess("SUCCESS: The input directory {0} has been removed. Processing must have finished.".format(dirName), section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            time.sleep(10 * wait_for)

        # The status hasn't changed.
        return [2, "no change"]

    def process(self, section, dirName, inputName=None, status=0, clientAgent="manual", inputCategory=None):
        status = int(status)

        cfg = dict(core.CFG[section][inputCategory])

        host = cfg["host"]
        port = cfg["port"]
        apikey = cfg["apikey"]
        wait_for = int(cfg["wait_for"])
        ssl = int(cfg.get("ssl", 0))
        delete_failed = int(cfg["delete_failed"])
        web_root = cfg.get("web_root", "")
        remote_path = int(cfg.get("remote_path", 0))
        protocol = "https://" if ssl else "http://"
        if status > 0 and core.NOEXTRACTFAILED:
            extract = 0
        else:
            extract = int(cfg.get("extract", 0))

        if section == "Lidarr":
            url = "{0}{1}:{2}{3}/api/v1".format(protocol, host, port, web_root)
        else:
            url = "{0}{1}:{2}{3}/api".format(protocol, host, port, web_root)
        if not server_responding(url):
            logger.error("Server did not respond. Exiting", section)
            return [1, "{0}: Failed to post-process - {1} did not respond.".format(section, section)]

        if not os.path.isdir(dirName) and os.path.isfile(dirName):  # If the input directory is a file, assume single file download and split dir/name.
            dirName = os.path.split(os.path.normpath(dirName))[0]

        SpecificPath = os.path.join(dirName, str(inputName))
        cleanName = os.path.splitext(SpecificPath)
        if cleanName[1] == ".nzb":
            SpecificPath = cleanName[0]
        if os.path.isdir(SpecificPath):
            dirName = SpecificPath

        process_all_exceptions(inputName, dirName)
        inputName, dirName = convert_to_ascii(inputName, dirName)

        if not listMediaFiles(dirName, media=False, audio=True, meta=False, archives=False) and listMediaFiles(dirName, media=False, audio=False, meta=False, archives=True) and extract:
            logger.debug('Checking for archives to extract in directory: {0}'.format(dirName))
            core.extractFiles(dirName)
            inputName, dirName = convert_to_ascii(inputName, dirName)

        # if listMediaFiles(dirName, media=False, audio=True, meta=False, archives=False) and status:
        #     logger.info("Status shown as failed from Downloader, but valid video files found. Setting as successful.", section)
        #     status = 0

        if status == 0 and section == "HeadPhones":

            params = {
                'apikey': apikey,
                'cmd': "forceProcess",
                'dir': remoteDir(dirName) if remote_path else dirName
            }

            res = self.forceProcess(params, url, apikey, inputName, dirName, section, wait_for)
            if res[0] in [0, 1]:
                return res

            params = {
                'apikey': apikey,
                'cmd': "forceProcess",
                'dir': os.path.split(remoteDir(dirName))[0] if remote_path else os.path.split(dirName)[0]
            }

            res = self.forceProcess(params, url, apikey, inputName, dirName, section, wait_for)
            if res[0] in [0, 1]:
                return res

            # The status hasn't changed. uTorrent can resume seeding now.
            logger.warning("The music album does not appear to have changed status after {0} minutes. Please check your Logs".format(wait_for), section)
            return [1, "{0}: Failed to post-process - No change in wanted status".format(section)]

        elif status == 0 and section == "Lidarr":
            url = "{0}{1}:{2}{3}/api/v1/command".format(protocol, host, port, web_root)
            headers = {"X-Api-Key": apikey}
            if remote_path:
                logger.debug("remote_path: {0}".format(remoteDir(dirName)), section)
                data = {"name": "Rename", "path": remoteDir(dirName)}
            else:
                logger.debug("path: {0}".format(dirName), section)
                data = {"name": "Rename", "path": dirName}
            data = json.dumps(data)
            try:
                logger.debug("Opening URL: {0} with data: {1}".format(url, data), section)
                r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
            except requests.ConnectionError:
                logger.error("Unable to open URL: {0}".format(url), section)
                return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]

            Success = False
            Queued = False
            Started = False
            try:
                res = json.loads(r.content)
                scan_id = int(res['id'])
                logger.debug("Scan started with id: {0}".format(scan_id), section)
                Started = True
            except Exception as e:
                logger.warning("No scan id was returned due to: {0}".format(e), section)
                scan_id = None
                Started = False
                return [1, "{0}: Failed to post-process - Unable to start scan".format(section)]

            n = 0
            params = {}
            url = "{0}/{1}".format(url, scan_id)
            while n < 6:  # set up wait_for minutes to see if command completes..
                time.sleep(10 * wait_for)
                command_status = self.command_complete(url, params, headers, section)
                if command_status and command_status in ['completed', 'failed']:
                    break
                n += 1
            if command_status:
                logger.debug("The Scan command return status: {0}".format(command_status), section)
            if not os.path.exists(dirName):
                logger.debug("The directory {0} has been removed. Renaming was successful.".format(dirName), section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            elif command_status and command_status in ['completed']:
                logger.debug("The Scan command has completed successfully. Renaming was successful.", section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            elif command_status and command_status in ['failed']:
                logger.debug("The Scan command has failed. Renaming was not successful.", section)
                # return [1, "%s: Failed to post-process %s" % (section, inputName) ]
            else:
                logger.debug("The Scan command did not return status completed. Passing back to {0} to attempt complete download handling.".format(section), section)
                return [status, "{0}: Passing back to {1} to attempt Complete Download Handling".format(section, section)]

        else:
            if section == "Lidarr":
                logger.postprocess("FAILED: The download failed. Sending failed download to {0} for CDH processing".format(section), section)
                return [1, "{0}: Download Failed. Sending back to {1}".format(section, section)]  # Return as failed to flag this in the downloader.
            else:
                logger.warning("FAILED DOWNLOAD DETECTED", section)
                if delete_failed and os.path.isdir(dirName) and not os.path.dirname(dirName) == dirName:
                    logger.postprocess("Deleting failed files and folder {0}".format(dirName), section)
                    rmDir(dirName)
                return [1, "{0}: Failed to post-process. {1} does not support failed downloads".format(section, section)]  # Return as failed to flag this in the downloader.

@@ -1,373 +0,0 @@
# coding=utf-8

import copy
import os
import time
import errno
import requests
import json

import core
from core.nzbToMediaAutoFork import autoFork
from core.nzbToMediaSceneExceptions import process_all_exceptions
from core.nzbToMediaUtil import convert_to_ascii, flatten, rmDir, listMediaFiles, remoteDir, import_subs, server_responding, reportNzb
from core import logger
from core.transcoder import transcoder

requests.packages.urllib3.disable_warnings()


class autoProcessTV(object):
    def command_complete(self, url, params, headers, section):
        try:
            r = requests.get(url, params=params, headers=headers, stream=True, verify=False, timeout=(30, 60))
        except requests.ConnectionError:
            logger.error("Unable to open URL: {0}".format(url), section)
            return None
        if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error("Server returned status {0}".format(r.status_code), section)
            return None
        else:
            try:
                return r.json()['state']
            except (ValueError, KeyError):
                # ValueError catches simplejson's JSONDecodeError and json's ValueError
                logger.error("{0} did not return expected json data.".format(section), section)
                return None

    def CDH(self, url2, headers, section="MAIN"):
        try:
            r = requests.get(url2, params={}, headers=headers, stream=True, verify=False, timeout=(30, 60))
        except requests.ConnectionError:
            logger.error("Unable to open URL: {0}".format(url2), section)
            return False
        if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error("Server returned status {0}".format(r.status_code), section)
            return False
        else:
            try:
                return r.json().get("enableCompletedDownloadHandling", False)
            except ValueError:
                # ValueError catches simplejson's JSONDecodeError and json's ValueError
                return False

    def processEpisode(self, section, dirName, inputName=None, failed=False, clientAgent="manual", download_id=None, inputCategory=None, failureLink=None):
        cfg = dict(core.CFG[section][inputCategory])

        host = cfg["host"]
        port = cfg["port"]
        ssl = int(cfg.get("ssl", 0))
        web_root = cfg.get("web_root", "")
        protocol = "https://" if ssl else "http://"
        username = cfg.get("username", "")
        password = cfg.get("password", "")
        apikey = cfg.get("apikey", "")

        if server_responding("{0}{1}:{2}{3}".format(protocol, host, port, web_root)):
            # auto-detect correct fork
            fork, fork_params = autoFork(section, inputCategory)
        elif not username and not apikey:
            logger.info('No SickBeard username or Sonarr apikey entered. Performing transcoder functions only')
            fork, fork_params = "None", {}
        else:
            logger.error("Server did not respond. Exiting", section)
            return [1, "{0}: Failed to post-process - {1} did not respond.".format(section, section)]

        delete_failed = int(cfg.get("delete_failed", 0))
        nzbExtractionBy = cfg.get("nzbExtractionBy", "Downloader")
        process_method = cfg.get("process_method")
        if clientAgent == core.TORRENT_CLIENTAGENT and core.USELINK == "move-sym":
            process_method = "symlink"
        remote_path = int(cfg.get("remote_path", 0))
        wait_for = int(cfg.get("wait_for", 2))
        force = int(cfg.get("force", 0))
        delete_on = int(cfg.get("delete_on", 0))
        ignore_subs = int(cfg.get("ignore_subs", 0))
        status = int(failed)
        if status > 0 and core.NOEXTRACTFAILED:
            extract = 0
        else:
            extract = int(cfg.get("extract", 0))
        # get importMode, default to "Move" for consistency with legacy
        importMode = cfg.get("importMode", "Move")

        if not os.path.isdir(dirName) and os.path.isfile(dirName):  # If the input directory is a file, assume single file download and split dir/name.
            dirName = os.path.split(os.path.normpath(dirName))[0]

        SpecificPath = os.path.join(dirName, str(inputName))
        cleanName = os.path.splitext(SpecificPath)
        if cleanName[1] == ".nzb":
            SpecificPath = cleanName[0]
        if os.path.isdir(SpecificPath):
            dirName = SpecificPath

        # Attempt to create the directory if it doesn't exist and ignore any
        # error stating that it already exists. This fixes a bug where SickRage
        # won't process the directory because it doesn't exist.
        try:
            os.makedirs(dirName)  # Attempt to create the directory
        except OSError as e:
            # Re-raise the error if it wasn't about the directory not existing
            if e.errno != errno.EEXIST:
                raise

        if 'process_method' not in fork_params or (clientAgent in ['nzbget', 'sabnzbd'] and nzbExtractionBy != "Destination"):
            if inputName:
                process_all_exceptions(inputName, dirName)
                inputName, dirName = convert_to_ascii(inputName, dirName)

            # Now check if tv files exist in destination.
            if not listMediaFiles(dirName, media=True, audio=False, meta=False, archives=False):
                if listMediaFiles(dirName, media=False, audio=False, meta=False, archives=True) and extract:
                    logger.debug('Checking for archives to extract in directory: {0}'.format(dirName))
                    core.extractFiles(dirName)
                    inputName, dirName = convert_to_ascii(inputName, dirName)

            if listMediaFiles(dirName, media=True, audio=False, meta=False, archives=False):  # Check that a video exists. if not, assume failed.
                flatten(dirName)

                # Check video files for corruption
                good_files = 0
                num_files = 0
                for video in listMediaFiles(dirName, media=True, audio=False, meta=False, archives=False):
                    num_files += 1
                    if transcoder.isVideoGood(video, status):
                        good_files += 1
                        import_subs(video)
                if num_files > 0:
                    if good_files == num_files and not status == 0:
                        logger.info('Found Valid Videos. Setting status Success')
                        status = 0
                        failed = 0
                    if good_files < num_files and status == 0:
                        logger.info('Found corrupt videos. Setting status Failed')
                        status = 1
                        failed = 1
                        if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
                            print('[NZB] MARK=BAD')
                        if failureLink:
                            failureLink += '&corrupt=true'
            elif clientAgent == "manual":
                logger.warning("No media files found in directory {0} to manually process.".format(dirName), section)
                return [0, ""]  # Success (as far as this script is concerned)
            elif nzbExtractionBy == "Destination":
                logger.info("Check for media files ignored because nzbExtractionBy is set to Destination.")
                if int(failed) == 0:
                    logger.info("Setting Status Success.")
                    status = 0
                    failed = 0
                else:
                    logger.info("Downloader reported an error during download or verification. Processing this as a failed download.")
                    status = 1
                    failed = 1
            else:
                logger.warning("No media files found in directory {0}. Processing this as a failed download".format(dirName), section)
                status = 1
                failed = 1
                if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
                    print('[NZB] MARK=BAD')

        if status == 0 and core.TRANSCODE == 1:  # only transcode successful downloads
            result, newDirName = transcoder.Transcode_directory(dirName)
            if result == 0:
                logger.debug("SUCCESS: Transcoding succeeded for files in {0}".format(dirName), section)
                dirName = newDirName

                chmod_directory = int(str(cfg.get("chmodDirectory", "0")), 8)
                logger.debug("Config setting 'chmodDirectory' currently set to {0}".format(oct(chmod_directory)), section)
                if chmod_directory:
                    logger.info("Attempting to set the octal permission of '{0}' on directory '{1}'".format(oct(chmod_directory), dirName), section)
                    core.rchmod(dirName, chmod_directory)
            else:
                logger.error("FAILED: Transcoding failed for files in {0}".format(dirName), section)
                return [1, "{0}: Failed to post-process - Transcoding failed".format(section)]

        # configure SB params to pass
        fork_params['quiet'] = 1
        fork_params['proc_type'] = 'manual'
        if inputName is not None:
            fork_params['nzbName'] = inputName

        for param in copy.copy(fork_params):
            if param == "failed":
                fork_params[param] = failed
                del fork_params['proc_type']
                if "type" in fork_params:
                    del fork_params['type']

            if param == "return_data":
                fork_params[param] = 0
                del fork_params['quiet']

            if param == "type":
                fork_params[param] = 'manual'
                if "proc_type" in fork_params:
                    del fork_params['proc_type']

            if param in ["dirName", "dir", "proc_dir", "process_directory", "path"]:
                fork_params[param] = dirName
                if remote_path:
                    fork_params[param] = remoteDir(dirName)

            if param == "process_method":
                if process_method:
                    fork_params[param] = process_method
                else:
                    del fork_params[param]

            if param in ["force", "force_replace"]:
                if force:
                    fork_params[param] = force
                else:
                    del fork_params[param]

            if param in ["delete_on", "delete"]:
                if delete_on:
                    fork_params[param] = delete_on
                else:
                    del fork_params[param]

            if param == "ignore_subs":
                if ignore_subs:
                    fork_params[param] = ignore_subs
                else:
                    del fork_params[param]

            if param == "force_next":
                fork_params[param] = 1

        # delete any unused params so we don't pass them to SB by mistake
        # (iterate over a list copy so the dict can shrink while we loop)
        [fork_params.pop(k) for k, v in list(fork_params.items()) if v is None]

        if status == 0:
            if section == "NzbDrone" and not apikey:
                logger.info('No Sonarr apikey entered. Processing completed.')
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            logger.postprocess("SUCCESS: The download succeeded, sending a post-process request", section)
        else:
            core.FAILED = True
            if failureLink:
                reportNzb(failureLink, clientAgent)
            if 'failed' in fork_params:
                logger.postprocess("FAILED: The download failed. Sending 'failed' process request to {0} branch".format(fork), section)
            elif section == "NzbDrone":
                logger.postprocess("FAILED: The download failed. Sending failed download to {0} for CDH processing".format(fork), section)
                return [1, "{0}: Download Failed. Sending back to {1}".format(section, section)]  # Return as failed to flag this in the downloader.
            else:
                logger.postprocess("FAILED: The download failed. {0} branch does not handle failed downloads. Nothing to process".format(fork), section)
                if delete_failed and os.path.isdir(dirName) and not os.path.dirname(dirName) == dirName:
                    logger.postprocess("Deleting failed files and folder {0}".format(dirName), section)
                    rmDir(dirName)
                return [1, "{0}: Failed to post-process. {1} does not support failed downloads".format(section, section)]  # Return as failed to flag this in the downloader.

        url = None
        if section == "SickBeard":
            if apikey:
                url = "{0}{1}:{2}{3}/api/{4}/?cmd=postprocess".format(protocol, host, port, web_root, apikey)
            else:
                url = "{0}{1}:{2}{3}/home/postprocess/processEpisode".format(protocol, host, port, web_root)
        elif section == "NzbDrone":
            url = "{0}{1}:{2}{3}/api/command".format(protocol, host, port, web_root)
            url2 = "{0}{1}:{2}{3}/api/config/downloadClient".format(protocol, host, port, web_root)
            headers = {"X-Api-Key": apikey}
            # params = {'sortKey': 'series.title', 'page': 1, 'pageSize': 1, 'sortDir': 'asc'}
            if remote_path:
                logger.debug("remote_path: {0}".format(remoteDir(dirName)), section)
                data = {"name": "DownloadedEpisodesScan", "path": remoteDir(dirName), "downloadClientId": download_id, "importMode": importMode}
            else:
                logger.debug("path: {0}".format(dirName), section)
                data = {"name": "DownloadedEpisodesScan", "path": dirName, "downloadClientId": download_id, "importMode": importMode}
            if not download_id:
                data.pop("downloadClientId")
            data = json.dumps(data)

        try:
            if section == "SickBeard":
                logger.debug("Opening URL: {0} with params: {1}".format(url, fork_params), section)
                s = requests.Session()
                if not apikey and username and password:
                    login = "{0}{1}:{2}{3}/login".format(protocol, host, port, web_root)
                    login_params = {'username': username, 'password': password}
                    r = s.get(login, verify=False, timeout=(30, 60))
                    if r.status_code == 401 and r.cookies.get('_xsrf'):
                        login_params['_xsrf'] = r.cookies.get('_xsrf')
                    s.post(login, data=login_params, stream=True, verify=False, timeout=(30, 60))
                r = s.get(url, auth=(username, password), params=fork_params, stream=True, verify=False, timeout=(30, 1800))
            elif section == "NzbDrone":
                logger.debug("Opening URL: {0} with data: {1}".format(url, data), section)
                r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
        except requests.ConnectionError:
            logger.error("Unable to open URL: {0}".format(url), section)
            return [1, "{0}: Failed to post-process - Unable to connect to {1}".format(section, section)]

        if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error("Server returned status {0}".format(r.status_code), section)
            return [1, "{0}: Failed to post-process - Server returned status {1}".format(section, r.status_code)]

        Success = False
        Queued = False
        Started = False
        if section == "SickBeard":
            if apikey:
                if r.json()['result'] == 'success':
                    Success = True
            else:
                for line in r.iter_lines():
                    if line:
                        logger.postprocess("{0}".format(line), section)
                        if "Moving file from" in line:
                            inputName = os.path.split(line)[1]
                        if "added to the queue" in line:
                            Queued = True
                        if "Processing succeeded" in line or "Successfully processed" in line:
                            Success = True
                if Queued:
                    time.sleep(60)
        elif section == "NzbDrone":
            try:
                res = json.loads(r.content)
                scan_id = int(res['id'])
                logger.debug("Scan started with id: {0}".format(scan_id), section)
                Started = True
            except Exception as e:
                logger.warning("No scan id was returned due to: {0}".format(e), section)
                scan_id = None
                Started = False

        if status != 0 and delete_failed and not os.path.dirname(dirName) == dirName:
            logger.postprocess("Deleting failed files and folder {0}".format(dirName), section)
            rmDir(dirName)

        if Success:
            return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
        elif section == "NzbDrone" and Started:
            n = 0
            params = {}
            url = "{0}/{1}".format(url, scan_id)
            while n < 6:  # set up wait_for minutes to see if command completes..
                time.sleep(10 * wait_for)
                command_status = self.command_complete(url, params, headers, section)
                if command_status and command_status in ['completed', 'failed']:
                    break
                n += 1
            if command_status:
                logger.debug("The Scan command return status: {0}".format(command_status), section)
            if not os.path.exists(dirName):
                logger.debug("The directory {0} has been removed. Renaming was successful.".format(dirName), section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            elif command_status and command_status in ['completed']:
                logger.debug("The Scan command has completed successfully. Renaming was successful.", section)
                return [0, "{0}: Successfully post-processed {1}".format(section, inputName)]
            elif command_status and command_status in ['failed']:
                logger.debug("The Scan command has failed. Renaming was not successful.", section)
                # return [1, "%s: Failed to post-process %s" % (section, inputName) ]
            if self.CDH(url2, headers, section=section):
                logger.debug("The Scan command did not return status completed, but Completed Download Handling is enabled. Passing back to {0}.".format(section), section)
                return [status, "{0}: Complete Download Handling is enabled. Passing back to {1}".format(section, section)]
            else:
                logger.warning("The Scan command did not return a valid status. Renaming was not successful.", section)
                return [1, "{0}: Failed to post-process {1}".format(section, inputName)]
        else:
            return [1, "{0}: Failed to post-process - Returned log from {1} was not as expected.".format(section, section)]  # We did not receive Success confirmation.

@@ -0,0 +1,83 @@
# coding=utf-8

from __future__ import (
    absolute_import,
    division,
    print_function,
    unicode_literals,
)

import requests

import core
from core import logger
from core.auto_process.common import ProcessResult
from core.utils import (
    convert_to_ascii,
    remote_dir,
    server_responding,
)

requests.packages.urllib3.disable_warnings()


def process(section, dir_name, input_name=None, status=0, client_agent='manual', input_category=None):
    status = int(status)

    cfg = dict(core.CFG[section][input_category])

    host = cfg['host']
    port = cfg['port']
    apikey = cfg['apikey']
    ssl = int(cfg.get('ssl', 0))
    web_root = cfg.get('web_root', '')
    protocol = 'https://' if ssl else 'http://'
    remote_path = int(cfg.get('remote_path', 0))

    url = '{0}{1}:{2}{3}/api'.format(protocol, host, port, web_root)
    if not server_responding(url):
        logger.error('Server did not respond. Exiting', section)
        return ProcessResult(
            message='{0}: Failed to post-process - {0} did not respond.'.format(section),
            status_code=1,
        )

    input_name, dir_name = convert_to_ascii(input_name, dir_name)

    params = {
        'apikey': apikey,
        'cmd': 'forceProcess',
        'dir': remote_dir(dir_name) if remote_path else dir_name,
    }

    logger.debug('Opening URL: {0} with params: {1}'.format(url, params), section)

    try:
        r = requests.get(url, params=params, verify=False, timeout=(30, 300))
    except requests.ConnectionError:
        logger.error('Unable to open URL')
        return ProcessResult(
            message='{0}: Failed to post-process - Unable to connect to {1}'.format(section, section),
            status_code=1,
        )

    logger.postprocess('{0}'.format(r.text), section)

    if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
        logger.error('Server returned status {0}'.format(r.status_code), section)
        return ProcessResult(
            message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
            status_code=1,
        )
    elif r.text == 'OK':
        logger.postprocess('SUCCESS: ForceProcess for {0} has been started in LazyLibrarian'.format(dir_name), section)
        return ProcessResult(
            message='{0}: Successfully post-processed {1}'.format(section, input_name),
            status_code=0,
        )
    else:
        logger.error('FAILED: ForceProcess of {0} has Failed in LazyLibrarian'.format(dir_name), section)
        return ProcessResult(
            message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(section),
            status_code=1,
        )
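Stripped of the config plumbing, this module performs a single LazyLibrarian API call. A sketch with a placeholder address, key, and directory (assumed values, not defaults from this module):

import requests

url = 'http://localhost:5299/api'  # placeholder LazyLibrarian address
params = {'apikey': 'YOUR_API_KEY', 'cmd': 'forceProcess', 'dir': '/downloads/books/Example.Book'}
r = requests.get(url, params=params, verify=False, timeout=(30, 300))
print(r.text)  # the module treats the literal response 'OK' as success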

@@ -0,0 +1,99 @@
# coding=utf-8

from __future__ import (
    absolute_import,
    division,
    print_function,
    unicode_literals,
)

import os

import requests

import core
from core import logger
from core.auto_process.common import ProcessResult
from core.utils import convert_to_ascii, remote_dir, server_responding

requests.packages.urllib3.disable_warnings()


def process(section, dir_name, input_name=None, status=0, client_agent='manual', input_category=None):
    apc_version = '2.04'
    comicrn_version = '1.01'

    cfg = dict(core.CFG[section][input_category])

    host = cfg['host']
    port = cfg['port']
    apikey = cfg['apikey']
    ssl = int(cfg.get('ssl', 0))
    web_root = cfg.get('web_root', '')
    remote_path = int(cfg.get('remote_path', 0))  # the default belongs inside cfg.get(); int(x, 0) would read 0 as a base
    protocol = 'https://' if ssl else 'http://'

    url = '{0}{1}:{2}{3}/api'.format(protocol, host, port, web_root)
    if not server_responding(url):
        logger.error('Server did not respond. Exiting', section)
        return ProcessResult(
            message='{0}: Failed to post-process - {0} did not respond.'.format(section),
            status_code=1,
        )

    input_name, dir_name = convert_to_ascii(input_name, dir_name)
    clean_name, ext = os.path.splitext(input_name)
    if len(ext) == 4:  # we assume this was a standard extension.
        input_name = clean_name

    params = {
        'cmd': 'forceProcess',
        'apikey': apikey,
        'nzb_folder': remote_dir(dir_name) if remote_path else dir_name,
    }

    if input_name is not None:
        params['nzb_name'] = input_name
    params['failed'] = int(status)
    params['apc_version'] = apc_version
    params['comicrn_version'] = comicrn_version

    success = False

    logger.debug('Opening URL: {0}'.format(url), section)
    try:
        r = requests.post(url, params=params, stream=True, verify=False, timeout=(30, 300))
    except requests.ConnectionError:
        logger.error('Unable to open URL', section)
        return ProcessResult(
            message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
            status_code=1,
        )
    if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
        logger.error('Server returned status {0}'.format(r.status_code), section)
        return ProcessResult(
            message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
            status_code=1,
        )

    result = r.text
    if not isinstance(result, list):
        result = result.split('\n')
    for line in result:
        if line:
            logger.postprocess('{0}'.format(line), section)
            if 'Post Processing SUCCESSFUL' in line:
                success = True

    if success:
        logger.postprocess('SUCCESS: This issue has been processed successfully', section)
        return ProcessResult(
            message='{0}: Successfully post-processed {1}'.format(section, input_name),
            status_code=0,
        )
    else:
        logger.warning('The issue does not appear to have successfully processed. Please check your Logs', section)
        return ProcessResult(
            message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(section),
            status_code=1,
        )
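The Mylar call above, reduced to its essentials; the host, port, key, and folder are placeholders, not values taken from this module:

import requests

params = {
    'cmd': 'forceProcess',
    'apikey': 'YOUR_API_KEY',
    'nzb_folder': '/downloads/comics/Example.Issue.001',
    'nzb_name': 'Example.Issue.001',
    'failed': 0,
    'apc_version': '2.04',
    'comicrn_version': '1.01',
}
r = requests.post('http://localhost:8090/api', params=params, stream=True, verify=False, timeout=(30, 300))
# success is detected by scanning the response body line by line
print(any('Post Processing SUCCESSFUL' in line for line in r.text.split('\n')))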

@@ -0,0 +1,69 @@
from __future__ import (
    absolute_import,
    division,
    print_function,
    unicode_literals,
)

import requests

from core import logger


class ProcessResult(object):
    def __init__(self, message, status_code):
        self.message = message
        self.status_code = status_code

    def __iter__(self):
        # return an iterator so a result unpacks like the legacy [status_code, message] pair;
        # returning the bare tuple would raise "iter() returned non-iterator"
        return iter((self.status_code, self.message))

    def __bool__(self):
        return not bool(self.status_code)

    def __str__(self):
        return 'Processing {0}: {1}'.format(
            'succeeded' if bool(self) else 'failed',
            self.message,
        )

    def __repr__(self):
        return '<ProcessResult {0}: {1}>'.format(
            self.status_code,
            self.message,
        )


def command_complete(url, params, headers, section):
    try:
        r = requests.get(url, params=params, headers=headers, stream=True, verify=False, timeout=(30, 60))
    except requests.ConnectionError:
        logger.error('Unable to open URL: {0}'.format(url), section)
        return None
    if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
        logger.error('Server returned status {0}'.format(r.status_code), section)
        return None
    else:
        try:
            return r.json()['state']  # the command endpoints report progress in the 'state' field
        except (ValueError, KeyError):
            # ValueError catches simplejson's JSONDecodeError and json's ValueError
            logger.error('{0} did not return expected json data.'.format(section), section)
            return None


def completed_download_handling(url2, headers, section='MAIN'):
    try:
        r = requests.get(url2, params={}, headers=headers, stream=True, verify=False, timeout=(30, 60))
    except requests.ConnectionError:
        logger.error('Unable to open URL: {0}'.format(url2), section)
        return False
    if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
        logger.error('Server returned status {0}'.format(r.status_code), section)
        return False
    else:
        try:
            return r.json().get('enableCompletedDownloadHandling', False)
        except ValueError:
            # ValueError catches simplejson's JSONDecodeError and json's ValueError
            return False
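With __iter__ returning a proper iterator, a ProcessResult behaves like the legacy [status_code, message] pairs the older modules return. A runnable example (the message text is illustrative):

result = ProcessResult(message='Movies: Successfully post-processed Example.Movie.2019', status_code=0)

status, message = result      # unpacks via __iter__ as (status_code, message)
if result:                    # __bool__ is True only when status_code == 0
    print(message)
print(repr(result))           # <ProcessResult 0: Movies: Successfully post-processed Example.Movie.2019>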

core/auto_process/games.py Normal file
@@ -0,0 +1,106 @@
# coding=utf-8

from __future__ import (
    absolute_import,
    division,
    print_function,
    unicode_literals,
)

import os
import shutil

import requests

import core
from core import logger
from core.auto_process.common import ProcessResult
from core.utils import convert_to_ascii, server_responding

requests.packages.urllib3.disable_warnings()


def process(section, dir_name, input_name=None, status=0, client_agent='manual', input_category=None):
    status = int(status)

    cfg = dict(core.CFG[section][input_category])

    host = cfg['host']
    port = cfg['port']
    apikey = cfg['apikey']
    library = cfg.get('library')
    ssl = int(cfg.get('ssl', 0))
    web_root = cfg.get('web_root', '')
    protocol = 'https://' if ssl else 'http://'

    url = '{0}{1}:{2}{3}/api'.format(protocol, host, port, web_root)
    if not server_responding(url):
        logger.error('Server did not respond. Exiting', section)
        return ProcessResult(
            message='{0}: Failed to post-process - {0} did not respond.'.format(section),
            status_code=1,
        )

    input_name, dir_name = convert_to_ascii(input_name, dir_name)

    fields = input_name.split('-')

    gamez_id = fields[0].replace('[', '').replace(']', '').replace(' ', '')

    download_status = 'Downloaded' if status == 0 else 'Wanted'

    params = {
        'api_key': apikey,
        'mode': 'UPDATEREQUESTEDSTATUS',
        'db_id': gamez_id,
        'status': download_status,
    }

    logger.debug('Opening URL: {0}'.format(url), section)

    try:
        r = requests.get(url, params=params, verify=False, timeout=(30, 300))
    except requests.ConnectionError:
        logger.error('Unable to open URL')
        return ProcessResult(
            message='{0}: Failed to post-process - Unable to connect to {1}'.format(section, section),
            status_code=1,
        )

    result = r.json()
    logger.postprocess('{0}'.format(result), section)

    if library:
        logger.postprocess('moving files to library: {0}'.format(library), section)
        try:
            shutil.move(dir_name, os.path.join(library, input_name))
        except Exception:
            logger.error('Unable to move {0} to {1}'.format(dir_name, os.path.join(library, input_name)), section)
            return ProcessResult(
                message='{0}: Failed to post-process - Unable to move files'.format(section),
                status_code=1,
            )
    else:
        logger.error('No library specified to move files to. Please edit your configuration.', section)
        return ProcessResult(
            message='{0}: Failed to post-process - No library defined in {0}'.format(section),
            status_code=1,
        )

    if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
        logger.error('Server returned status {0}'.format(r.status_code), section)
        return ProcessResult(
            message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
            status_code=1,
        )
    elif result['success']:
        logger.postprocess('SUCCESS: Status for {0} has been set to {1} in Gamez'.format(gamez_id, download_status), section)
        return ProcessResult(
            message='{0}: Successfully post-processed {1}'.format(section, input_name),
            status_code=0,
        )
    else:
        logger.error('FAILED: Status for {0} has NOT been updated in Gamez'.format(gamez_id), section)
        return ProcessResult(
            message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(section),
            status_code=1,
        )
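The Gamez status update distilled to the underlying request; the address, key, and id are placeholders for illustration only:

import requests

params = {
    'api_key': 'YOUR_API_KEY',
    'mode': 'UPDATEREQUESTEDSTATUS',
    'db_id': '12345',                 # placeholder Gamez database id
    'status': 'Downloaded',           # 'Wanted' is sent instead when the download failed
}
r = requests.get('http://localhost:8085/api', params=params, verify=False, timeout=(30, 300))
print(r.json().get('success'))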

@@ -0,0 +1,155 @@
import time

import requests

from core import logger
from core.auto_process.common import ProcessResult
from core.auto_process.managers.sickbeard import SickBeard


class PyMedusa(SickBeard):
    """PyMedusa class."""

    def __init__(self, sb_init):
        super(PyMedusa, self).__init__(sb_init)

    def _create_url(self):
        return '{0}{1}:{2}{3}/home/postprocess/processEpisode'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)


class PyMedusaApiV1(SickBeard):
    """PyMedusa apiv1 class."""

    def __init__(self, sb_init):
        super(PyMedusaApiV1, self).__init__(sb_init)

    def _create_url(self):
        return '{0}{1}:{2}{3}/api/{4}/'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root, self.sb_init.apikey)

    def api_call(self):
        self._process_fork_prarams()
        url = self._create_url()
        logger.debug('Opening URL: {0} with params: {1}'.format(url, self.sb_init.fork_params), self.sb_init.section)
        try:
            response = self.session.get(url, auth=(self.sb_init.username, self.sb_init.password), params=self.sb_init.fork_params, stream=True, verify=False, timeout=(30, 1800))
        except requests.ConnectionError:
            logger.error('Unable to open URL: {0}'.format(url), self.sb_init.section)
            return ProcessResult(
                message='{0}: Failed to post-process - Unable to connect to {0}'.format(self.sb_init.section),
                status_code=1,
            )

        if response.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
            logger.error('Server returned status {0}'.format(response.status_code), self.sb_init.section)
            return ProcessResult(
                message='{0}: Failed to post-process - Server returned status {1}'.format(self.sb_init.section, response.status_code),
                status_code=1,
            )

        if response.json()['result'] == 'success':
            return ProcessResult(
                message='{0}: Successfully post-processed {1}'.format(self.sb_init.section, self.input_name),
                status_code=0,
            )
        return ProcessResult(
            message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(self.sb_init.section),
            status_code=1,  # We did not receive Success confirmation.
        )


class PyMedusaApiV2(SickBeard):
    """PyMedusa apiv2 class."""

    def __init__(self, sb_init):
        super(PyMedusaApiV2, self).__init__(sb_init)

        # Check for an apikey, as this is required with using fork = medusa-apiv2
        if not sb_init.apikey:
            raise Exception('For the section SickBeard `fork = medusa-apiv2` you also need to configure an `apikey`')

    def _create_url(self):
        return '{0}{1}:{2}{3}/api/v2/postprocess'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)

    def _get_identifier_status(self, url):
        # Loop through requesting medusa for the status on the queueitem.
        try:
            response = self.session.get(url, verify=False, timeout=(30, 1800))
        except requests.ConnectionError:
            logger.error('Unable to get postprocess identifier status', self.sb_init.section)
            return False

        try:
            jdata = response.json()
        except ValueError:
            return False

        return jdata

    def api_call(self):
        self._process_fork_prarams()
        url = self._create_url()

        logger.debug('Opening URL: {0}'.format(url), self.sb_init.section)
        payload = self.sb_init.fork_params
        payload['resource'] = self.sb_init.fork_params['nzbName']
        del payload['nzbName']

        # Update the session with the x-api-key
        self.session.headers.update({
            'x-api-key': self.sb_init.apikey,
            'Content-type': 'application/json'
        })

        # Send postprocess request
        try:
            response = self.session.post(url, json=payload, verify=False, timeout=(30, 1800))
        except requests.ConnectionError:
            logger.error('Unable to send postprocess request', self.sb_init.section)
            return ProcessResult(
                message='{0}: Unable to send postprocess request to PyMedusa'.format(self.sb_init.section),
                status_code=1,
            )

        # Get UUID
        if response:
            try:
                jdata = response.json()
            except ValueError:
                logger.debug('No data returned from provider')
                return False

        if not jdata.get('status') or not jdata['status'] == 'success':
            return False

        queueitem_identifier = jdata['queueItem']['identifier']

        wait_for = int(self.sb_init.config.get('wait_for', 2))
        n = 0
        response = {}
        url = '{0}/{1}'.format(url, queueitem_identifier)
        while n < 12:  # set up wait_for minutes to see if command completes..
            time.sleep(5 * wait_for)
            response = self._get_identifier_status(url)
            if response and response.get('success'):
                break
            if response and 'error' in response:
                break
            n += 1

        # Log Medusa's PP logs here.
        if response.get('output'):
            for line in response['output']:
                logger.postprocess('{0}'.format(line), self.sb_init.section)

        # For now this will most likely always be True. But in the future we could return an exit state
        # for when the PP in medusa didn't yield an expected result.
        if response.get('success'):
            return ProcessResult(
                message='{0}: Successfully post-processed {1}'.format(self.sb_init.section, self.input_name),
                status_code=0,
            )
        return ProcessResult(
            message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(self.sb_init.section),
            status_code=1,  # We did not receive Success confirmation.
        )
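The apiv2 flow is POST-then-poll: submit the job, then query its queue identifier until it reports success or an error. A standalone sketch of the polling half, with a stubbed fetcher in place of the real HTTP call (names are illustrative):

import time

def poll_queue_item(fetch, attempts=12, delay=10):
    # Call fetch() until it reports success or carries an 'error' key.
    response = {}
    for _ in range(attempts):
        response = fetch() or {}
        if response.get('success') or 'error' in response:
            break
        time.sleep(delay)
    return response

# Stubbed example: the second poll reports success.
replies = iter([{}, {'success': True, 'output': ['Processing succeeded']}])
result = poll_queue_item(lambda: next(replies, {}), delay=0)
print(result.get('success'))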

@@ -0,0 +1,500 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import copy
import core
from core import logger
from core.auto_process.common import (
ProcessResult,
)
from core.utils import remote_dir
from oauthlib.oauth2 import LegacyApplicationClient
import requests
from requests_oauthlib import OAuth2Session
import six
from six import iteritems
class InitSickBeard(object):
"""Sickbeard init class.
Used to determin which sickbeard fork object to initialize.
"""
def __init__(self, cfg, section, input_category):
# As a bonus let's also put the config on self.
self.config = cfg
self.section = section
self.input_category = input_category
self.host = cfg['host']
self.port = cfg['port']
self.ssl = int(cfg.get('ssl', 0))
self.web_root = cfg.get('web_root', '')
self.protocol = 'https://' if self.ssl else 'http://'
self.username = cfg.get('username', '')
self.password = cfg.get('password', '')
self.apikey = cfg.get('apikey', '')
self.api_version = int(cfg.get('api_version', 2))
self.sso_username = cfg.get('sso_username', '')
self.sso_password = cfg.get('sso_password', '')
self.fork = ''
self.fork_params = None
self.fork_obj = None
replace = {
'medusa': 'Medusa',
'medusa-api': 'Medusa-api',
'sickbeard-api': 'SickBeard-api',
'sickgear': 'SickGear',
'sickchill': 'SickChill',
'stheno': 'Stheno',
}
_val = cfg.get('fork', 'auto')
f1 = replace.get(_val, _val)
try:
self.fork = f1, core.FORKS[f1]
except KeyError:
self.fork = 'auto'
self.protocol = 'https://' if self.ssl else 'http://'
def auto_fork(self):
# auto-detect correct section
# config settings
if core.FORK_SET: # keep using determined fork for multiple (manual) post-processing
logger.info('{section}:{category} fork already set to {fork}'.format
(section=self.section, category=self.input_category, fork=core.FORK_SET[0]))
return core.FORK_SET[0], core.FORK_SET[1]
cfg = dict(core.CFG[self.section][self.input_category])
replace = {
'medusa': 'Medusa',
'medusa-api': 'Medusa-api',
'medusa-apiv1': 'Medusa-api',
'medusa-apiv2': 'Medusa-apiv2',
'sickbeard-api': 'SickBeard-api',
'sickgear': 'SickGear',
'sickchill': 'SickChill',
'stheno': 'Stheno',
}
_val = cfg.get('fork', 'auto')
f1 = replace.get(_val.lower(), _val)
try:
self.fork = f1, core.FORKS[f1]
except KeyError:
self.fork = 'auto'
protocol = 'https://' if self.ssl else 'http://'
if self.section == 'NzbDrone':
logger.info('Attempting to verify {category} fork'.format
(category=self.input_category))
url = '{protocol}{host}:{port}{root}/api/rootfolder'.format(
protocol=protocol, host=self.host, port=self.port, root=self.web_root,
)
headers = {'X-Api-Key': self.apikey}
try:
r = requests.get(url, headers=headers, stream=True, verify=False)
except requests.ConnectionError:
logger.warning('Could not connect to {0}:{1} to verify fork!'.format(self.section, self.input_category))
r = None  # make sure r exists when the request fails
if not r or not r.ok:
logger.warning('Connection to {section}:{category} failed! '
'Check your configuration'.format
(section=self.section, category=self.input_category))
self.fork = ['default', {}]
elif self.section == 'SiCKRAGE':
logger.info('Attempting to verify {category} fork'.format
(category=self.input_category))
if self.api_version >= 2:
url = '{protocol}{host}:{port}{root}/api/v{api_version}/ping'.format(
protocol=protocol, host=self.host, port=self.port, root=self.web_root, api_version=self.api_version
)
api_params = {}
else:
url = '{protocol}{host}:{port}{root}/api/v{api_version}/{apikey}/'.format(
protocol=protocol, host=self.host, port=self.port, root=self.web_root, api_version=self.api_version, apikey=self.apikey,
)
api_params = {'cmd': 'postprocess', 'help': '1'}
try:
if self.api_version >= 2 and self.sso_username and self.sso_password:
oauth = OAuth2Session(client=LegacyApplicationClient(client_id=core.SICKRAGE_OAUTH_CLIENT_ID))
oauth_token = oauth.fetch_token(client_id=core.SICKRAGE_OAUTH_CLIENT_ID,
token_url=core.SICKRAGE_OAUTH_TOKEN_URL,
username=self.sso_username,
password=self.sso_password)
r = requests.get(url, headers={'Authorization': 'Bearer ' + oauth_token['access_token']}, stream=True, verify=False)
else:
r = requests.get(url, params=api_params, stream=True, verify=False)
if not r.ok:
logger.warning('Connection to {section}:{category} failed! '
'Check your configuration'.format(
section=self.section, category=self.input_category
))
except requests.ConnectionError:
logger.warning('Could not connect to {0}:{1} to verify API version!'.format(self.section, self.input_category))
params = {
'path': None,
'failed': None,
'process_method': None,
'force_replace': None,
'return_data': None,
'type': None,
'delete': None,
'force_next': None,
'is_priority': None
}
self.fork = ['default', params]
elif self.fork == 'auto':
self.detect_fork()
logger.info('{section}:{category} fork set to {fork}'.format
(section=self.section, category=self.input_category, fork=self.fork[0]))
core.FORK_SET = self.fork
self.fork, self.fork_params = self.fork[0], self.fork[1]
# This will create the fork object, and attach to self.fork_obj.
self._init_fork()
return self.fork, self.fork_params
@staticmethod
def _api_check(r, params, rem_params):
try:
json_data = r.json()
except ValueError:
logger.error('Failed to get JSON data from response')
logger.debug('Response received')
raise
try:
json_data = json_data['data']
except KeyError:
logger.error('Failed to get data from JSON')
logger.debug('Response received: {}'.format(json_data))
raise
else:
if six.PY3:
str_type = (str)
else:
str_type = (str, unicode)
if isinstance(json_data, str_type):
return rem_params, False
json_data = json_data.get('data', json_data)
try:
optional_parameters = json_data['optionalParameters'].keys()
# Find excess parameters
excess_parameters = set(params).difference(optional_parameters)
excess_parameters.remove('cmd') # Don't remove cmd from api params
logger.debug('Removing excess parameters: {}'.format(sorted(excess_parameters)))
rem_params.extend(excess_parameters)
return rem_params, True
except Exception:  # optionalParameters missing or malformed
logger.error('Failed to identify optionalParameters')
return rem_params, False
def detect_fork(self):
"""Try to detect a specific fork."""
detected = False
params = core.ALL_FORKS
rem_params = []
logger.info('Attempting to auto-detect {category} fork'.format(category=self.input_category))
# define the order to test. Default must be first since the default fork doesn't reject parameters.
# then in order of most unique parameters.
if self.apikey:
url = '{protocol}{host}:{port}{root}/api/{apikey}/'.format(
protocol=self.protocol, host=self.host, port=self.port, root=self.web_root, apikey=self.apikey,
)
api_params = {'cmd': 'sg.postprocess', 'help': '1'}
else:
url = '{protocol}{host}:{port}{root}/home/postprocess/'.format(
protocol=self.protocol, host=self.host, port=self.port, root=self.web_root,
)
api_params = {}
# attempting to auto-detect fork
try:
s = requests.Session()
if not self.apikey and self.username and self.password:
login = '{protocol}{host}:{port}{root}/login'.format(
protocol=self.protocol, host=self.host, port=self.port, root=self.web_root)
login_params = {'username': self.username, 'password': self.password}
r = s.get(login, verify=False, timeout=(30, 60))
if r.status_code in [401, 403] and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
s.post(login, data=login_params, stream=True, verify=False)
r = s.get(url, auth=(self.username, self.password), params=api_params, verify=False)
except requests.ConnectionError:
logger.info('Could not connect to {section}:{category} to perform auto-fork detection!'.format
(section=self.section, category=self.input_category))
r = []
if r and r.ok:
if self.apikey:
rem_params, found = self._api_check(r, params, rem_params)
if found:
params['cmd'] = 'sg.postprocess'
else: # try different api set for non-SickGear forks.
api_params = {'cmd': 'help', 'subject': 'postprocess'}
try:
if not self.apikey and self.username and self.password:
r = s.get(url, auth=(self.username, self.password), params=api_params, verify=False)
else:
r = s.get(url, params=api_params, verify=False)
except requests.ConnectionError:
logger.info('Could not connect to {section}:{category} to perform auto-fork detection!'.format
(section=self.section, category=self.input_category))
rem_params, found = self._api_check(r, params, rem_params)
params['cmd'] = 'postprocess'
else:
# Find excess parameters
rem_params.extend(
param
for param in params
if 'name="{param}"'.format(param=param) not in r.text
)
# Remove excess params
for param in rem_params:
params.pop(param)
for fork in sorted(iteritems(core.FORKS), reverse=False):
if params == fork[1]:
detected = True
break
if detected:
self.fork = fork
logger.info('{section}:{category} fork auto-detection successful ...'.format
(section=self.section, category=self.input_category))
elif rem_params:
logger.info('{section}:{category} fork auto-detection found custom params {params}'.format
(section=self.section, category=self.input_category, params=params))
self.fork = ['custom', params]
else:
logger.info('{section}:{category} fork auto-detection failed'.format
(section=self.section, category=self.input_category))
self.fork = (core.FORK_DEFAULT, core.FORKS[core.FORK_DEFAULT])
def _init_fork(self):
# These need to be imported here, to prevent a circular import.
from .pymedusa import PyMedusa, PyMedusaApiV1, PyMedusaApiV2
mapped_forks = {
'Medusa': PyMedusa,
'Medusa-api': PyMedusaApiV1,
'Medusa-apiv2': PyMedusaApiV2
}
logger.debug('Create object for fork {fork}'.format(fork=self.fork))
if self.fork and mapped_forks.get(self.fork):
# Create the fork object and pass self (SickBeardInit) to it for all the data, like Config.
self.fork_obj = mapped_forks[self.fork](self)
else:
logger.debug('{section}:{category} Could not create a fork object for {fork}. Probably the class has not been added yet.'.format(
section=self.section, category=self.input_category, fork=self.fork)
)
class SickBeard(object):
"""Sickbeard base class."""
def __init__(self, sb_init):
"""SB constructor."""
self.sb_init = sb_init
self.session = requests.Session()
self.failed = None
self.status = None
self.input_name = None
self.dir_name = None
self.delete_failed = int(self.sb_init.config.get('delete_failed', 0))
self.nzb_extraction_by = self.sb_init.config.get('nzbExtractionBy', 'Downloader')
self.process_method = self.sb_init.config.get('process_method')
self.remote_path = int(self.sb_init.config.get('remote_path', 0))
self.wait_for = int(self.sb_init.config.get('wait_for', 2))
self.force = int(self.sb_init.config.get('force', 0))
self.delete_on = int(self.sb_init.config.get('delete_on', 0))
self.ignore_subs = int(self.sb_init.config.get('ignore_subs', 0))
self.is_priority = int(self.sb_init.config.get('is_priority', 0))
# get importmode, default to 'Move' for consistency with legacy
self.import_mode = self.sb_init.config.get('importMode', 'Move')
# Keep track of result state
self.success = False
def initialize(self, dir_name, input_name=None, failed=False, client_agent='manual'):
"""We need to call this explicitely because we need some variables.
We can't pass these directly through the constructor.
"""
self.dir_name = dir_name
self.input_name = input_name
self.failed = failed
self.status = int(self.failed)
if self.status > 0 and core.NOEXTRACTFAILED:
self.extract = 0
else:
self.extract = int(self.sb_init.config.get('extract', 0))
if client_agent == core.TORRENT_CLIENT_AGENT and core.USE_LINK == 'move-sym':
self.process_method = 'symlink'
def _create_url(self):
if self.sb_init.apikey:
return '{0}{1}:{2}{3}/api/{4}/'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root, self.sb_init.apikey)
return '{0}{1}:{2}{3}/home/postprocess/processEpisode'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)
def _process_fork_prarams(self):
# configure SB params to pass
fork_params = self.sb_init.fork_params
fork_params['quiet'] = 1
fork_params['proc_type'] = 'manual'
if self.input_name is not None:
fork_params['nzbName'] = self.input_name
for param in copy.copy(fork_params):
if param == 'failed':
if self.failed > 1:
self.failed = 1
fork_params[param] = self.failed
if 'proc_type' in fork_params:
del fork_params['proc_type']
if 'type' in fork_params:
del fork_params['type']
if param == 'return_data':
fork_params[param] = 0
if 'quiet' in fork_params:
del fork_params['quiet']
if param == 'type':
if 'type' in fork_params: # only set if we haven't already deleted for 'failed' above.
fork_params[param] = 'manual'
if 'proc_type' in fork_params:
del fork_params['proc_type']
if param in ['dir_name', 'dir', 'proc_dir', 'process_directory', 'path']:
fork_params[param] = self.dir_name
if self.remote_path:
fork_params[param] = remote_dir(self.dir_name)
# SickChill allows multiple path types. Only return 'path'.
if param == 'proc_dir' and 'path' in fork_params:
del fork_params['proc_dir']
if param == 'process_method':
if self.process_method:
fork_params[param] = self.process_method
else:
del fork_params[param]
if param in ['force', 'force_replace']:
if self.force:
fork_params[param] = self.force
else:
del fork_params[param]
if param in ['delete_on', 'delete']:
if self.delete_on:
fork_params[param] = self.delete_on
else:
del fork_params[param]
if param == 'ignore_subs':
if self.ignore_subs:
fork_params[param] = self.ignore_subs
else:
del fork_params[param]
if param == 'is_priority':
if self.is_priority:
fork_params[param] = self.is_priority
else:
del fork_params[param]
if param == 'force_next':
fork_params[param] = 1
# delete any unused params so we don't pass them to SB by mistake
for key, value in list(fork_params.items()):
    if value is None:
        fork_params.pop(key)
def api_call(self):
"""Perform a base sickbeard api call."""
self._process_fork_prarams()
url = self._create_url()
logger.debug('Opening URL: {0} with params: {1}'.format(url, self.sb_init.fork_params), self.sb_init.section)
try:
if not self.sb_init.apikey and self.sb_init.username and self.sb_init.password:
# If not using the api, we need to login using user/pass first.
login = '{0}{1}:{2}{3}/login'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)
login_params = {'username': self.sb_init.username, 'password': self.sb_init.password}
r = self.session.get(login, verify=False, timeout=(30, 60))
if r.status_code in [401, 403] and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
self.session.post(login, data=login_params, stream=True, verify=False, timeout=(30, 60))
response = self.session.get(url, auth=(self.sb_init.username, self.sb_init.password), params=self.sb_init.fork_params, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), self.sb_init.section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(self.sb_init.section),
status_code=1,
)
if response.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(response.status_code), self.sb_init.section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(self.sb_init.section, response.status_code),
status_code=1,
)
return self.process_response(response)
def process_response(self, response):
"""Iterate over the lines returned, and log.
:param response: Streamed Requests response object.
This method will need to be overwritten in the forks, for alternative response handling.
"""
for line in response.iter_lines():
if line:
line = line.decode('utf-8')
logger.postprocess('{0}'.format(line), self.sb_init.section)
# if 'Moving file from' in line:
# input_name = os.path.split(line)[1]
# if 'added to the queue' in line:
# queued = True
# For this refactoring, only vanilla SickBeard is considered in the base class.
if 'Processing succeeded' in line or 'Successfully processed' in line:
self.success = True
if self.success:
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(self.sb_init.section, self.input_name),
status_code=0,
)
return ProcessResult(
message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(self.sb_init.section),
status_code=1, # We did not receive Success confirmation.
)

core/auto_process/movies.py (new file, 592 lines)

@@ -0,0 +1,592 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import json
import os
import time
import requests
import core
from core import logger, transcoder
from core.auto_process.common import (
ProcessResult,
command_complete,
completed_download_handling,
)
from core.plugins.downloaders.nzb.utils import report_nzb
from core.plugins.subtitles import import_subs, rename_subs
from core.scene_exceptions import process_all_exceptions
from core.utils import (
convert_to_ascii,
find_download,
find_imdbid,
list_media_files,
remote_dir,
remove_dir,
server_responding,
)
requests.packages.urllib3.disable_warnings()
def process(section, dir_name, input_name=None, status=0, client_agent='manual', download_id='', input_category=None, failure_link=None):
cfg = dict(core.CFG[section][input_category])
host = cfg['host']
port = cfg['port']
apikey = cfg['apikey']
if section == 'CouchPotato':
method = cfg['method']
else:
method = None
# added importMode for Radarr config
if section == 'Radarr':
import_mode = cfg.get('importMode', 'Move')
else:
import_mode = None
delete_failed = int(cfg['delete_failed'])
wait_for = int(cfg['wait_for'])
ssl = int(cfg.get('ssl', 0))
web_root = cfg.get('web_root', '')
remote_path = int(cfg.get('remote_path', 0))
protocol = 'https://' if ssl else 'http://'
omdbapikey = cfg.get('omdbapikey', '')
no_status_check = int(cfg.get('no_status_check', 0))
status = int(status)
if status > 0 and core.NOEXTRACTFAILED:
extract = 0
else:
extract = int(cfg.get('extract', 0))
imdbid, dir_name = find_imdbid(dir_name, input_name, omdbapikey)
if section == 'CouchPotato':
base_url = '{0}{1}:{2}{3}/api/{4}/'.format(protocol, host, port, web_root, apikey)
if section == 'Radarr':
base_url = '{0}{1}:{2}{3}/api/v3/command'.format(protocol, host, port, web_root)
url2 = '{0}{1}:{2}{3}/api/v3/config/downloadClient'.format(protocol, host, port, web_root)
headers = {'X-Api-Key': apikey, 'Content-Type': 'application/json'}
if section == 'Watcher3':
base_url = '{0}{1}:{2}{3}/postprocessing'.format(protocol, host, port, web_root)
if not apikey:
logger.info('No CouchPotato, Radarr, or Watcher3 apikey entered. Performing transcoder functions only')
release = None
elif server_responding(base_url):
if section == 'CouchPotato':
release = get_release(base_url, imdbid, download_id)
else:
release = None
else:
logger.error('Server did not respond. Exiting', section)
return ProcessResult(
message='{0}: Failed to post-process - {0} did not respond.'.format(section),
status_code=1,
)
# pull info from release found if available
release_id = None
media_id = None
downloader = None
release_status_old = None
if release:
try:
release_id = list(release.keys())[0]
media_id = release[release_id]['media_id']
download_id = release[release_id]['download_info']['id']
downloader = release[release_id]['download_info']['downloader']
release_status_old = release[release_id]['status']
except Exception:
pass
if not os.path.isdir(dir_name) and os.path.isfile(dir_name): # If the input directory is a file, assume single file download and split dir/name.
dir_name = os.path.split(os.path.normpath(dir_name))[0]
specific_path = os.path.join(dir_name, str(input_name))
clean_name = os.path.splitext(specific_path)
if clean_name[1] == '.nzb':
specific_path = clean_name[0]
if os.path.isdir(specific_path):
dir_name = specific_path
process_all_exceptions(input_name, dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
if not list_media_files(dir_name, media=True, audio=False, meta=False, archives=False) and list_media_files(dir_name, media=False, audio=False, meta=False, archives=True) and extract:
logger.debug('Checking for archives to extract in directory: {0}'.format(dir_name))
core.extract_files(dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
good_files = 0
valid_files = 0
num_files = 0
# Check video files for corruption
for video in list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
num_files += 1
if transcoder.is_video_good(video, status):
good_files += 1
if not core.REQUIRE_LAN or transcoder.is_video_good(video, status, require_lan=core.REQUIRE_LAN):
valid_files += 1
import_subs(video)
rename_subs(dir_name)
if num_files and valid_files == num_files:
if status:
logger.info('Status shown as failed from Downloader, but {0} valid video files found. Setting as success.'.format(good_files), section)
status = 0
elif num_files and valid_files < num_files:
logger.info('Status shown as success from Downloader, but corrupt video files found. Setting as failed.', section)
status = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if good_files == num_files:
logger.debug('Video marked as failed due to missing required language: {0}'.format(core.REQUIRE_LAN), section)
else:
logger.debug('Video marked as failed due to missing playable audio or video', section)
if good_files < num_files and failure_link: # only report corrupt files
failure_link += '&corrupt=true'
elif client_agent == 'manual':
logger.warning('No media files found in directory {0} to manually process.'.format(dir_name), section)
return ProcessResult(
message='',
status_code=0, # Success (as far as this script is concerned)
)
else:
logger.warning('No media files found in directory {0}. Processing this as a failed download'.format(dir_name), section)
status = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if status == 0:
if core.TRANSCODE == 1:
result, new_dir_name = transcoder.transcode_directory(dir_name)
if result == 0:
logger.debug('Transcoding succeeded for files in {0}'.format(dir_name), section)
dir_name = new_dir_name
chmod_directory = int(str(cfg.get('chmodDirectory', '0')), 8)
logger.debug('Config setting \'chmodDirectory\' currently set to {0}'.format(oct(chmod_directory)), section)
if chmod_directory:
logger.info('Attempting to set the octal permission of \'{0}\' on directory \'{1}\''.format(oct(chmod_directory), dir_name), section)
core.rchmod(dir_name, chmod_directory)
else:
logger.error('Transcoding failed for files in {0}'.format(dir_name), section)
return ProcessResult(
message='{0}: Failed to post-process - Transcoding failed'.format(section),
status_code=1,
)
for video in list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
if not release and '.cp(tt' not in video and imdbid:
video_name, video_ext = os.path.splitext(video)
video2 = '{0}.cp({1}){2}'.format(video_name, imdbid, video_ext)
if not (client_agent in [core.TORRENT_CLIENT_AGENT, 'manual'] and core.USE_LINK == 'move-sym'):
logger.debug('Renaming: {0} to: {1}'.format(video, video2))
os.rename(video, video2)
if not apikey: # If only using Transcoder functions, exit here.
logger.info('No CouchPotato, Radarr, or Watcher3 apikey entered. Processing completed.')
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
params = {
'media_folder': remote_dir(dir_name) if remote_path else dir_name,
}
if download_id and release_id:
params['downloader'] = downloader or client_agent
params['download_id'] = download_id
if section == 'CouchPotato':
if method == 'manage':
command = 'manage.update'
params.clear()
else:
command = 'renamer.scan'
url = '{0}{1}'.format(base_url, command)
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params), section)
logger.postprocess('Starting {0} scan for {1}'.format(method, input_name), section)
if section == 'Radarr':
payload = {'name': 'DownloadedMoviesScan', 'path': params['media_folder'], 'downloadClientId': download_id, 'importMode': import_mode}
if not download_id:
payload.pop('downloadClientId')
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(base_url, payload), section)
logger.postprocess('Starting DownloadedMoviesScan scan for {0}'.format(input_name), section)
if section == 'Watcher3':
if input_name and os.path.isfile(os.path.join(dir_name, input_name)):
params['media_folder'] = os.path.join(params['media_folder'], input_name)
payload = {'apikey': apikey, 'path': params['media_folder'], 'guid': download_id, 'mode': 'complete'}
if not download_id:
payload.pop('guid')
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(base_url, payload), section)
logger.postprocess('Starting postprocessing scan for {0}'.format(input_name), section)
try:
if section == 'CouchPotato':
r = requests.get(url, params=params, verify=False, timeout=(30, 1800))
elif section == 'Watcher3':
r = requests.post(base_url, data=payload, verify=False, timeout=(30, 1800))
else:
r = requests.post(base_url, data=json.dumps(payload), headers=headers, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL', section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
)
result = r.json()
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
status_code=1,
)
elif section == 'CouchPotato' and result['success']:
logger.postprocess('SUCCESS: Finished {0} scan for folder {1}'.format(method, dir_name), section)
if method == 'manage':
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif section == 'Radarr':
try:
if isinstance(result, list):
scan_id = int(result[0]['id'])
else:
scan_id = int(result['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
except Exception as e:
logger.warning('No scan id was returned due to: {0}'.format(e), section)
scan_id = None
elif section == 'Watcher3' and result['status'] == 'finished':
logger.postprocess('Watcher3 updated status to {0}'.format(result['tasks']['update_movie_status']))
if result['tasks']['update_movie_status'] == 'Finished':
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=status,
)
else:
return ProcessResult(
message='{0}: Failed to post-process - changed status to {1}'.format(section, result['tasks']['update_movie_status']),
status_code=1,
)
else:
logger.error('FAILED: {0} scan was unable to finish for folder {1}. exiting!'.format(method, dir_name),
section)
return ProcessResult(
message='{0}: Failed to post-process - Server did not return success'.format(section),
status_code=1,
)
else:
core.FAILED = True
logger.postprocess('FAILED DOWNLOAD DETECTED FOR {0}'.format(input_name), section)
if failure_link:
report_nzb(failure_link, client_agent)
if section == 'Radarr':
logger.postprocess('SUCCESS: Sending failed download to {0} for CDH processing'.format(section), section)
return ProcessResult(
message='{0}: Sending failed download back to {0}'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
) # Return failed flag, but log the event as successful.
elif section == 'Watcher3':
logger.postprocess('Sending failed download to {0} for CDH processing'.format(section), section)
path = remote_dir(dir_name) if remote_path else dir_name
if input_name and os.path.isfile(os.path.join(dir_name, input_name)):
path = os.path.join(path, input_name)
payload = {'apikey': apikey, 'path': path, 'guid': download_id, 'mode': 'failed'}
r = requests.post(base_url, data=payload, verify=False, timeout=(30, 1800))
result = r.json()
logger.postprocess('Watcher3 response: {0}'.format(result))
if result['status'] == 'finished':
return ProcessResult(
message='{0}: Sending failed download back to {0}'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
) # Return failed flag, but log the event as successful.
if delete_failed and os.path.isdir(dir_name) and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
remove_dir(dir_name)
if not release_id and not media_id:
logger.error('Could not find a downloaded movie in the database matching {0}, exiting!'.format(input_name),
section)
return ProcessResult(
message='{0}: Failed to post-process - Failed download not found in {0}'.format(section),
status_code=1,
)
if release_id:
logger.postprocess('Setting failed release {0} to ignored ...'.format(input_name), section)
url = '{url}release.ignore'.format(url=base_url)
params = {'id': release_id}
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params), section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 120))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url), section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
)
result = r.json()
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
return ProcessResult(
status_code=1,
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
)
elif result['success']:
logger.postprocess('SUCCESS: {0} has been set to ignored ...'.format(input_name), section)
else:
logger.warning('FAILED: Unable to set {0} to ignored!'.format(input_name), section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to set {1} to ignored'.format(section, input_name),
status_code=1,
)
logger.postprocess('Trying to snatch the next highest ranked release.', section)
url = '{0}movie.searcher.try_next'.format(base_url)
logger.debug('Opening URL: {0}'.format(url), section)
try:
r = requests.get(url, params={'media_id': media_id}, verify=False, timeout=(30, 600))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url), section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
)
result = r.json()
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
status_code=1,
)
elif result['success']:
logger.postprocess('SUCCESS: Snatched the next highest release ...', section)
return ProcessResult(
message='{0}: Successfully snatched next highest release'.format(section),
status_code=0,
)
else:
logger.postprocess('SUCCESS: Unable to find a new release to snatch now. CP will keep searching!', section)
return ProcessResult(
status_code=0,
message='{0}: No new release found now. {0} will keep searching'.format(section),
)
# Added a release that was not in the wanted list, so confirm the rename succeeded by finding this movie via media.list.
if not release:
download_id = None # we don't want to filter new releases based on this.
if no_status_check:
return ProcessResult(
status_code=0,
message='{0}: Successfully processed but no change in status confirmed'.format(section),
)
# we will now check to see if CPS has finished renaming before returning to TorrentToMedia and unpausing.
timeout = time.time() + 60 * wait_for
while time.time() < timeout: # only wait 2 (default) minutes, then return.
logger.postprocess('Checking for status change, please stand by ...', section)
if section == 'CouchPotato':
release = get_release(base_url, imdbid, download_id, release_id)
scan_id = None
else:
release = None
if release:
try:
release_id = list(release.keys())[0]
release_status_new = release[release_id]['status']
if release_status_old is None: # we didn't have a release before, but now we do.
title = release[release_id]['title']
logger.postprocess('SUCCESS: Movie {0} has now been added to CouchPotato with release status of [{1}]'.format(
title, str(release_status_new).upper()), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
if release_status_new != release_status_old:
logger.postprocess('SUCCESS: Release {0} has now been marked with a status of [{1}]'.format(
release_id, str(release_status_new).upper()), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
except Exception:
pass
elif scan_id:
url = '{0}/{1}'.format(base_url, scan_id)
command_status = command_complete(url, params, headers, section)
if command_status:
logger.debug('The Scan command return status: {0}'.format(command_status), section)
if command_status in ['completed']:
logger.debug('The Scan command has completed successfully. Renaming was successful.', section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif command_status in ['failed']:
logger.debug('The Scan command has failed. Renaming was not successful.', section)
# return ProcessResult(
# message='{0}: Failed to post-process {1}'.format(section, input_name),
# status_code=1,
# )
if not os.path.isdir(dir_name):
logger.postprocess('SUCCESS: Input Directory [{0}] has been processed and removed'.format(
dir_name), section)
return ProcessResult(
status_code=0,
message='{0}: Successfully post-processed {1}'.format(section, input_name),
)
elif not list_media_files(dir_name, media=True, audio=False, meta=False, archives=True):
logger.postprocess('SUCCESS: Input Directory [{0}] has no remaining media files. This has been fully processed.'.format(
dir_name), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
# pause and let CouchPotatoServer/Radarr catch its breath
time.sleep(10 * wait_for)
# The status hasn't changed. We have waited wait_for minutes, which is more than enough. uTorrent can resume seeding now.
if section == 'Radarr' and completed_download_handling(url2, headers, section=section):
logger.debug('The Scan command did not return status completed, but complete Download Handling is enabled. Passing back to {0}.'.format(section), section)
return ProcessResult(
message='{0}: Complete Download Handling is enabled. Passing back to {0}'.format(section),
status_code=status,
)
logger.warning(
'{0} does not appear to have changed status after {1} minutes. Please check your logs.'.format(input_name, wait_for),
section,
)
return ProcessResult(
status_code=1,
message='{0}: Failed to post-process - No change in status'.format(section),
)
def get_release(base_url, imdb_id=None, download_id=None, release_id=None):
results = {}
params = {}
# determine cmd and params to send to CouchPotato to get our results
section = 'movies'
cmd = 'media.list'
if release_id or imdb_id:
section = 'media'
cmd = 'media.get'
params['id'] = release_id or imdb_id
if not (release_id or imdb_id or download_id):
logger.debug('No information available to filter CP results')
return results
url = '{0}{1}'.format(base_url, cmd)
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params))
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 60))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url))
return results
try:
result = r.json()
except ValueError:
# ValueError catches simplejson's JSONDecodeError and json's ValueError
logger.error('CouchPotato returned the following non-json data')
for line in r.iter_lines():
logger.error('{0}'.format(line))
return results
if not result['success']:
if 'error' in result:
logger.error('{0}'.format(result['error']))
else:
logger.error('no media found for id {0}'.format(params['id']))
return results
# Gather release info and return it back, no need to narrow results
if release_id:
try:
cur_id = result[section]['_id']
results[cur_id] = result[section]
return results
except Exception:
pass
# Gather release info and proceed with trying to narrow results to one release choice
movies = result[section]
if not isinstance(movies, list):
movies = [movies]
for movie in movies:
if movie['status'] not in ['active', 'done']:
continue
releases = movie['releases']
if not releases:
continue
for release in releases:
try:
if release['status'] not in ['snatched', 'downloaded', 'done']:
continue
if download_id:
if download_id.lower() != release['download_info']['id'].lower():
continue
cur_id = release['_id']
results[cur_id] = release
results[cur_id]['title'] = movie['title']
except Exception:
continue
# Narrow results by removing old releases by comparing their last_edit field
if len(results) > 1:
rem_id = set()
for id1, x1 in results.items():
for x2 in results.values():
try:
if x2['last_edit'] > x1['last_edit']:
rem_id.add(id1)
except Exception:
continue
for key in rem_id:
    results.pop(key)
# Search downloads on clients for a match to try and narrow our results down to 1
if len(results) > 1:
rem_id = set()
for cur_id, x in results.items():
try:
if not find_download(str(x['download_info']['downloader']).lower(), x['download_info']['id']):
rem_id.add(cur_id)
except Exception:
continue
for key in rem_id:
    results.pop(key)
return results

core/auto_process/music.py (new file, 273 lines)

@@ -0,0 +1,273 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import json
import os
import time
import requests
import core
from core import logger
from core.auto_process.common import command_complete, ProcessResult
from core.scene_exceptions import process_all_exceptions
from core.utils import convert_to_ascii, list_media_files, remote_dir, remove_dir, server_responding
requests.packages.urllib3.disable_warnings()
def process(section, dir_name, input_name=None, status=0, client_agent='manual', input_category=None):
status = int(status)
cfg = dict(core.CFG[section][input_category])
host = cfg['host']
port = cfg['port']
apikey = cfg['apikey']
wait_for = int(cfg['wait_for'])
ssl = int(cfg.get('ssl', 0))
delete_failed = int(cfg['delete_failed'])
web_root = cfg.get('web_root', '')
remote_path = int(cfg.get('remote_path', 0))
protocol = 'https://' if ssl else 'http://'
if status > 0 and core.NOEXTRACTFAILED:
extract = 0
else:
extract = int(cfg.get('extract', 0))
if section == 'Lidarr':
url = '{0}{1}:{2}{3}/api/v1'.format(protocol, host, port, web_root)
else:
url = '{0}{1}:{2}{3}/api'.format(protocol, host, port, web_root)
if not server_responding(url):
logger.error('Server did not respond. Exiting', section)
return ProcessResult(
message='{0}: Failed to post-process - {0} did not respond.'.format(section),
status_code=1,
)
if not os.path.isdir(dir_name) and os.path.isfile(dir_name): # If the input directory is a file, assume single file download and split dir/name.
dir_name = os.path.split(os.path.normpath(dir_name))[0]
specific_path = os.path.join(dir_name, str(input_name))
clean_name = os.path.splitext(specific_path)
if clean_name[1] == '.nzb':
specific_path = clean_name[0]
if os.path.isdir(specific_path):
dir_name = specific_path
process_all_exceptions(input_name, dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
if not list_media_files(dir_name, media=False, audio=True, meta=False, archives=False) and list_media_files(dir_name, media=False, audio=False, meta=False, archives=True) and extract:
logger.debug('Checking for archives to extract in directory: {0}'.format(dir_name))
core.extract_files(dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
# if listMediaFiles(dir_name, media=False, audio=True, meta=False, archives=False) and status:
# logger.info('Status shown as failed from Downloader, but valid video files found. Setting as successful.', section)
# status = 0
if status == 0 and section == 'HeadPhones':
params = {
'apikey': apikey,
'cmd': 'forceProcess',
'dir': remote_dir(dir_name) if remote_path else dir_name,
}
res = force_process(params, url, apikey, input_name, dir_name, section, wait_for)
if res.status_code in [0, 1]:
return res
params = {
'apikey': apikey,
'cmd': 'forceProcess',
'dir': os.path.split(remote_dir(dir_name))[0] if remote_path else os.path.split(dir_name)[0],
}
res = force_process(params, url, apikey, input_name, dir_name, section, wait_for)
if res.status_code in [0, 1]:
return res
# The status hasn't changed. uTorrent can resume seeding now.
logger.warning('The music album does not appear to have changed status after {0} minutes. Please check your Logs'.format(wait_for), section)
return ProcessResult(
message='{0}: Failed to post-process - No change in wanted status'.format(section),
status_code=1,
)
elif status == 0 and section == 'Lidarr':
url = '{0}{1}:{2}{3}/api/v1/command'.format(protocol, host, port, web_root)
headers = {'X-Api-Key': apikey}
if remote_path:
logger.debug('remote_path: {0}'.format(remote_dir(dir_name)), section)
data = {'name': 'Rename', 'path': remote_dir(dir_name)}
else:
logger.debug('path: {0}'.format(dir_name), section)
data = {'name': 'Rename', 'path': dir_name}
data = json.dumps(data)
try:
logger.debug('Opening URL: {0} with data: {1}'.format(url, data), section)
r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
)
try:
res = r.json()
scan_id = int(res['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
except Exception as e:
logger.warning('No scan id was returned due to: {0}'.format(e), section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to start scan'.format(section),
status_code=1,
)
n = 0
params = {}
url = '{0}/{1}'.format(url, scan_id)
while n < 6: # set up wait_for minutes to see if command completes..
time.sleep(10 * wait_for)
command_status = command_complete(url, params, headers, section)
if command_status in ['completed', 'failed']:
break
n += 1
if command_status:
logger.debug('The Scan command return status: {0}'.format(command_status), section)
if not os.path.exists(dir_name):
logger.debug('The directory {0} has been removed. Renaming was successful.'.format(dir_name), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif command_status in ['completed']:
logger.debug('The Scan command has completed successfully. Renaming was successful.', section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif command_status in ['failed']:
logger.debug('The Scan command has failed. Renaming was not successful.', section)
# return ProcessResult(
# message='{0}: Failed to post-process {1}'.format(section, input_name),
# status_code=1,
# )
else:
logger.debug('The Scan command did not return status completed. Passing back to {0} to attempt complete download handling.'.format(section), section)
return ProcessResult(
message='{0}: Passing back to {0} to attempt Complete Download Handling'.format(section),
status_code=status,
)
else:
if section == 'Lidarr':
logger.postprocess('FAILED: The download failed. Sending failed download to {0} for CDH processing'.format(section), section)
return ProcessResult(
message='{0}: Download Failed. Sending back to {0}'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
)
else:
logger.warning('FAILED DOWNLOAD DETECTED', section)
if delete_failed and os.path.isdir(dir_name) and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
remove_dir(dir_name)
return ProcessResult(
message='{0}: Failed to post-process. {0} does not support failed downloads'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
)
def get_status(url, apikey, dir_name):
logger.debug('Attempting to get current status for release:{0}'.format(os.path.basename(dir_name)))
params = {
'apikey': apikey,
'cmd': 'getHistory',
}
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params))
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 120))
except requests.RequestException:
logger.error('Unable to open URL')
return None
try:
result = r.json()
except ValueError:
# ValueError catches simplejson's JSONDecodeError and json's ValueError
return None
for album in result:
if os.path.basename(dir_name) == album['FolderName']:
return album['Status'].lower()
def force_process(params, url, apikey, input_name, dir_name, section, wait_for):
release_status = get_status(url, apikey, dir_name)
if not release_status:
logger.error('Could not find a status for {0}. Is it in the wanted list?'.format(input_name), section)
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params), section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 300))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url), section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
)
logger.debug('Result: {0}'.format(r.text), section)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
status_code=1,
)
elif r.text == 'OK':
logger.postprocess('SUCCESS: Post-Processing started for {0} in folder {1} ...'.format(input_name, dir_name), section)
else:
logger.error('FAILED: Post-Processing has NOT started for {0} in folder {1}. exiting!'.format(input_name, dir_name), section)
return ProcessResult(
message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(section),
status_code=1,
)
# we will now wait for this album to be processed before returning to TorrentToMedia and unpausing.
timeout = time.time() + 60 * wait_for
while time.time() < timeout:
current_status = get_status(url, apikey, dir_name)
if current_status is not None and current_status != release_status:  # Something has changed. HeadPhones must have processed this release.
logger.postprocess('SUCCESS: This release is now marked as status [{0}]'.format(current_status), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
if not os.path.isdir(dir_name):
logger.postprocess('SUCCESS: The input directory {0} has been removed. Processing must have finished.'.format(dir_name), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
time.sleep(10 * wait_for)
# The status hasn't changed.
return ProcessResult(
message='no change',
status_code=2,
)

core/auto_process/tv.py (new file, 486 lines)

@@ -0,0 +1,486 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import copy
import errno
import json
import os
import time
import requests
from oauthlib.oauth2 import LegacyApplicationClient
from requests_oauthlib import OAuth2Session
import core
from core import logger, transcoder
from core.auto_process.common import (
ProcessResult,
command_complete,
completed_download_handling,
)
from core.auto_process.managers.sickbeard import InitSickBeard
from core.plugins.downloaders.nzb.utils import report_nzb
from core.plugins.subtitles import import_subs, rename_subs
from core.scene_exceptions import process_all_exceptions
from core.utils import (
convert_to_ascii,
flatten,
list_media_files,
remote_dir,
remove_dir,
server_responding,
)
requests.packages.urllib3.disable_warnings()
def process(section, dir_name, input_name=None, failed=False, client_agent='manual', download_id=None, input_category=None, failure_link=None):
cfg = dict(core.CFG[section][input_category])
host = cfg['host']
port = cfg['port']
ssl = int(cfg.get('ssl', 0))
web_root = cfg.get('web_root', '')
protocol = 'https://' if ssl else 'http://'
username = cfg.get('username', '')
password = cfg.get('password', '')
apikey = cfg.get('apikey', '')
api_version = int(cfg.get('api_version', 2))
sso_username = cfg.get('sso_username', '')
sso_password = cfg.get('sso_password', '')
# Refactor into an OO structure.
# For now, keep both the OO and the serialized code, until everything has been migrated.
init_sickbeard = InitSickBeard(cfg, section, input_category)
if server_responding('{0}{1}:{2}{3}'.format(protocol, host, port, web_root)):
# auto-detect correct fork
# During the refactor we also return fork, fork_params; these are also stored on the object.
# Should be changed after refactor.
fork, fork_params = init_sickbeard.auto_fork()
elif not username and not apikey and not sso_username:
logger.info('No SickBeard / SiCKRAGE username or Sonarr apikey entered. Performing transcoder functions only')
fork, fork_params = 'None', {}
else:
logger.error('Server did not respond. Exiting', section)
return ProcessResult(
status_code=1,
message='{0}: Failed to post-process - {0} did not respond.'.format(section),
)
delete_failed = int(cfg.get('delete_failed', 0))
nzb_extraction_by = cfg.get('nzbExtractionBy', 'Downloader')
process_method = cfg.get('process_method')
if client_agent == core.TORRENT_CLIENT_AGENT and core.USE_LINK == 'move-sym':
process_method = 'symlink'
remote_path = int(cfg.get('remote_path', 0))
wait_for = int(cfg.get('wait_for', 2))
force = int(cfg.get('force', 0))
delete_on = int(cfg.get('delete_on', 0))
ignore_subs = int(cfg.get('ignore_subs', 0))
status = int(failed)
if status > 0 and core.NOEXTRACTFAILED:
extract = 0
else:
extract = int(cfg.get('extract', 0))
# get importmode, default to 'Move' for consistency with legacy
import_mode = cfg.get('importMode', 'Move')
if not os.path.isdir(dir_name) and os.path.isfile(dir_name): # If the input directory is a file, assume single file download and split dir/name.
dir_name = os.path.split(os.path.normpath(dir_name))[0]
specific_path = os.path.join(dir_name, str(input_name))
clean_name = os.path.splitext(specific_path)
if clean_name[1] == '.nzb':
specific_path = clean_name[0]
if os.path.isdir(specific_path):
dir_name = specific_path
# Attempt to create the directory if it doesn't exist and ignore any
# error stating that it already exists. This fixes a bug where SickRage
# won't process the directory because it doesn't exist.
if dir_name:
try:
os.makedirs(dir_name) # Attempt to create the directory
except OSError as e:
# Re-raise the error if it wasn't about the directory not existing
if e.errno != errno.EEXIST:
raise
if 'process_method' not in fork_params or (client_agent in ['nzbget', 'sabnzbd'] and nzb_extraction_by != 'Destination'):
if input_name:
process_all_exceptions(input_name, dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
# Now check if tv files exist in destination.
if not list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
if list_media_files(dir_name, media=False, audio=False, meta=False, archives=True) and extract:
logger.debug('Checking for archives to extract in directory: {0}'.format(dir_name))
core.extract_files(dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
if list_media_files(dir_name, media=True, audio=False, meta=False, archives=False): # Check that a video exists. if not, assume failed.
flatten(dir_name)
# Check video files for corruption
good_files = 0
valid_files = 0
num_files = 0
for video in list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
num_files += 1
if transcoder.is_video_good(video, status):
good_files += 1
if not core.REQUIRE_LAN or transcoder.is_video_good(video, status, require_lan=core.REQUIRE_LAN):
valid_files += 1
import_subs(video)
rename_subs(dir_name)
if num_files > 0:
if valid_files == num_files and status != 0:
logger.info('Found Valid Videos. Setting status Success')
status = 0
failed = 0
if valid_files < num_files and status == 0:
logger.info('Found corrupt videos. Setting status Failed')
status = 1
failed = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if good_files == num_files:
logger.debug('Video marked as failed due to missing required language: {0}'.format(core.REQUIRE_LAN), section)
else:
logger.debug('Video marked as failed due to missing playable audio or video', section)
if good_files < num_files and failure_link: # only report corrupt files
failure_link += '&corrupt=true'
elif client_agent == 'manual':
logger.warning('No media files found in directory {0} to manually process.'.format(dir_name), section)
return ProcessResult(
message='',
status_code=0, # Success (as far as this script is concerned)
)
elif nzb_extraction_by == 'Destination':
logger.info('Check for media files ignored because nzbExtractionBy is set to Destination.')
if int(failed) == 0:
logger.info('Setting Status Success.')
status = 0
failed = 0
else:
logger.info('Downloader reported an error during download or verification. Processing this as a failed download.')
status = 1
failed = 1
else:
logger.warning('No media files found in directory {0}. Processing this as a failed download'.format(dir_name), section)
status = 1
failed = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if status == 0 and core.TRANSCODE == 1: # only transcode successful downloads
result, new_dir_name = transcoder.transcode_directory(dir_name)
if result == 0:
logger.debug('SUCCESS: Transcoding succeeded for files in {0}'.format(dir_name), section)
dir_name = new_dir_name
chmod_directory = int(str(cfg.get('chmodDirectory', '0')), 8)
logger.debug('Config setting \'chmodDirectory\' currently set to {0}'.format(oct(chmod_directory)), section)
if chmod_directory:
logger.info('Attempting to set the octal permission of \'{0}\' on directory \'{1}\''.format(oct(chmod_directory), dir_name), section)
core.rchmod(dir_name, chmod_directory)
else:
logger.error('FAILED: Transcoding failed for files in {0}'.format(dir_name), section)
return ProcessResult(
message='{0}: Failed to post-process - Transcoding failed'.format(section),
status_code=1,
)
# Part of the refactor
if init_sickbeard.fork_obj:
init_sickbeard.fork_obj.initialize(dir_name, input_name, failed, client_agent='manual')
# configure SB params to pass
# We don't want to remove params, for the Forks that have been refactored.
# As we don't want to duplicate this part of the code.
if not init_sickbeard.fork_obj:
fork_params['quiet'] = 1
fork_params['proc_type'] = 'manual'
if input_name is not None:
fork_params['nzbName'] = input_name
for param in copy.copy(fork_params):
if param == 'failed':
if failed > 1:
failed = 1
fork_params[param] = failed
if 'proc_type' in fork_params:
del fork_params['proc_type']
if 'type' in fork_params:
del fork_params['type']
if param == 'return_data':
fork_params[param] = 0
if 'quiet' in fork_params:
del fork_params['quiet']
if param == 'type':
if 'type' in fork_params: # only set if we haven't already deleted for 'failed' above.
fork_params[param] = 'manual'
if 'proc_type' in fork_params:
del fork_params['proc_type']
if param in ['dir_name', 'dir', 'proc_dir', 'process_directory', 'path']:
fork_params[param] = dir_name
if remote_path:
fork_params[param] = remote_dir(dir_name)
if param == 'process_method':
if process_method:
fork_params[param] = process_method
else:
del fork_params[param]
if param in ['force', 'force_replace']:
if force:
fork_params[param] = force
else:
del fork_params[param]
if param in ['delete_on', 'delete']:
if delete_on:
fork_params[param] = delete_on
else:
del fork_params[param]
if param == 'ignore_subs':
if ignore_subs:
fork_params[param] = ignore_subs
else:
del fork_params[param]
if param == 'force_next':
fork_params[param] = 1
# delete any unused params so we don't pass them to SB by mistake
for key, value in list(fork_params.items()):
    if value is None:
        fork_params.pop(key)
if status == 0:
if section == 'NzbDrone' and not apikey:
logger.info('No Sonarr apikey entered. Processing completed.')
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
logger.postprocess('SUCCESS: The download succeeded, sending a post-process request', section)
else:
core.FAILED = True
if failure_link:
report_nzb(failure_link, client_agent)
if 'failed' in fork_params:
logger.postprocess('FAILED: The download failed. Sending \'failed\' process request to {0} branch'.format(fork), section)
elif section == 'NzbDrone':
logger.postprocess('FAILED: The download failed. Sending failed download to {0} for CDH processing'.format(fork), section)
return ProcessResult(
message='{0}: Download Failed. Sending back to {0}'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
)
else:
logger.postprocess('FAILED: The download failed. {0} branch does not handle failed downloads. Nothing to process'.format(fork), section)
if delete_failed and os.path.isdir(dir_name) and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
remove_dir(dir_name)
return ProcessResult(
message='{0}: Failed to post-process. {0} does not support failed downloads'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
)
url = None
if section == 'SickBeard':
if apikey:
url = '{0}{1}:{2}{3}/api/{4}/'.format(protocol, host, port, web_root, apikey)
if 'cmd' not in fork_params:
if 'SickGear' in fork:
fork_params['cmd'] = 'sg.postprocess'
else:
fork_params['cmd'] = 'postprocess'
elif fork == 'Stheno':
url = '{0}{1}:{2}{3}/home/postprocess/process_episode'.format(protocol, host, port, web_root)
else:
url = '{0}{1}:{2}{3}/home/postprocess/processEpisode'.format(protocol, host, port, web_root)
elif section == 'SiCKRAGE':
if api_version >= 2:
url = '{0}{1}:{2}{3}/api/v{4}/postprocess'.format(protocol, host, port, web_root, api_version)
else:
url = '{0}{1}:{2}{3}/api/v{4}/{5}/'.format(protocol, host, port, web_root, api_version, apikey)
elif section == 'NzbDrone':
url = '{0}{1}:{2}{3}/api/v3/command'.format(protocol, host, port, web_root)
url2 = '{0}{1}:{2}{3}/api/v3/config/downloadClient'.format(protocol, host, port, web_root)
headers = {'X-Api-Key': apikey, 'Content-Type': 'application/json'}
# params = {'sortKey': 'series.title', 'page': 1, 'pageSize': 1, 'sortDir': 'asc'}
if remote_path:
logger.debug('remote_path: {0}'.format(remote_dir(dir_name)), section)
data = {'name': 'DownloadedEpisodesScan', 'path': remote_dir(dir_name), 'downloadClientId': download_id, 'importMode': import_mode}
else:
logger.debug('path: {0}'.format(dir_name), section)
data = {'name': 'DownloadedEpisodesScan', 'path': dir_name, 'downloadClientId': download_id, 'importMode': import_mode}
if not download_id:
data.pop('downloadClientId')
data = json.dumps(data)
try:
if section == 'SickBeard':
if init_sickbeard.fork_obj:
return init_sickbeard.fork_obj.api_call()
else:
s = requests.Session()
logger.debug('Opening URL: {0} with params: {1}'.format(url, fork_params), section)
if not apikey and username and password:
login = '{0}{1}:{2}{3}/login'.format(protocol, host, port, web_root)
login_params = {'username': username, 'password': password}
r = s.get(login, verify=False, timeout=(30, 60))
if r.status_code in [401, 403] and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
s.post(login, data=login_params, stream=True, verify=False, timeout=(30, 60))
r = s.get(url, auth=(username, password), params=fork_params, stream=True, verify=False, timeout=(30, 1800))
elif section == 'SiCKRAGE':
s = requests.Session()
if api_version >= 2 and sso_username and sso_password:
oauth = OAuth2Session(client=LegacyApplicationClient(client_id=core.SICKRAGE_OAUTH_CLIENT_ID))
oauth_token = oauth.fetch_token(client_id=core.SICKRAGE_OAUTH_CLIENT_ID,
token_url=core.SICKRAGE_OAUTH_TOKEN_URL,
username=sso_username,
password=sso_password)
s.headers.update({'Authorization': 'Bearer ' + oauth_token['access_token']})
params = {
'path': fork_params['path'],
'failed': str(bool(fork_params['failed'])).lower(),
'processMethod': 'move',
'forceReplace': str(bool(fork_params['force_replace'])).lower(),
'returnData': str(bool(fork_params['return_data'])).lower(),
'delete': str(bool(fork_params['delete'])).lower(),
'forceNext': str(bool(fork_params['force_next'])).lower(),
'nzbName': fork_params['nzbName']
}
else:
params = fork_params
r = s.get(url, params=params, stream=True, verify=False, timeout=(30, 1800))
elif section == 'NzbDrone':
logger.debug('Opening URL: {0} with data: {1}'.format(url, data), section)
r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
status_code=1,
)
success = False
queued = False
started = False
if section == 'SickBeard':
if apikey:
if r.json()['result'] == 'success':
success = True
else:
for line in r.iter_lines():
if line:
line = line.decode('utf-8')
logger.postprocess('{0}'.format(line), section)
if 'Moving file from' in line:
input_name = os.path.split(line)[1]
if 'added to the queue' in line:
queued = True
if 'Processing succeeded' in line or 'Successfully processed' in line:
success = True
if queued:
time.sleep(60)
elif section == 'SiCKRAGE':
if api_version >= 2:
success = True
else:
if r.json()['result'] == 'success':
success = True
elif section == 'NzbDrone':
try:
res = r.json()
scan_id = int(res['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
started = True
except Exception as e:
logger.warning('No scan id was returned due to: {0}'.format(e), section)
scan_id = None
started = False
if status != 0 and delete_failed and os.path.dirname(dir_name) != dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
remove_dir(dir_name)
if success:
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif section == 'NzbDrone' and started:
n = 0
params = {}
url = '{0}/{1}'.format(url, scan_id)
while n < 6:  # wait up to wait_for minutes (six polls of 10 * wait_for seconds) for the command to complete
time.sleep(10 * wait_for)
command_status = command_complete(url, params, headers, section)
if command_status and command_status in ['completed', 'failed']:
break
n += 1
if command_status:
logger.debug('The Scan command returned status: {0}'.format(command_status), section)
if not os.path.exists(dir_name):
logger.debug('The directory {0} has been removed. Renaming was successful.'.format(dir_name), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif command_status and command_status in ['completed']:
logger.debug('The Scan command has completed successfully. Renaming was successful.', section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif command_status and command_status in ['failed']:
logger.debug('The Scan command has failed. Renaming was not successful.', section)
# return ProcessResult(
# message='{0}: Failed to post-process {1}'.format(section, input_name),
# status_code=1,
# )
if completed_download_handling(url2, headers, section=section):
logger.debug('The Scan command did not return status completed, but Completed Download Handling is enabled. Passing back to {0}.'.format(section),
section)
return ProcessResult(
message='{0}: Completed Download Handling is enabled. Passing back to {0}'.format(section),
status_code=status,
)
else:
logger.warning('The Scan command did not return a valid status. Renaming was not successful.', section)
return ProcessResult(
message='{0}: Failed to post-process {1}'.format(section, input_name),
status_code=1,
)
else:
return ProcessResult(
message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(section),
status_code=1, # We did not receive Success confirmation.
)
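For readers tracing the NzbDrone branch above: the flow is a POST of a DownloadedEpisodesScan command, followed by polling the command resource until Sonarr reports a terminal status. A minimal standalone sketch of that pattern, assuming a local Sonarr v3 instance; the host, port, API key, and path below are placeholders, not values taken from this diff:

import json
import time

import requests

base = 'http://localhost:8989/api/v3'  # placeholder host/port
headers = {'X-Api-Key': 'YOUR_API_KEY', 'Content-Type': 'application/json'}

# Ask Sonarr to scan a completed download folder.
payload = json.dumps({'name': 'DownloadedEpisodesScan', 'path': '/downloads/Show.S01E01'})
command = requests.post('{0}/command'.format(base), data=payload, headers=headers, timeout=(30, 1800)).json()

# Poll the command resource, mirroring the six-iteration wait loop above.
for _ in range(6):
    time.sleep(10)
    status = requests.get('{0}/command/{1}'.format(base, command['id']), headers=headers, timeout=30).json().get('status')
    if status in ('completed', 'failed'):
        break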

core/configuration.py Normal file (+645)

@@ -0,0 +1,645 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import copy
import os
import shutil
from itertools import chain
import configobj
from six import iteritems
import core
from core import logger
class Section(configobj.Section, object):
def isenabled(self):
# Check whether this section is enabled. A leaf section returns itself when its 'enabled' flag is 1; a parent section returns a copy containing only its enabled subsections.
if not self.sections:
try:
value = list(ConfigObj.find_key(self, 'enabled'))[0]
except Exception:
value = 0
if int(value) == 1:
return self
else:
to_return = copy.deepcopy(self)
for section_name, subsections in to_return.items():
for subsection in subsections:
try:
value = list(ConfigObj.find_key(subsections, 'enabled'))[0]
except Exception:
value = 0
if int(value) != 1:
del to_return[section_name][subsection]
# cleanout empty sections and subsections
for key in [k for (k, v) in to_return.items() if not v]:
del to_return[key]
return to_return
def findsection(self, key):
to_return = copy.deepcopy(self)
for subsection in to_return:
try:
value = list(ConfigObj.find_key(to_return[subsection], key))[0]
except Exception:
value = None
if not value:
del to_return[subsection]
else:
for category in to_return[subsection]:
if category != key:
del to_return[subsection][category]
# cleanout empty sections and subsections
for key in [k for (k, v) in to_return.items() if not v]:
del to_return[key]
return to_return
def __getitem__(self, key):
if key in self.keys():
return dict.__getitem__(self, key)
to_return = copy.deepcopy(self)
for section, subsections in to_return.items():
if section in key:
continue
if isinstance(subsections, Section) and subsections.sections:
for subsection, options in subsections.items():
if subsection in key:
continue
if key in options:
return options[key]
del subsections[subsection]
else:
if section not in key:
del to_return[section]
# cleanout empty sections and subsections
for key in [k for (k, v) in to_return.items() if not v]:
del to_return[key]
return to_return
class ConfigObj(configobj.ConfigObj, Section):
def __init__(self, *args, **kw):
if len(args) == 0:
args = (core.CONFIG_FILE,)
super(configobj.ConfigObj, self).__init__(*args, **kw)
self.interpolation = False
@staticmethod
def find_key(node, kv):
if isinstance(node, list):
for i in node:
for x in ConfigObj.find_key(i, kv):
yield x
elif isinstance(node, dict):
if kv in node:
yield node[kv]
for j in node.values():
for x in ConfigObj.find_key(j, kv):
yield x
@staticmethod
def migrate():
global CFG_NEW, CFG_OLD
CFG_NEW = None
CFG_OLD = None
try:
# check for autoProcessMedia.cfg and create if it does not exist
if not os.path.isfile(core.CONFIG_FILE):
shutil.copyfile(core.CONFIG_SPEC_FILE, core.CONFIG_FILE)
CFG_OLD = config(core.CONFIG_FILE)
except Exception as error:
logger.error('Error {msg} when copying to .cfg'.format(msg=error))
try:
# check for autoProcessMedia.cfg.spec and create if it does not exist
if not os.path.isfile(core.CONFIG_SPEC_FILE):
shutil.copyfile(core.CONFIG_FILE, core.CONFIG_SPEC_FILE)
CFG_NEW = config(core.CONFIG_SPEC_FILE)
except Exception as error:
logger.error('Error {msg} when copying to .spec'.format(msg=error))
# check for autoProcessMedia.cfg and autoProcessMedia.cfg.spec and if they don't exist return and fail
if CFG_NEW is None or CFG_OLD is None:
return False
subsections = {}
# gather all new-style and old-style sub-sections
for newsection in CFG_NEW:
if CFG_NEW[newsection].sections:
subsections.update({newsection: CFG_NEW[newsection].sections})
for section in CFG_OLD:
if CFG_OLD[section].sections:
subsections.update({section: CFG_OLD[section].sections})
for option, value in CFG_OLD[section].items():
if option in ['category',
'cpsCategory',
'sbCategory',
'srCategory',
'hpCategory',
'mlCategory',
'gzCategory',
'raCategory',
'ndCategory',
'W3Category']:
if not isinstance(value, list):
value = [value]
# add subsection
subsections.update({section: value})
CFG_OLD[section].pop(option)
continue
def cleanup_values(values, section):
for option, value in iteritems(values):
if section in ['CouchPotato']:
if option == 'outputDirectory':  # compare to the string, not a one-element list
CFG_NEW['Torrent'][option] = os.path.split(os.path.normpath(value))[0]
values.pop(option)
if section in ['CouchPotato', 'HeadPhones', 'Gamez', 'Mylar']:
if option in ['username', 'password']:
values.pop(option)
if section in ['Mylar']:
if option == 'wait_for': # remove old format
values.pop(option)
if section in ['SickBeard', 'NzbDrone']:
if option == 'failed_fork': # change this old format
values['failed'] = 'auto'
values.pop(option)
if option == 'outputDirectory': # move this to new location format
CFG_NEW['Torrent'][option] = os.path.split(os.path.normpath(value))[0]
values.pop(option)
if section in ['Torrent']:
if option in ['compressedExtensions', 'mediaExtensions', 'metaExtensions', 'minSampleSize']:
CFG_NEW['Extensions'][option] = value
values.pop(option)
if option == 'useLink': # Sym links supported now as well.
if value in ['1', 1]:
value = 'hard'
elif value in ['0', 0]:
value = 'no'
values[option] = value
if option == 'forceClean':
CFG_NEW['General']['force_clean'] = value
values.pop(option)
if option == 'qBittorrenHost': # We had a typo that is now fixed.
CFG_NEW['Torrent']['qBittorrentHost'] = value
values.pop(option)
if section in ['Transcoder']:
if option in ['niceness']:
CFG_NEW['Posix'][option] = value
values.pop(option)
if option == 'remote_path':
if value and value not in ['0', '1', 0, 1]:
value = 1
elif not value:
value = 0
values[option] = value
# remove any options that we no longer need so they don't migrate into our new config
if not list(ConfigObj.find_key(CFG_NEW, option)):
try:
values.pop(option)
except Exception:
pass
return values
def process_section(section, subsections=None):
if subsections:
for subsection in subsections:
if subsection in CFG_OLD.sections:
values = cleanup_values(CFG_OLD[subsection], section)
if subsection not in CFG_NEW[section].sections:
CFG_NEW[section][subsection] = {}
for option, value in values.items():
CFG_NEW[section][subsection][option] = value
elif subsection in CFG_OLD[section].sections:
values = cleanup_values(CFG_OLD[section][subsection], section)
if subsection not in CFG_NEW[section].sections:
CFG_NEW[section][subsection] = {}
for option, value in values.items():
CFG_NEW[section][subsection][option] = value
else:
values = cleanup_values(CFG_OLD[section], section)
if section not in CFG_NEW.sections:
CFG_NEW[section] = {}
for option, value in values.items():
CFG_NEW[section][option] = value
# convert old-style categories to new-style sub-sections
for section in CFG_OLD.keys():
subsection = None
if section in list(chain.from_iterable(subsections.values())):
subsection = section
section = ''.join([k for k, v in iteritems(subsections) if subsection in v])
process_section(section, subsection)
elif section in subsections.keys():
subsection = subsections[section]
process_section(section, subsection)
elif section in CFG_OLD.keys():
process_section(section, subsection)
# migrate SiCKRAGE settings from SickBeard section to new dedicated SiCKRAGE section
if CFG_OLD['SickBeard']['tv']['enabled'] and CFG_OLD['SickBeard']['tv']['fork'] == 'sickrage-api':
for option, value in iteritems(CFG_OLD['SickBeard']['tv']):
if option in CFG_NEW['SiCKRAGE']['tv']:
CFG_NEW['SiCKRAGE']['tv'][option] = value
# set API version to 1 if API key detected and no SSO username is set
if CFG_NEW['SiCKRAGE']['tv']['apikey'] and not CFG_NEW['SiCKRAGE']['tv']['sso_username']:
CFG_NEW['SiCKRAGE']['tv']['api_version'] = 1
# disable SickBeard section
CFG_NEW['SickBeard']['tv']['enabled'] = 0
CFG_NEW['SickBeard']['tv']['fork'] = 'auto'
# create a backup of our old config
CFG_OLD.filename = '{config}.old'.format(config=core.CONFIG_FILE)
CFG_OLD.write()
# write our new config to autoProcessMedia.cfg
CFG_NEW.filename = core.CONFIG_FILE
CFG_NEW.write()
return True
@staticmethod
def addnzbget():
# load configs into memory
cfg_new = config()
try:
if 'NZBPO_NDCATEGORY' in os.environ and 'NZBPO_SBCATEGORY' in os.environ:
if os.environ['NZBPO_NDCATEGORY'] == os.environ['NZBPO_SBCATEGORY']:
logger.warning('{x} category is set for SickBeard and Sonarr. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_NDCATEGORY']))
if 'NZBPO_RACATEGORY' in os.environ and 'NZBPO_CPSCATEGORY' in os.environ:
if os.environ['NZBPO_RACATEGORY'] == os.environ['NZBPO_CPSCATEGORY']:
logger.warning('{x} category is set for CouchPotato and Radarr. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_RACATEGORY']))
if 'NZBPO_RACATEGORY' in os.environ and 'NZBPO_W3CATEGORY' in os.environ:
if os.environ['NZBPO_RACATEGORY'] == os.environ['NZBPO_W3CATEGORY']:
logger.warning('{x} category is set for Watcher3 and Radarr. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_RACATEGORY']))
if 'NZBPO_W3CATEGORY' in os.environ and 'NZBPO_CPSCATEGORY' in os.environ:
if os.environ['NZBPO_W3CATEGORY'] == os.environ['NZBPO_CPSCATEGORY']:
logger.warning('{x} category is set for CouchPotato and Watcher3. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_W3CATEGORY']))
if 'NZBPO_LICATEGORY' in os.environ and 'NZBPO_HPCATEGORY' in os.environ:
if os.environ['NZBPO_LICATEGORY'] == os.environ['NZBPO_HPCATEGORY']:
logger.warning('{x} category is set for HeadPhones and Lidarr. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_LICATEGORY']))
section = 'Nzb'
key = 'NZBOP_DESTDIR'
if key in os.environ:
option = 'default_downloadDirectory'
value = os.environ[key]
cfg_new[section][option] = value
section = 'General'
env_keys = ['AUTO_UPDATE', 'CHECK_MEDIA', 'REQUIRE_LAN', 'SAFE_MODE', 'NO_EXTRACT_FAILED']
cfg_keys = ['auto_update', 'check_media', 'require_lan', 'safe_mode', 'no_extract_failed']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
cfg_new[section][option] = value
section = 'Network'
env_keys = ['MOUNTPOINTS']
cfg_keys = ['mount_points']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
cfg_new[section][option] = value
section = 'CouchPotato'
env_cat_key = 'NZBPO_CPSCATEGORY'
env_keys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'METHOD', 'DELETE_FAILED', 'REMOTE_PATH',
'WAIT_FOR', 'WATCH_DIR', 'OMDBAPIKEY']
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'method', 'delete_failed', 'remote_path',
'wait_for', 'watch_dir', 'omdbapikey']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_CPS{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['Radarr'].sections:
cfg_new['Radarr'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['Watcher3'].sections:
cfg_new['Watcher3'][env_cat_key]['enabled'] = 0
section = 'Watcher3'
env_cat_key = 'NZBPO_W3CATEGORY'
env_keys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'METHOD', 'DELETE_FAILED', 'REMOTE_PATH',
'WAIT_FOR', 'WATCH_DIR', 'OMDBAPIKEY']
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'method', 'delete_failed', 'remote_path',
'wait_for', 'watch_dir', 'omdbapikey']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_W3{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['Radarr'].sections:
cfg_new['Radarr'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['CouchPotato'].sections:
cfg_new['CouchPotato'][env_cat_key]['enabled'] = 0
section = 'SickBeard'
env_cat_key = 'NZBPO_SBCATEGORY'
env_keys = ['ENABLED', 'HOST', 'PORT', 'APIKEY', 'USERNAME', 'PASSWORD', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED', 'TORRENT_NOLINK',
'NZBEXTRACTIONBY', 'REMOTE_PATH', 'PROCESS_METHOD']
cfg_keys = ['enabled', 'host', 'port', 'apikey', 'username', 'password', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed', 'Torrent_NoLink',
'nzbExtractionBy', 'remote_path', 'process_method']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_SB{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['SiCKRAGE'].sections:
cfg_new['SiCKRAGE'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['NzbDrone'].sections:
cfg_new['NzbDrone'][env_cat_key]['enabled'] = 0
section = 'SiCKRAGE'
env_cat_key = 'NZBPO_SRCATEGORY'
env_keys = ['ENABLED', 'HOST', 'PORT', 'APIKEY', 'API_VERSION', 'SSO_USERNAME', 'SSO_PASSWORD', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK',
'DELETE_FAILED', 'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'REMOTE_PATH', 'PROCESS_METHOD']
cfg_keys = ['enabled', 'host', 'port', 'apikey', 'api_version', 'sso_username', 'sso_password', 'ssl', 'web_root', 'watch_dir', 'fork',
'delete_failed', 'Torrent_NoLink', 'nzbExtractionBy', 'remote_path', 'process_method']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_SR{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['SickBeard'].sections:
cfg_new['SickBeard'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['NzbDrone'].sections:
cfg_new['NzbDrone'][env_cat_key]['enabled'] = 0
section = 'HeadPhones'
env_cat_key = 'NZBPO_HPCATEGORY'
env_keys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'WAIT_FOR', 'WATCH_DIR', 'REMOTE_PATH', 'DELETE_FAILED']
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'wait_for', 'watch_dir', 'remote_path', 'delete_failed']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_HP{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['Lidarr'].sections:
cfg_new['Lidarr'][env_cat_key]['enabled'] = 0
section = 'Mylar'
env_cat_key = 'NZBPO_MYCATEGORY'
env_keys = ['ENABLED', 'HOST', 'PORT', 'USERNAME', 'PASSWORD', 'APIKEY', 'SSL', 'WEB_ROOT', 'WATCH_DIR',
'REMOTE_PATH']
cfg_keys = ['enabled', 'host', 'port', 'username', 'password', 'apikey', 'ssl', 'web_root', 'watch_dir',
'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_MY{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
section = 'Gamez'
env_cat_key = 'NZBPO_GZCATEGORY'
env_keys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'LIBRARY', 'REMOTE_PATH']
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'watch_dir', 'library', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_GZ{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
section = 'LazyLibrarian'
env_cat_key = 'NZBPO_LLCATEGORY'
env_keys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'REMOTE_PATH']
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'watch_dir', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_LL{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
section = 'NzbDrone'
env_cat_key = 'NZBPO_NDCATEGORY'
env_keys = ['ENABLED', 'HOST', 'APIKEY', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED',
'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'WAIT_FOR', 'REMOTE_PATH', 'IMPORTMODE']
# new cfgKey added for importMode
cfg_keys = ['enabled', 'host', 'apikey', 'port', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed',
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'remote_path', 'importMode']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_ND{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['SickBeard'].sections:
cfg_new['SickBeard'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['SiCKRAGE'].sections:
cfg_new['SiCKRAGE'][env_cat_key]['enabled'] = 0
section = 'Radarr'
env_cat_key = 'NZBPO_RACATEGORY'
env_keys = ['ENABLED', 'HOST', 'APIKEY', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED',
'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'WAIT_FOR', 'REMOTE_PATH', 'OMDBAPIKEY', 'IMPORTMODE']
# new cfgKey added for importMode
cfg_keys = ['enabled', 'host', 'apikey', 'port', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed',
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'remote_path', 'omdbapikey', 'importMode']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_RA{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['CouchPotato'].sections:
cfg_new['CouchPotato'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['Watcher3'].sections:
cfg_new['Watcher3'][env_cat_key]['enabled'] = 0
section = 'Lidarr'
env_cat_key = 'NZBPO_LICATEGORY'
env_keys = ['ENABLED', 'HOST', 'APIKEY', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED',
'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'WAIT_FOR', 'REMOTE_PATH']
cfg_keys = ['enabled', 'host', 'apikey', 'port', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed',
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_LI{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['HeadPhones'].sections:
cfg_new['HeadPhones'][env_cat_key]['enabled'] = 0
section = 'Extensions'
env_keys = ['COMPRESSEDEXTENSIONS', 'MEDIAEXTENSIONS', 'METAEXTENSIONS']
cfg_keys = ['compressedExtensions', 'mediaExtensions', 'metaExtensions']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
cfg_new[section][option] = value
section = 'Posix'
env_keys = ['NICENESS', 'IONICE_CLASS', 'IONICE_CLASSDATA']
cfg_keys = ['niceness', 'ionice_class', 'ionice_classdata']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
cfg_new[section][option] = value
section = 'Transcoder'
env_keys = ['TRANSCODE', 'DUPLICATE', 'IGNOREEXTENSIONS', 'OUTPUTFASTSTART', 'OUTPUTVIDEOPATH',
'PROCESSOUTPUT', 'AUDIOLANGUAGE', 'ALLAUDIOLANGUAGES', 'SUBLANGUAGES',
'ALLSUBLANGUAGES', 'EMBEDSUBS', 'BURNINSUBTITLE', 'EXTRACTSUBS', 'EXTERNALSUBDIR',
'OUTPUTDEFAULT', 'OUTPUTVIDEOEXTENSION', 'OUTPUTVIDEOCODEC', 'VIDEOCODECALLOW',
'OUTPUTVIDEOPRESET', 'OUTPUTVIDEOFRAMERATE', 'OUTPUTVIDEOBITRATE', 'OUTPUTAUDIOCODEC',
'AUDIOCODECALLOW', 'OUTPUTAUDIOBITRATE', 'OUTPUTQUALITYPERCENT', 'GETSUBS',
'OUTPUTAUDIOTRACK2CODEC', 'AUDIOCODEC2ALLOW', 'OUTPUTAUDIOTRACK2BITRATE',
'OUTPUTAUDIOOTHERCODEC', 'AUDIOOTHERCODECALLOW', 'OUTPUTAUDIOOTHERBITRATE',
'OUTPUTSUBTITLECODEC', 'OUTPUTAUDIOCHANNELS', 'OUTPUTAUDIOTRACK2CHANNELS',
'OUTPUTAUDIOOTHERCHANNELS', 'OUTPUTVIDEORESOLUTION']
cfg_keys = ['transcode', 'duplicate', 'ignoreExtensions', 'outputFastStart', 'outputVideoPath',
'processOutput', 'audioLanguage', 'allAudioLanguages', 'subLanguages',
'allSubLanguages', 'embedSubs', 'burnInSubtitle', 'extractSubs', 'externalSubDir',
'outputDefault', 'outputVideoExtension', 'outputVideoCodec', 'VideoCodecAllow',
'outputVideoPreset', 'outputVideoFramerate', 'outputVideoBitrate', 'outputAudioCodec',
'AudioCodecAllow', 'outputAudioBitrate', 'outputQualityPercent', 'getSubs',
'outputAudioTrack2Codec', 'AudioCodec2Allow', 'outputAudioTrack2Bitrate',
'outputAudioOtherCodec', 'AudioOtherCodecAllow', 'outputAudioOtherBitrate',
'outputSubtitleCodec', 'outputAudioChannels', 'outputAudioTrack2Channels',
'outputAudioOtherChannels', 'outputVideoResolution']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
cfg_new[section][option] = value
section = 'WakeOnLan'
env_keys = ['WAKE', 'HOST', 'PORT', 'MAC']
cfg_keys = ['wake', 'host', 'port', 'mac']
for index in range(len(env_keys)):
key = 'NZBPO_WOL{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
cfg_new[section][option] = value
section = 'UserScript'
env_cat_key = 'NZBPO_USCATEGORY'
env_keys = ['USER_SCRIPT_MEDIAEXTENSIONS', 'USER_SCRIPT_PATH', 'USER_SCRIPT_PARAM', 'USER_SCRIPT_RUNONCE',
'USER_SCRIPT_SUCCESSCODES', 'USER_SCRIPT_CLEAN', 'USDELAY', 'USREMOTE_PATH']
cfg_keys = ['user_script_mediaExtensions', 'user_script_path', 'user_script_param', 'user_script_runOnce',
'user_script_successCodes', 'user_script_clean', 'delay', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
except Exception as error:
logger.debug('Error {msg} when applying NZBGet config'.format(msg=error))
try:
# write our new config to autoProcessMedia.cfg
cfg_new.filename = core.CONFIG_FILE
cfg_new.write()
except Exception as error:
logger.debug('Error {msg} when writing changes to .cfg'.format(msg=error))
return cfg_new
configobj.Section = Section
configobj.ConfigObj = ConfigObj
config = ConfigObj
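The recursive find_key generator above is what isenabled() and findsection() rely on to locate a flag at any nesting depth. A self-contained copy with made-up sample data, to show what it yields:

def find_key(node, kv):
    """Standalone copy of ConfigObj.find_key, for illustration only."""
    if isinstance(node, list):
        for item in node:
            for found in find_key(item, kv):
                yield found
    elif isinstance(node, dict):
        if kv in node:
            yield node[kv]
        for value in node.values():
            for found in find_key(value, kv):
                yield found

sample = {'SickBeard': {'tv': {'enabled': 1}}, 'Radarr': {'movie': {'enabled': 0}}}
print(list(find_key(sample, 'enabled')))  # -> [1, 0]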

core/databases.py Normal file (+72)

@@ -0,0 +1,72 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from core import logger, main_db
from core.utils import backup_versioned_file
MIN_DB_VERSION = 1 # oldest db version we support migrating from
MAX_DB_VERSION = 2
def backup_database(version):
logger.info('Backing up database before upgrade')
if not backup_versioned_file(main_db.db_filename(), version):
logger.log_error_and_exit('Database backup failed, abort upgrading database')
else:
logger.info('Proceeding with upgrade')
# ======================
# = Main DB Migrations =
# ======================
# Add new migrations at the bottom of the list; subclass the previous migration.
class InitialSchema(main_db.SchemaUpgrade):
def test(self):
no_update = False
if self.has_table('db_version'):
cur_db_version = self.check_db_version()
no_update = not cur_db_version < MAX_DB_VERSION
return no_update
def execute(self):
if not self.has_table('downloads') and not self.has_table('db_version'):
queries = [
'CREATE TABLE db_version (db_version INTEGER);',
'CREATE TABLE downloads (input_directory TEXT, input_name TEXT, input_hash TEXT, input_id TEXT, client_agent TEXT, status INTEGER, last_update NUMERIC, CONSTRAINT pk_downloadID PRIMARY KEY (input_directory, input_name));',
'INSERT INTO db_version (db_version) VALUES (2);',
]
for query in queries:
self.connection.action(query)
else:
cur_db_version = self.check_db_version()
if cur_db_version < MIN_DB_VERSION:
logger.log_error_and_exit(u'Your database version ({current}) is too old to migrate '
u'from what this version of nzbToMedia supports ({min}).'
u'\nPlease remove nzbtomedia.db file to begin fresh.'.format
(current=cur_db_version, min=MIN_DB_VERSION))
if cur_db_version > MAX_DB_VERSION:
logger.log_error_and_exit(u'Your database version ({current}) has been incremented '
u'past what this version of nzbToMedia supports ({max}).'
u'\nIf you have used other forks of nzbToMedia, your database '
u'may be unusable due to their modifications.'.format
(current=cur_db_version, max=MAX_DB_VERSION))
if cur_db_version < MAX_DB_VERSION: # We need to upgrade.
queries = [
'CREATE TABLE downloads2 (input_directory TEXT, input_name TEXT, input_hash TEXT, input_id TEXT, client_agent TEXT, status INTEGER, last_update NUMERIC, CONSTRAINT pk_downloadID PRIMARY KEY (input_directory, input_name));',
'INSERT INTO downloads2 SELECT * FROM downloads;',
'DROP TABLE IF EXISTS downloads;',
'ALTER TABLE downloads2 RENAME TO downloads;',
'INSERT INTO db_version (db_version) VALUES (2);',
]
for query in queries:
self.connection.action(query)
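The test()/execute() pair follows the schema-migration convention used throughout nzbToMedia: test() reports whether the database is already current, and execute() applies the step. The driver that walks the migrations lives in main_db and is not part of this diff; a hypothetical sketch of the pattern it implies:

def upgrade_database(connection, migrations):
    """Hypothetical driver: apply each migration whose test() says it is needed."""
    for migration in migrations:
        upgrader = migration(connection)
        if not upgrader.test():  # test() returns True when no update is needed
            upgrader.execute()

# usage sketch: upgrade_database(db_connection, [InitialSchema])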


@@ -1,2 +0,0 @@
# coding=utf-8
__all__ = ["mainDB"]


@@ -1,65 +0,0 @@
# coding=utf-8
from core import logger, nzbToMediaDB
from core.nzbToMediaUtil import backupVersionedFile
MIN_DB_VERSION = 1 # oldest db version we support migrating from
MAX_DB_VERSION = 2
def backupDatabase(version):
logger.info("Backing up database before upgrade")
if not backupVersionedFile(nzbToMediaDB.dbFilename(), version):
logger.log_error_and_exit("Database backup failed, abort upgrading database")
else:
logger.info("Proceeding with upgrade")
# ======================
# = Main DB Migrations =
# ======================
# Add new migrations at the bottom of the list; subclass the previous migration.
class InitialSchema(nzbToMediaDB.SchemaUpgrade):
def test(self):
no_update = False
if self.hasTable("db_version"):
cur_db_version = self.checkDBVersion()
no_update = not cur_db_version < MAX_DB_VERSION
return no_update
def execute(self):
if not self.hasTable("downloads") and not self.hasTable("db_version"):
queries = [
"CREATE TABLE db_version (db_version INTEGER);",
"CREATE TABLE downloads (input_directory TEXT, input_name TEXT, input_hash TEXT, input_id TEXT, client_agent TEXT, status INTEGER, last_update NUMERIC, CONSTRAINT pk_downloadID PRIMARY KEY (input_directory, input_name));",
"INSERT INTO db_version (db_version) VALUES (2);"
]
for query in queries:
self.connection.action(query)
else:
cur_db_version = self.checkDBVersion()
if cur_db_version < MIN_DB_VERSION:
logger.log_error_and_exit(u"Your database version ({current}) is too old to migrate "
u"from what this version of nzbToMedia supports ({min})."
u"\nPlease remove nzbtomedia.db file to begin fresh.".format
(current=cur_db_version, min=MIN_DB_VERSION))
if cur_db_version > MAX_DB_VERSION:
logger.log_error_and_exit(u"Your database version ({current}) has been incremented "
u"past what this version of nzbToMedia supports ({max})."
u"\nIf you have used other forks of nzbToMedia, your database "
u"may be unusable due to their modifications.".format
(current=cur_db_version, max=MAX_DB_VERSION))
if cur_db_version < MAX_DB_VERSION: # We need to upgrade.
queries = [
"CREATE TABLE downloads2 (input_directory TEXT, input_name TEXT, input_hash TEXT, input_id TEXT, client_agent TEXT, status INTEGER, last_update NUMERIC, CONSTRAINT pk_downloadID PRIMARY KEY (input_directory, input_name));",
"INSERT INTO downloads2 SELECT * FROM downloads;",
"DROP TABLE IF EXISTS downloads;",
"ALTER TABLE downloads2 RENAME TO downloads;",
"INSERT INTO db_version (db_version) VALUES (2);"
]
for query in queries:
self.connection.action(query)


@@ -1 +1,192 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import platform
import shutil
import stat
import subprocess
from subprocess import Popen, call
from time import sleep
import core
def extract(file_path, output_destination):
success = 0
# Using Windows
if platform.system() == 'Windows':
if not os.path.exists(core.SEVENZIP):
core.logger.error('EXTRACTOR: Could not find 7-zip, Exiting')
return False
wscriptlocation = os.path.join(os.environ['WINDIR'], 'system32', 'wscript.exe')
invislocation = os.path.join(core.APP_ROOT, 'core', 'extractor', 'bin', 'invisible.vbs')
cmd_7zip = [wscriptlocation, invislocation, str(core.SHOWEXTRACT), core.SEVENZIP, 'x', '-y']
ext_7zip = ['.rar', '.zip', '.tar.gz', '.tgz', '.tar.bz2', '.tbz', '.tar.lzma', '.tlz', '.7z', '.xz', '.gz']
extract_commands = dict.fromkeys(ext_7zip, cmd_7zip)
# Using unix
else:
required_cmds = ['unrar', 'unzip', 'tar', 'unxz', 'unlzma', '7zr', 'bunzip2', 'gunzip']
# ## Possible future support:
# gunzip: gz (cmd will delete original archive)
# ## the following do not extract to dest dir
# '.xz': ['xz', '-d --keep'],
# '.lzma': ['xz', '-d --format=lzma --keep'],
# '.bz2': ['bzip2', '-d --keep'],
extract_commands = {
'.rar': ['unrar', 'x', '-o+', '-y'],
'.tar': ['tar', '-xf'],
'.zip': ['unzip'],
'.tar.gz': ['tar', '-xzf'], '.tgz': ['tar', '-xzf'],
'.tar.bz2': ['tar', '-xjf'], '.tbz': ['tar', '-xjf'],
'.tar.lzma': ['tar', '--lzma', '-xf'], '.tlz': ['tar', '--lzma', '-xf'],
'.tar.xz': ['tar', '--xz', '-xf'], '.txz': ['tar', '--xz', '-xf'],
'.7z': ['7zr', 'x'],
'.gz': ['gunzip'],
}
# Test command exists and if not, remove
if not os.getenv('TR_TORRENT_DIR'):
devnull = open(os.devnull, 'w')
for cmd in required_cmds:
if call(['which', cmd], stdout=devnull,
stderr=devnull): # note, returns 0 if exists, or 1 if doesn't exist.
for k, v in extract_commands.items():
if cmd in v[0]:
if not call(['which', '7zr'], stdout=devnull, stderr=devnull): # we do have '7zr'
extract_commands[k] = ['7zr', 'x', '-y']
elif not call(['which', '7z'], stdout=devnull, stderr=devnull): # we do have '7z'
extract_commands[k] = ['7z', 'x', '-y']
elif not call(['which', '7za'], stdout=devnull, stderr=devnull): # we do have '7za'
extract_commands[k] = ['7za', 'x', '-y']
else:
core.logger.error('EXTRACTOR: {cmd} not found, '
'disabling support for {feature}'.format
(cmd=cmd, feature=k))
del extract_commands[k]
devnull.close()
else:
core.logger.warning('EXTRACTOR: Cannot determine which tool to use when called from Transmission')
if not extract_commands:
core.logger.warning('EXTRACTOR: No archive extracting programs found, plugin will be disabled')
ext = os.path.splitext(file_path)
cmd = []
if ext[1] in ('.gz', '.bz2', '.lzma'):
# Check if this is a tar
if os.path.splitext(ext[0])[1] == '.tar':
cmd = extract_commands['.tar{ext}'.format(ext=ext[1])]
else: # Try gunzip
cmd = extract_commands[ext[1]]
elif ext[1] in ('.1', '.01', '.001') and os.path.splitext(ext[0])[1] in ('.rar', '.zip', '.7z'):
cmd = extract_commands[os.path.splitext(ext[0])[1]]
elif ext[1] in ('.cb7', '.cba', '.cbr', '.cbt', '.cbz'): # don't extract these comic book archives.
return False
else:
if ext[1] in extract_commands:
cmd = extract_commands[ext[1]]
else:
core.logger.debug('EXTRACTOR: Unknown file type: {ext}'.format
(ext=ext[1]))
return False
# Create outputDestination folder
core.make_dir(output_destination)
if core.PASSWORDS_FILE and os.path.isfile(os.path.normpath(core.PASSWORDS_FILE)):
passwords = [line.strip() for line in open(os.path.normpath(core.PASSWORDS_FILE))]
else:
passwords = []
core.logger.info('Extracting {file} to {destination}'.format
(file=file_path, destination=output_destination))
core.logger.debug('Extracting {cmd} {file} {destination}'.format
(cmd=cmd, file=file_path, destination=output_destination))
orig_files = []
orig_dirs = []
for directory, subdirs, files in os.walk(output_destination):
for subdir in subdirs:
orig_dirs.append(os.path.join(directory, subdir))
for file in files:
orig_files.append(os.path.join(directory, file))
pwd = os.getcwd() # Get our Present Working Directory
os.chdir(output_destination) # Not all unpack commands accept full paths, so just extract into this directory
devnull = open(os.devnull, 'w')
try: # now works same for nt and *nix
info = None
cmd.append(file_path) # add filePath to final cmd arg.
if platform.system() == 'Windows':
info = subprocess.STARTUPINFO()
info.dwFlags |= subprocess.STARTF_USESHOWWINDOW
else:
cmd = core.NICENESS + cmd
cmd2 = list(cmd)  # copy so appended flags don't mutate cmd itself
if 'gunzip' not in cmd:  # gunzip doesn't support passwords
cmd2.append('-p-')  # don't prompt for password.
p = Popen(cmd2, stdout=devnull, stderr=devnull, startupinfo=info) # should extract files fine.
res = p.wait()
if res == 0: # Both Linux and Windows return 0 for successful.
core.logger.info('EXTRACTOR: Extraction was successful for {file} to {destination}'.format
(file=file_path, destination=output_destination))
success = 1
elif passwords and 'gunzip' not in cmd:
core.logger.info('EXTRACTOR: Attempting to extract with passwords')
for password in passwords:
if password == '': # if edited in windows or otherwise if blank lines.
continue
cmd2 = list(cmd)  # fresh copy for each password attempt
# append password here.
passcmd = '-p{pwd}'.format(pwd=password)
cmd2.append(passcmd)
p = Popen(cmd2, stdout=devnull, stderr=devnull, startupinfo=info) # should extract files fine.
res = p.wait()
if (res >= 0 and platform.system() == 'Windows') or res == 0:
core.logger.info('EXTRACTOR: Extraction was successful '
'for {file} to {destination} using password: {pwd}'.format
(file=file_path, destination=output_destination, pwd=password))
success = 1
break
else:
continue
except Exception:
core.logger.error('EXTRACTOR: Extraction failed for {file}. '
'Could not call command {cmd}'.format
(file=file_path, cmd=cmd))
os.chdir(pwd)
return False
devnull.close()
os.chdir(pwd) # Go back to our Original Working Directory
if success:
# sleep to let files finish writing to disk
sleep(3)
perms = stat.S_IMODE(os.lstat(os.path.split(file_path)[0]).st_mode)
for directory, subdirs, files in os.walk(output_destination):
for subdir in subdirs:
if os.path.join(directory, subdir) not in orig_dirs:
try:
os.chmod(os.path.join(directory, subdir), perms)
except Exception:
pass
for file in files:
if os.path.join(directory, file) not in orig_files:
try:
shutil.copymode(file_path, os.path.join(directory, file))
except Exception:
pass
return True
else:
core.logger.error('EXTRACTOR: Extraction failed for {file}. '
'Result was {result}'.format
(file=file_path, result=res))
return False
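Usage of the rewritten extractor is unchanged: pass an archive path and a destination, then check the boolean result. A sketch, assuming the function is importable as core.extractor.extract (the core/extractor/ layout in this diff suggests that, but the import path and file paths below are illustrative):

from core.extractor import extract  # assumed import path

archive = '/downloads/Show.S01E01.rar'  # example paths only
destination = '/downloads/Show.S01E01/'

if extract(archive, destination):
    print('Extraction succeeded; files are in {0}'.format(destination))
else:
    print('Extraction failed or the archive type is unsupported')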

Binary file not shown.

Binary file not shown.


@@ -3,19 +3,20 @@
License for use and distribution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7-Zip Copyright (C) 1999-2012 Igor Pavlov.
7-Zip Copyright (C) 1999-2018 Igor Pavlov.
Licenses for files are:
The licenses for files are:
1) 7z.dll: GNU LGPL + unRAR restriction
2) All other files: GNU LGPL
1) 7z.dll:
- The "GNU LGPL" as main license for most of the code
- The "GNU LGPL" with "unRAR license restriction" for some code
- The "BSD 3-clause License" for some code
2) All other files: the "GNU LGPL".
The GNU LGPL + unRAR restriction means that you must follow both
GNU LGPL rules and unRAR restriction rules.
Redistributions in binary form must reproduce related license information from this file.
Note:
You can use 7-Zip on any computer, including a computer in a commercial
Note:
You can use 7-Zip on any computer, including a computer in a commercial
organization. You don't need to register or pay for 7-Zip.
@@ -32,25 +33,58 @@
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You can receive a copy of the GNU Lesser General Public License from
You can receive a copy of the GNU Lesser General Public License from
http://www.gnu.org/
unRAR restriction
-----------------
The decompression engine for RAR archives was developed using source
BSD 3-clause License
--------------------
The "BSD 3-clause License" is used for the code in 7z.dll that implements LZFSE data decompression.
That code was derived from the code in the "LZFSE compression library" developed by Apple Inc,
that also uses the "BSD 3-clause License":
----
Copyright (c) 2015-2016, Apple Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder(s) nor the names of any contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
----
unRAR license restriction
-------------------------
The decompression engine for RAR archives was developed using source
code of unRAR program.
All copyrights to original unRAR code are owned by Alexander Roshal.
The license for original unRAR code has the following restriction:
The unRAR sources cannot be used to re-create the RAR compression algorithm,
which is proprietary. Distribution of modified unRAR sources in separate form
The unRAR sources cannot be used to re-create the RAR compression algorithm,
which is proprietary. Distribution of modified unRAR sources in separate form
or as a part of other software is permitted, provided that it is clearly
stated in the documentation and source comments that the code may
not be used to develop a RAR (WinRAR) compatible archiver.
--
Igor Pavlov
Igor Pavlov


@@ -1 +0,0 @@
start /B /wait wscript "%~dp0\invisible.vbs" %*


@@ -1,15 +1,15 @@
set args = WScript.Arguments
num = args.Count
if num = 0 then
WScript.Echo "Usage: [CScript | WScript] invis.vbs aScript.bat <some script arguments>"
if num < 2 then
WScript.Echo "Usage: [CScript | WScript] invis.vbs aScript.bat <visible or invisible 1/0> <some script arguments>"
WScript.Quit 1
end if
sargs = ""
if num > 1 then
if num > 2 then
sargs = " "
for k = 1 to num - 1
for k = 2 to num - 1
anArg = args.Item(k)
sargs = sargs & anArg & " "
next
@@ -17,4 +17,5 @@ end if
Set WshShell = WScript.CreateObject("WScript.Shell")
WshShell.Run """" & WScript.Arguments(0) & """" & sargs, 0, True
returnValue = WshShell.Run("""" & args(1) & """" & sargs, args(0), True)
WScript.Quit(returnValue)

Binary file not shown.

Binary file not shown.


@@ -3,19 +3,20 @@
License for use and distribution
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
7-Zip Copyright (C) 1999-2012 Igor Pavlov.
7-Zip Copyright (C) 1999-2018 Igor Pavlov.
Licenses for files are:
The licenses for files are:
1) 7z.dll: GNU LGPL + unRAR restriction
2) All other files: GNU LGPL
1) 7z.dll:
- The "GNU LGPL" as main license for most of the code
- The "GNU LGPL" with "unRAR license restriction" for some code
- The "BSD 3-clause License" for some code
2) All other files: the "GNU LGPL".
The GNU LGPL + unRAR restriction means that you must follow both
GNU LGPL rules and unRAR restriction rules.
Redistributions in binary form must reproduce related license information from this file.
Note:
You can use 7-Zip on any computer, including a computer in a commercial
Note:
You can use 7-Zip on any computer, including a computer in a commercial
organization. You don't need to register or pay for 7-Zip.
@@ -32,25 +33,58 @@
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You can receive a copy of the GNU Lesser General Public License from
You can receive a copy of the GNU Lesser General Public License from
http://www.gnu.org/
unRAR restriction
-----------------
The decompression engine for RAR archives was developed using source
BSD 3-clause License
--------------------
The "BSD 3-clause License" is used for the code in 7z.dll that implements LZFSE data decompression.
That code was derived from the code in the "LZFSE compression library" developed by Apple Inc,
that also uses the "BSD 3-clause License":
----
Copyright (c) 2015-2016, Apple Inc. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder(s) nor the names of any contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
----
unRAR license restriction
-------------------------
The decompression engine for RAR archives was developed using source
code of unRAR program.
All copyrights to original unRAR code are owned by Alexander Roshal.
The license for original unRAR code has the following restriction:
The unRAR sources cannot be used to re-create the RAR compression algorithm,
which is proprietary. Distribution of modified unRAR sources in separate form
The unRAR sources cannot be used to re-create the RAR compression algorithm,
which is proprietary. Distribution of modified unRAR sources in separate form
or as a part of other software is permitted, provided that it is clearly
stated in the documentation and source comments that the code may
not be used to develop a RAR (WinRAR) compatible archiver.
--
Igor Pavlov
Igor Pavlov


@@ -1,179 +0,0 @@
# coding=utf-8
import os
import platform
import shutil
import stat
from time import sleep
import core
from subprocess import call, Popen
import subprocess
def extract(filePath, outputDestination):
success = 0
# Using Windows
if platform.system() == 'Windows':
if not os.path.exists(core.SEVENZIP):
core.logger.error("EXTRACTOR: Could not find 7-zip, Exiting")
return False
invislocation = os.path.join(core.PROGRAM_DIR, 'core', 'extractor', 'bin', 'invisible.cmd')
cmd_7zip = [invislocation, core.SEVENZIP, "x", "-y"]
ext_7zip = [".rar", ".zip", ".tar.gz", "tgz", ".tar.bz2", ".tbz", ".tar.lzma", ".tlz", ".7z", ".xz"]
EXTRACT_COMMANDS = dict.fromkeys(ext_7zip, cmd_7zip)
# Using unix
else:
required_cmds = ["unrar", "unzip", "tar", "unxz", "unlzma", "7zr", "bunzip2"]
# ## Possible future suport:
# gunzip: gz (cmd will delete original archive)
# ## the following do not extract to dest dir
# ".xz": ["xz", "-d --keep"],
# ".lzma": ["xz", "-d --format=lzma --keep"],
# ".bz2": ["bzip2", "-d --keep"],
EXTRACT_COMMANDS = {
".rar": ["unrar", "x", "-o+", "-y"],
".tar": ["tar", "-xf"],
".zip": ["unzip"],
".tar.gz": ["tar", "-xzf"], ".tgz": ["tar", "-xzf"],
".tar.bz2": ["tar", "-xjf"], ".tbz": ["tar", "-xjf"],
".tar.lzma": ["tar", "--lzma", "-xf"], ".tlz": ["tar", "--lzma", "-xf"],
".tar.xz": ["tar", "--xz", "-xf"], ".txz": ["tar", "--xz", "-xf"],
".7z": ["7zr", "x"],
}
# Test command exists and if not, remove
if not os.getenv('TR_TORRENT_DIR'):
devnull = open(os.devnull, 'w')
for cmd in required_cmds:
if call(['which', cmd], stdout=devnull,
stderr=devnull): # note, returns 0 if exists, or 1 if doesn't exist.
for k, v in EXTRACT_COMMANDS.items():
if cmd in v[0]:
if not call(["which", "7zr"], stdout=devnull, stderr=devnull): # we do have "7zr"
EXTRACT_COMMANDS[k] = ["7zr", "x", "-y"]
elif not call(["which", "7z"], stdout=devnull, stderr=devnull): # we do have "7z"
EXTRACT_COMMANDS[k] = ["7z", "x", "-y"]
elif not call(["which", "7za"], stdout=devnull, stderr=devnull): # we do have "7za"
EXTRACT_COMMANDS[k] = ["7za", "x", "-y"]
else:
core.logger.error("EXTRACTOR: {cmd} not found, "
"disabling support for {feature}".format
(cmd=cmd, feature=k))
del EXTRACT_COMMANDS[k]
devnull.close()
else:
core.logger.warning("EXTRACTOR: Cannot determine which tool to use when called from Transmission")
if not EXTRACT_COMMANDS:
core.logger.warning("EXTRACTOR: No archive extracting programs found, plugin will be disabled")
ext = os.path.splitext(filePath)
cmd = []
if ext[1] in (".gz", ".bz2", ".lzma"):
# Check if this is a tar
if os.path.splitext(ext[0])[1] == ".tar":
cmd = EXTRACT_COMMANDS[".tar{ext}".format(ext=ext[1])]
elif ext[1] in (".1", ".01", ".001") and os.path.splitext(ext[0])[1] in (".rar", ".zip", ".7z"):
cmd = EXTRACT_COMMANDS[os.path.splitext(ext[0])[1]]
elif ext[1] in (".cb7", ".cba", ".cbr", ".cbt", ".cbz"): # don't extract these comic book archives.
return False
else:
if ext[1] in EXTRACT_COMMANDS:
cmd = EXTRACT_COMMANDS[ext[1]]
else:
core.logger.debug("EXTRACTOR: Unknown file type: {ext}".format
(ext=ext[1]))
return False
# Create outputDestination folder
core.makeDir(outputDestination)
if core.PASSWORDSFILE != "" and os.path.isfile(os.path.normpath(core.PASSWORDSFILE)):
passwords = [line.strip() for line in open(os.path.normpath(core.PASSWORDSFILE))]
else:
passwords = []
core.logger.info("Extracting {file} to {destination}".format
(file=filePath, destination=outputDestination))
core.logger.debug("Extracting {cmd} {file} {destination}".format
(cmd=cmd, file=filePath, destination=outputDestination))
origFiles = []
origDirs = []
for dir, subdirs, files in os.walk(outputDestination):
for subdir in subdirs:
origDirs.append(os.path.join(dir, subdir))
for file in files:
origFiles.append(os.path.join(dir, file))
pwd = os.getcwd() # Get our Present Working Directory
os.chdir(outputDestination) # Not all unpack commands accept full paths, so just extract into this directory
devnull = open(os.devnull, 'w')
try: # now works same for nt and *nix
info = None
cmd.append(filePath) # add filePath to final cmd arg.
if platform.system() == 'Windows':
info = subprocess.STARTUPINFO()
info.dwFlags |= subprocess.STARTF_USESHOWWINDOW
else:
cmd = core.NICENESS + cmd
cmd2 = cmd
cmd2.append("-p-") # don't prompt for password.
p = Popen(cmd2, stdout=devnull, stderr=devnull, startupinfo=info) # should extract files fine.
res = p.wait()
if (res >= 0 and os.name == 'nt') or res == 0: # for windows chp returns process id if successful or -1*Error code. Linux returns 0 for successful.
core.logger.info("EXTRACTOR: Extraction was successful for {file} to {destination}".format
(file=filePath, destination=outputDestination))
success = 1
elif len(passwords) > 0:
core.logger.info("EXTRACTOR: Attempting to extract with passwords")
for password in passwords:
if password == "": # if edited in windows or otherwise if blank lines.
continue
cmd2 = cmd
# append password here.
passcmd = "-p{pwd}".format(pwd=password)
cmd2.append(passcmd)
p = Popen(cmd2, stdout=devnull, stderr=devnull, startupinfo=info) # should extract files fine.
res = p.wait()
if (res >= 0 and platform == 'Windows') or res == 0:
core.logger.info("EXTRACTOR: Extraction was successful "
"for {file} to {destination} using password: {pwd}".format
(file=filePath, destination=outputDestination, pwd=password))
success = 1
break
else:
continue
except:
core.logger.error("EXTRACTOR: Extraction failed for {file}. "
"Could not call command {cmd}".format
(file=filePath, cmd=cmd))
os.chdir(pwd)
return False
devnull.close()
os.chdir(pwd) # Go back to our Original Working Directory
if success:
# sleep to let files finish writing to disk
sleep(3)
perms = stat.S_IMODE(os.lstat(os.path.split(filePath)[0]).st_mode)
for dir, subdirs, files in os.walk(outputDestination):
for subdir in subdirs:
if os.path.join(dir, subdir) not in origDirs:  # compare against pre-existing directories, not files
try:
os.chmod(os.path.join(dir, subdir), perms)
except:
pass
for file in files:
if os.path.join(dir, file) not in origFiles:
try:
shutil.copymode(filePath, os.path.join(dir, file))
except:
pass
return True
else:
core.logger.error("EXTRACTOR: Extraction failed for {file}. "
"Result was {result}".format
(file=filePath, result=res))
return False
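For context, a hypothetical sketch of how this extractor is invoked (assuming the code above is the body of extract(filePath, outputDestination) in core's extractor module, as elsewhere in this codebase; the import path and file paths are illustrative):

from core.extractor import extract

archive = '/downloads/complete/Some.Release/some.release.rar'
dest = '/downloads/extracted/Some.Release'

# extract() returns True on success and False on any failure
# (unsupported extension, comic-book archive, or a failed unpack command).
if extract(archive, dest):
    print('Extraction succeeded; files are in {0}'.format(dest))
else:
    print('Extraction failed or was skipped for {0}'.format(archive))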


@@ -1,66 +0,0 @@
# coding=utf-8
import requests
from six import iteritems
class GitHub(object):
"""
Simple api wrapper for the Github API v3.
"""
def __init__(self, github_repo_user, github_repo, branch='master'):
self.github_repo_user = github_repo_user
self.github_repo = github_repo
self.branch = branch
def _access_API(self, path, params=None):
"""
Access the API at the path given and with the optional params given.
"""
url = 'https://api.github.com/{path}'.format(path='/'.join(path))
if params and type(params) is dict:
url += '?{params}'.format(params='&'.join(['{key}={value}'.format(key=k, value=v)
for k, v in iteritems(params)]))
data = requests.get(url, verify=False)
if data.ok:
json_data = data.json()
return json_data
else:
return []
def commits(self):
"""
Uses the API to get a list of the 100 most recent commits from the specified user/repo/branch, starting from HEAD.
user: The github username of the person whose repo you're querying
repo: The repo name to query
branch: Optional, the branch name to show commits from
Returns a deserialized json object containing the commit info. See http://developer.github.com/v3/repos/commits/
"""
access_API = self._access_API(['repos', self.github_repo_user, self.github_repo, 'commits'],
params={'per_page': 100, 'sha': self.branch})
return access_API
def compare(self, base, head, per_page=1):
"""
Uses the API to get a list of compares between base and head.
user: The github username of the person whose repo you're querying
repo: The repo name to query
base: Start compare from branch
head: Current commit sha or branch name to compare
per_page: number of items per page
Returns a deserialized json object containing the compare info. See http://developer.github.com/v3/repos/commits/
"""
access_API = self._access_API(
['repos', self.github_repo_user, self.github_repo, 'compare', '{base}...{head}'.format(base=base, head=head)],
params={'per_page': per_page})
return access_API

core/github_api.py (new file, 59 lines)

@@ -0,0 +1,59 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
class GitHub(object):
"""Simple api wrapper for the Github API v3."""
def __init__(self, github_repo_user, github_repo, branch='master'):
self.github_repo_user = github_repo_user
self.github_repo = github_repo
self.branch = branch
def _access_api(self, path, params=None):
"""Access API at given an API path and optional parameters."""
url = 'https://api.github.com/{path}'.format(path='/'.join(path))
data = requests.get(url, params=params, verify=False)
return data.json() if data.ok else []
def commits(self):
"""
Get the 100 most recent commits from the specified user/repo/branch, starting from HEAD.
user: The github username of the person whose repo you're querying
repo: The repo name to query
branch: Optional, the branch name to show commits from
Returns a deserialized json object containing the commit info. See http://developer.github.com/v3/repos/commits/
"""
return self._access_api(
['repos', self.github_repo_user, self.github_repo, 'commits'],
params={'per_page': 100, 'sha': self.branch},
)
def compare(self, base, head, per_page=1):
"""
Get compares between base and head.
user: The github username of the person whose repo you're querying
repo: The repo name to query
base: Start compare from branch
head: Current commit sha or branch name to compare
per_page: number of items per page
Returns a deserialized json object containing the compare info. See http://developer.github.com/v3/repos/commits/
"""
return self._access_api(
['repos', self.github_repo_user, self.github_repo, 'compare',
'{base}...{head}'.format(base=base, head=head)],
params={'per_page': per_page},
)
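A hypothetical usage sketch for this wrapper (the repository coordinates below are illustrative; note the wrapper disables TLS verification, so requests will emit warnings):

from core.github_api import GitHub

gh = GitHub('clinton-hall', 'nzbToMedia', branch='nightly')

# Latest commits on the configured branch (deserialized JSON, or [] on error).
for commit in gh.commits()[:3]:
    print(commit['sha'][:7], commit['commit']['message'].splitlines()[0])

# Compare a base branch against a head commit or branch name.
diff = gh.compare(base='master', head='nightly')
if isinstance(diff, dict):  # _access_api returns [] on any HTTP error
    print(diff.get('status'), diff.get('ahead_by'))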


@@ -1 +0,0 @@
# coding=utf-8


@@ -1,123 +0,0 @@
# coding=utf-8
# Linktastic Module
# - A python2/3 compatible module that can create hardlinks/symlinks on windows-based systems
#
# Linktastic is distributed under the MIT License. The follow are the terms and conditions of using Linktastic.
#
# The MIT License (MIT)
# Copyright (c) 2012 Solipsis Development
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and
# associated documentation files (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial
# portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
# LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import subprocess
from subprocess import CalledProcessError
import os
if os.name == 'nt':
info = subprocess.STARTUPINFO()
info.dwFlags |= subprocess.STARTF_USESHOWWINDOW
# Prevent spaces from messing with us!
def _escape_param(param):
return '"{0}"'.format(param)
# Private function to create link on nt-based systems
def _link_windows(src, dest):
try:
subprocess.check_output(
'cmd /C mklink /H {0} {1}'.format(_escape_param(dest), _escape_param(src)),
stderr=subprocess.STDOUT, startupinfo=info)
except CalledProcessError as err:
raise IOError(err.output.decode('utf-8'))
# TODO, find out what kind of messages Windows sends us from mklink
# print(stdout)
# assume if they ret-coded 0 we're good
def _symlink_windows(src, dest):
try:
subprocess.check_output(
'cmd /C mklink {0} {1}'.format(_escape_param(dest), _escape_param(src)),
stderr=subprocess.STDOUT, startupinfo=info)
except CalledProcessError as err:
raise IOError(err.output.decode('utf-8'))
# TODO, find out what kind of messages Windows sends us from mklink
# print(stdout)
# assume if they ret-coded 0 we're good
def _dirlink_windows(src, dest):
try:
subprocess.check_output(
'cmd /C mklink /J {0} {1}'.format(_escape_param(dest), _escape_param(src)),
stderr=subprocess.STDOUT, startupinfo=info)
except CalledProcessError as err:
raise IOError(err.output.decode('utf-8'))
# TODO, find out what kind of messages Windows sends us from mklink
# print(stdout)
# assume if they ret-coded 0 we're good
def _junctionlink_windows(src, dest):
try:
subprocess.check_output(
'cmd /C mklink /D {0} {1}'.format(_escape_param(dest), _escape_param(src)),
stderr=subprocess.STDOUT, startupinfo=info)
except CalledProcessError as err:
raise IOError(err.output.decode('utf-8'))
# TODO, find out what kind of messages Windows sends us from mklink
# print(stdout)
# assume if they ret-coded 0 we're good
# Create a hard link to src named as dest
# This version of link, unlike os.link, supports nt systems as well
def link(src, dest):
if os.name == 'nt':
_link_windows(src, dest)
else:
os.link(src, dest)
# Create a symlink to src named as dest, but don't fail if you're on nt
def symlink(src, dest):
if os.name == 'nt':
_symlink_windows(src, dest)
else:
os.symlink(src, dest)
# Create a symlink to src named as dest, but don't fail if you're on nt
def dirlink(src, dest):
if os.name == 'nt':
_dirlink_windows(src, dest)
else:
os.symlink(src, dest)
# Create a symlink to src named as dest, but don't fail if you're on nt
def junctionlink(src, dest):
if os.name == 'nt':
_junctionlink_windows(src, dest)
else:
os.symlink(src, dest)
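A hypothetical usage sketch (paths are illustrative). Note that, as written, dirlink() issues mklink /J (a junction) while junctionlink() issues mklink /D (a directory symlink), so the two names are swapped relative to the Windows flags they wrap:

import os

src_file = os.path.join('downloads', 'movie.mkv')
dest_file = os.path.join('library', 'movie.mkv')

link(src_file, dest_file)  # hard link: "mklink /H" on Windows, os.link elsewhere
# Alternatives, each raising IOError if the Windows mklink command fails:
# symlink(src_file, dest_file)                    # "mklink" / os.symlink
# dirlink('downloads', 'library/downloads')       # "mklink /J" / os.symlink
# junctionlink('downloads', 'library/downloads')  # "mklink /D" / os.symlink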


@@ -1,11 +1,19 @@
# coding=utf-8
from __future__ import with_statement
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import logging
import os
import sys
import threading
import logging
import core
import functools
# number of log files to keep
NUM_LOGS = 3
@@ -58,10 +66,10 @@ class NTMRotatingLogHandler(object):
handler.flush()
handler.close()
def initLogging(self, consoleLogging=True):
def init_logging(self, console_logging=True):
if consoleLogging:
self.console_logging = consoleLogging
if console_logging:
self.console_logging = console_logging
old_handler = None
@@ -85,9 +93,9 @@ class NTMRotatingLogHandler(object):
console.setFormatter(DispatchingFormatter(
{'nzbtomedia': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S'),
'postprocess': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S'),
'db': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S')
'db': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S'),
},
logging.Formatter('%(message)s'), ))
logging.Formatter('%(message)s')))
# add the handler to the root logger
logging.getLogger('nzbtomedia').addHandler(console)
@@ -111,10 +119,7 @@ class NTMRotatingLogHandler(object):
self.close_log(old_handler)
def _config_handler(self):
"""
Configure a file handler to log at file_name and return it.
"""
"""Configure a file handler to log at file_name and return it."""
file_handler = logging.FileHandler(self.log_file_path, encoding='utf-8')
file_handler.setLevel(DB)
@@ -122,29 +127,29 @@ class NTMRotatingLogHandler(object):
file_handler.setFormatter(DispatchingFormatter(
{'nzbtomedia': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S'),
'postprocess': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S'),
'db': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S')
'db': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S'),
},
logging.Formatter('%(message)s'), ))
logging.Formatter('%(message)s')))
return file_handler
def _log_file_name(self, i):
"""
Returns a numbered log file name depending on i. If i==0 it just uses logName, if not it appends
it to the extension (blah.log.3 for i == 3)
Return a numbered log file name depending on i.
If i==0 it just uses logName, if not it appends it to the extension
e.g. (blah.log.3 for i == 3)
i: Log number to use
"""
return self.log_file_path + ('.{0}'.format(i) if i else '')
def _num_logs(self):
"""
Scans the log folder and figures out how many log files there are already on disk
Scan the log folder and figure out how many log files there are already on disk.
Returns: The number of the last used file (e.g. mylog.log.3 would return 3). If there are no logs it returns -1.
"""
cur_log = 0
while os.path.isfile(self._log_file_name(cur_log)):
cur_log += 1
@@ -180,7 +185,7 @@ class NTMRotatingLogHandler(object):
pp_logger.addHandler(new_file_handler)
db_logger.addHandler(new_file_handler)
def log(self, toLog, logLevel=MESSAGE, section='MAIN'):
def log(self, to_log, log_level=MESSAGE, section='MAIN'):
with self.log_lock:
@@ -193,35 +198,34 @@ class NTMRotatingLogHandler(object):
self.writes_since_check += 1
try:
message = u"{0}: {1}".format(section.upper(), toLog)
message = u'{0}: {1}'.format(section.upper(), to_log)
except UnicodeError:
message = u"{0}: Message contains non-utf-8 string".format(section.upper())
message = u'{0}: Message contains non-utf-8 string'.format(section.upper())
out_line = message
ntm_logger = logging.getLogger('nzbtomedia')
pp_logger = logging.getLogger('postprocess')
db_logger = logging.getLogger('db')
setattr(pp_logger, 'postprocess', lambda *args: pp_logger.log(POSTPROCESS, *args))
setattr(db_logger, 'db', lambda *args: db_logger.log(DB, *args))
pp_logger.postprocess = functools.partial(pp_logger.log, POSTPROCESS)
db_logger.db = functools.partial(db_logger.log, DB)
try:
if logLevel == DEBUG:
if log_level == DEBUG:
if core.LOG_DEBUG == 1:
ntm_logger.debug(out_line)
elif logLevel == MESSAGE:
elif log_level == MESSAGE:
ntm_logger.info(out_line)
elif logLevel == WARNING:
elif log_level == WARNING:
ntm_logger.warning(out_line)
elif logLevel == ERROR:
elif log_level == ERROR:
ntm_logger.error(out_line)
elif logLevel == POSTPROCESS:
elif log_level == POSTPROCESS:
pp_logger.postprocess(out_line)
elif logLevel == DB:
elif log_level == DB:
if core.LOG_DB == 1:
db_logger.db(out_line)
else:
ntm_logger.info(logLevel, out_line)
ntm_logger.info(out_line)  # logging.info() takes the message first; log_level is not a message
except ValueError:
pass
@@ -249,32 +253,32 @@ class DispatchingFormatter(object):
ntm_log_instance = NTMRotatingLogHandler(core.LOG_FILE, NUM_LOGS, LOG_SIZE)
def log(toLog, logLevel=MESSAGE, section='MAIN'):
ntm_log_instance.log(toLog, logLevel, section)
def log(to_log, log_level=MESSAGE, section='MAIN'):
ntm_log_instance.log(to_log, log_level, section)
def info(toLog, section='MAIN'):
log(toLog, MESSAGE, section)
def info(to_log, section='MAIN'):
log(to_log, MESSAGE, section)
def error(toLog, section='MAIN'):
log(toLog, ERROR, section)
def error(to_log, section='MAIN'):
log(to_log, ERROR, section)
def warning(toLog, section='MAIN'):
log(toLog, WARNING, section)
def warning(to_log, section='MAIN'):
log(to_log, WARNING, section)
def debug(toLog, section='MAIN'):
log(toLog, DEBUG, section)
def debug(to_log, section='MAIN'):
log(to_log, DEBUG, section)
def postprocess(toLog, section='POSTPROCESS'):
log(toLog, POSTPROCESS, section)
def postprocess(to_log, section='POSTPROCESS'):
log(to_log, POSTPROCESS, section)
def db(toLog, section='DB'):
log(toLog, DB, section)
def db(to_log, section='DB'):
log(to_log, DB, section)
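# A brief usage sketch of the module-level helpers above (messages illustrative):
#
#     from core import logger
#
#     logger.info('Processing started')              # MESSAGE level, 'MAIN' section
#     logger.debug('Raw args: ...')                  # emitted only when core.LOG_DEBUG == 1
#     logger.warning('Falling back to defaults')
#     logger.error('Unable to connect to the download client')
#     logger.postprocess('Moved 3 files')            # routed to the 'postprocess' logger
#     logger.db('SELECT db_version FROM db_version') # emitted only when core.LOG_DB == 1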
def log_error_and_exit(error_msg):

core/main_db.py (new file, 340 lines)

@@ -0,0 +1,340 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os.path
import re
import sqlite3
import sys
import time
from six import text_type, PY2
import core
from core import logger
from core import permissions
if PY2:
class Row(sqlite3.Row, object):
"""
Row factory that uses Byte Strings for keys.
The sqlite3.Row in Python 2 does not support unicode keys.
This overrides __getitem__ to attempt to encode the key to bytes first.
"""
def __getitem__(self, item):
"""
Get an item from the row by index or key.
:param item: Index or Key of item to return.
:return: An item from the sqlite3.Row.
"""
try:
# sqlite3.Row column names should be Bytes in Python 2
item = item.encode()
except AttributeError:
pass # assume item is a numeric index
return super(Row, self).__getitem__(item)
else:
from sqlite3 import Row
def db_filename(filename='nzbtomedia.db', suffix=None):
"""
Return the correct location of the database file.
@param filename: The sqlite database filename to use. If not specified,
defaults to nzbtomedia.db
@param suffix: The suffix to append to the filename. A '.' will be added
automatically, i.e. suffix='v0' will make dbfile.db.v0
@return: the correct location of the database file.
"""
if suffix:
filename = '{0}.{1}'.format(filename, suffix)
return core.os.path.join(core.APP_ROOT, filename)
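# For illustration (assuming core.APP_ROOT == '/opt/nzbToMedia'):
#     db_filename()                    -> '/opt/nzbToMedia/nzbtomedia.db'
#     db_filename(suffix='v0')         -> '/opt/nzbToMedia/nzbtomedia.db.v0'
#     db_filename('backup.db', 'old')  -> '/opt/nzbToMedia/backup.db.old'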
class DBConnection(object):
def __init__(self, filename='nzbtomedia.db', suffix=None, row_type=None):
self.filename = filename
path = db_filename(filename)
try:
self.connection = sqlite3.connect(path, 20)
except sqlite3.OperationalError as error:
if os.path.exists(path):
logger.error('Please check permissions on database: {0}'.format(path))
else:
logger.error('Database file does not exist')
logger.error('Please check permissions on directory: {0}'.format(path))
path = os.path.dirname(path)
mode = permissions.mode(path)
owner, group = permissions.ownership(path)
logger.error(
"=== PERMISSIONS ===========================\n"
" Path : {0}\n"
" Mode : {1}\n"
" Owner: {2}\n"
" Group: {3}\n"
"===========================================".format(path, mode, owner, group),
)
else:
self.connection.row_factory = Row
def check_db_version(self):
result = None
try:
result = self.select('SELECT db_version FROM db_version')
except sqlite3.OperationalError as e:
if 'no such table: db_version' in e.args[0]:
return 0
if result:
return int(result[0]['db_version'])
else:
return 0
def fetch(self, query, args=None):
if query is None:
return
sql_result = None
attempt = 0
while attempt < 5:
try:
if args is None:
logger.log('{name}: {query}'.format(name=self.filename, query=query), logger.DB)
cursor = self.connection.cursor()
cursor.execute(query)
sql_result = cursor.fetchone()[0]
else:
logger.log('{name}: {query} with args {args}'.format
(name=self.filename, query=query, args=args), logger.DB)
cursor = self.connection.cursor()
cursor.execute(query, args)
sql_result = cursor.fetchone()[0]
# get out of the connection attempt loop since we were successful
break
except sqlite3.OperationalError as error:
if 'unable to open database file' in error.args[0] or 'database is locked' in error.args[0]:
logger.log(u'DB error: {msg}'.format(msg=error), logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log(u'DB error: {msg}'.format(msg=error), logger.ERROR)
raise
except sqlite3.DatabaseError as error:
logger.log(u'Fatal error executing query: {msg}'.format(msg=error), logger.ERROR)
raise
return sql_result
def mass_action(self, querylist, log_transaction=False):
if querylist is None:
return
sql_result = []
attempt = 0
while attempt < 5:
try:
for qu in querylist:
if len(qu) == 1:
if log_transaction:
logger.log(qu[0], logger.DEBUG)
sql_result.append(self.connection.execute(qu[0]))
elif len(qu) > 1:
if log_transaction:
logger.log(u'{query} with args {args}'.format(query=qu[0], args=qu[1]), logger.DEBUG)
sql_result.append(self.connection.execute(qu[0], qu[1]))
self.connection.commit()
logger.log(u'Transaction with {x} queries executed'.format(x=len(querylist)), logger.DEBUG)
return sql_result
except sqlite3.OperationalError as error:
sql_result = []
if self.connection:
self.connection.rollback()
if 'unable to open database file' in error.args[0] or 'database is locked' in error.args[0]:
logger.log(u'DB error: {msg}'.format(msg=error), logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log(u'DB error: {msg}'.format(msg=error), logger.ERROR)
raise
except sqlite3.DatabaseError as error:
if self.connection:
self.connection.rollback()
logger.log(u'Fatal error executing query: {msg}'.format(msg=error), logger.ERROR)
raise
return sql_result
def action(self, query, args=None):
if query is None:
return
sql_result = None
attempt = 0
while attempt < 5:
try:
if args is None:
logger.log(u'{name}: {query}'.format(name=self.filename, query=query), logger.DB)
sql_result = self.connection.execute(query)
else:
logger.log(u'{name}: {query} with args {args}'.format
(name=self.filename, query=query, args=args), logger.DB)
sql_result = self.connection.execute(query, args)
self.connection.commit()
# get out of the connection attempt loop since we were successful
break
except sqlite3.OperationalError as error:
if 'unable to open database file' in error.args[0] or 'database is locked' in error.args[0]:
logger.log(u'DB error: {msg}'.format(msg=error), logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log(u'DB error: {msg}'.format(msg=error), logger.ERROR)
raise
except sqlite3.DatabaseError as error:
logger.log(u'Fatal error executing query: {msg}'.format(msg=error), logger.ERROR)
raise
return sql_result
def select(self, query, args=None):
sql_results = self.action(query, args).fetchall()
if sql_results is None:
return []
return sql_results
def upsert(self, table_name, value_dict, key_dict):
def gen_params(my_dict):
return [
'{key} = ?'.format(key=k)
for k in my_dict.keys()
]
changes_before = self.connection.total_changes
items = list(value_dict.values()) + list(key_dict.values())
self.action(
'UPDATE {table} '
'SET {params} '
'WHERE {conditions}'.format(
table=table_name,
params=', '.join(gen_params(value_dict)),
conditions=' AND '.join(gen_params(key_dict)),
),
items,
)
if self.connection.total_changes == changes_before:
self.action(
'INSERT OR IGNORE INTO {table} ({columns}) '
'VALUES ({values})'.format(
table=table_name,
columns=', '.join(map(text_type, value_dict.keys())),
values=', '.join(['?'] * len(value_dict.values())),
),
list(value_dict.values()),
)
def table_info(self, table_name):
# FIXME ? binding is not supported here, but I cannot find a way to escape a string manually
cursor = self.connection.execute('PRAGMA table_info({0})'.format(table_name))
return {
column['name']: {'type': column['type']}
for column in cursor
}
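# A hypothetical upsert() call (table and column names are illustrative):
#
#     db = DBConnection()
#     db.upsert(
#         'history',
#         value_dict={'status': 'processed'},
#         key_dict={'release': 'Some.Release.2019'},
#     )
#
# This first runs UPDATE history SET status = ? WHERE release = ?; only if no
# row changed does it fall back to INSERT OR IGNORE -- note that, as written,
# the fallback inserts only the value_dict columns.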
def sanity_check_database(connection, sanity_check):
sanity_check(connection).check()
class DBSanityCheck(object):
def __init__(self, connection):
self.connection = connection
def check(self):
pass
# ===============
# = Upgrade API =
# ===============
def upgrade_database(connection, schema):
logger.log(u'Checking database structure...', logger.MESSAGE)
try:
_process_upgrade(connection, schema)
except Exception as error:
logger.error(error)
sys.exit(1)
def pretty_name(class_name):
return ' '.join([x.group() for x in re.finditer('([A-Z])([a-z0-9]+)', class_name)])
def _process_upgrade(connection, upgrade_class):
instance = upgrade_class(connection)
logger.log(u'Checking {name} database upgrade'.format
(name=pretty_name(upgrade_class.__name__)), logger.DEBUG)
if not instance.test():
logger.log(u'Database upgrade required: {name}'.format
(name=pretty_name(upgrade_class.__name__)), logger.MESSAGE)
try:
instance.execute()
except sqlite3.DatabaseError as error:
print(u'Error in {name}: {msg}'.format
(name=upgrade_class.__name__, msg=error))
raise
logger.log(u'{name} upgrade completed'.format
(name=upgrade_class.__name__), logger.DEBUG)
else:
logger.log(u'{name} upgrade not required'.format
(name=upgrade_class.__name__), logger.DEBUG)
for upgradeSubClass in upgrade_class.__subclasses__():
_process_upgrade(connection, upgradeSubClass)
# Base migration class. All future DB changes should be subclassed from this class
class SchemaUpgrade(object):
def __init__(self, connection):
self.connection = connection
def has_table(self, table_name):
return len(self.connection.action('SELECT 1 FROM sqlite_master WHERE name = ?;', (table_name,)).fetchall()) > 0
def has_column(self, table_name, column):
return column in self.connection.table_info(table_name)
def add_column(self, table, column, data_type='NUMERIC', default=0):
self.connection.action('ALTER TABLE {0} ADD {1} {2}'.format(table, column, data_type))
self.connection.action('UPDATE {0} SET {1} = ?'.format(table, column), (default,))
def check_db_version(self):
result = self.connection.select('SELECT db_version FROM db_version')
if result:
return int(result[-1]['db_version'])
else:
return 0
def inc_db_version(self):
new_version = self.check_db_version() + 1
self.connection.action('UPDATE db_version SET db_version = ?', [new_version])
return new_version
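Because _process_upgrade() recurses through SchemaUpgrade.__subclasses__(), a migration is simply a subclass providing test() and execute(). A hypothetical sketch (the table and column names are illustrative):

class AddSkippedColumn(SchemaUpgrade):
    def test(self):
        # True once the schema is current, so the upgrade is skipped.
        return self.has_column('history', 'skipped')

    def execute(self):
        self.add_column('history', 'skipped', data_type='NUMERIC', default=0)
        self.inc_db_version()

# upgrade_database(DBConnection(), AddSkippedColumn) would then apply this
# migration (and any subclasses of it) on the next run.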


@@ -1,115 +0,0 @@
# coding=utf-8
import requests
from six import iteritems
import core
from core import logger
def autoFork(section, inputCategory):
# auto-detect correct section
# config settings
cfg = dict(core.CFG[section][inputCategory])
host = cfg.get("host")
port = cfg.get("port")
username = cfg.get("username")
password = cfg.get("password")
apikey = cfg.get("apikey")
ssl = int(cfg.get("ssl", 0))
web_root = cfg.get("web_root", "")
replace = {'sickrage':'SickRage', 'sickchill':'SickChill', 'sickgear':'SickGear', 'medusa':'Medusa', 'sickbeard-api':'SickBeard-api'}
f1 = replace[cfg.get("fork", "auto")] if cfg.get("fork", "auto") in replace else cfg.get("fork", "auto")
try:
fork = core.FORKS.items()[core.FORKS.keys().index(f1)]
except:
fork = "auto"
protocol = "https://" if ssl else "http://"
detected = False
if section == "NzbDrone":
logger.info("Attempting to verify {category} fork".format
(category=inputCategory))
url = "{protocol}{host}:{port}{root}/api/rootfolder".format(
protocol=protocol, host=host, port=port, root=web_root)
headers = {"X-Api-Key": apikey}
try:
r = requests.get(url, headers=headers, stream=True, verify=False)
except requests.ConnectionError:
logger.warning("Could not connect to {0}:{1} to verify fork!".format(section, inputCategory))
r = None  # avoid referencing an unbound name below
if r is None or not r.ok:
logger.warning("Connection to {section}:{category} failed! "
"Check your configuration".format
(section=section, category=inputCategory))
fork = ['default', {}]
elif fork == "auto":
params = dict(core.ALL_FORKS)  # work on a copy so the pops below don't mutate the global
rem_params = []
logger.info("Attempting to auto-detect {category} fork".format(category=inputCategory))
# Define the order to test. Default must be first since the default fork doesn't reject parameters,
# then proceed in order of most unique parameters.
if apikey:
url = "{protocol}{host}:{port}{root}/api/{apikey}/?cmd=help&subject=postprocess".format(
protocol=protocol, host=host, port=port, root=web_root, apikey=apikey)
else:
url = "{protocol}{host}:{port}{root}/home/postprocess/".format(
protocol=protocol, host=host, port=port, root=web_root)
# attempting to auto-detect fork
try:
s = requests.Session()
if not apikey and username and password:
login = "{protocol}{host}:{port}{root}/login".format(
protocol=protocol, host=host, port=port, root=web_root)
login_params = {'username': username, 'password': password}
r = s.get(login, verify=False, timeout=(30,60))
if r.status_code == 401 and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
s.post(login, data=login_params, stream=True, verify=False)
r = s.get(url, auth=(username, password), verify=False)
except requests.ConnectionError:
logger.info("Could not connect to {section}:{category} to perform auto-fork detection!".format
(section=section, category=inputCategory))
r = []
if r and r.ok:
if apikey:
optionalParameters = []
try:
optionalParameters = r.json()['data']['optionalParameters'].keys()
except:
optionalParameters = r.json()['data']['data']['optionalParameters'].keys()
for param in params:
if param not in optionalParameters:
rem_params.append(param)
else:
for param in params:
if 'name="{param}"'.format(param=param) not in r.text:
rem_params.append(param)
for param in rem_params:
params.pop(param)
for fork in sorted(iteritems(core.FORKS), reverse=False):
if params == fork[1]:
detected = True
break
if detected:
logger.info("{section}:{category} fork auto-detection successful ...".format
(section=section, category=inputCategory))
elif rem_params:
logger.info("{section}:{category} fork auto-detection found custom params {params}".format
(section=section, category=inputCategory, params=params))
fork = ['custom', params]
else:
logger.info("{section}:{category} fork auto-detection failed".format
(section=section, category=inputCategory))
fork = core.FORKS.items()[core.FORKS.keys().index(core.FORK_DEFAULT)]
logger.info("{section}:{category} fork set to {fork}".format
(section=section, category=inputCategory, fork=fork[0]))
return fork[0], fork[1]
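autoFork() returns a (fork_name, fork_params) pair; a hypothetical call (the section and category names follow the config layout used elsewhere in this codebase):

fork, fork_params = autoFork('SickBeard', 'tv')
logger.info('Using fork {0} with params {1}'.format(fork, fork_params))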


@@ -1,534 +0,0 @@
# coding=utf-8
from six import iteritems
import os
import shutil
import copy
import core
from configobj import *
from core import logger
from itertools import chain
class Section(configobj.Section, object):
def isenabled(section):
# Check whether this section is enabled; returns the section itself when a leaf section is enabled, otherwise a copy pruned to only the enabled subsections.
if not section.sections:
try:
value = list(ConfigObj.find_key(section, 'enabled'))[0]
except:
value = 0
if int(value) == 1:
return section
else:
to_return = copy.deepcopy(section)
for section_name, subsections in to_return.items():
for subsection in subsections:
try:
value = list(ConfigObj.find_key(subsections, 'enabled'))[0]
except:
value = 0
if int(value) != 1:
del to_return[section_name][subsection]
# clean out empty sections and subsections
for key in [k for (k, v) in to_return.items() if not v]:
del to_return[key]
return to_return
def findsection(section, key):
to_return = copy.deepcopy(section)
for subsection in to_return:
try:
value = list(ConfigObj.find_key(to_return[subsection], key))[0]
except:
value = None
if not value:
del to_return[subsection]
else:
for category in to_return[subsection]:
if category != key:
del to_return[subsection][category]
# clean out empty sections and subsections
for key in [k for (k, v) in to_return.items() if not v]:
del to_return[key]
return to_return
def __getitem__(self, key):
if key in self.keys():
return dict.__getitem__(self, key)
to_return = copy.deepcopy(self)
for section, subsections in to_return.items():
if section in key:
continue
if isinstance(subsections, Section) and subsections.sections:
for subsection, options in subsections.items():
if subsection in key:
continue
if key in options:
return options[key]
del subsections[subsection]
else:
if section not in key:
del to_return[section]
# clean out empty sections and subsections
for key in [k for (k, v) in to_return.items() if not v]:
del to_return[key]
return to_return
class ConfigObj(configobj.ConfigObj, Section):
def __init__(self, *args, **kw):
if len(args) == 0:
args = (core.CONFIG_FILE,)
super(configobj.ConfigObj, self).__init__(*args, **kw)
self.interpolation = False
@staticmethod
def find_key(node, kv):
if isinstance(node, list):
for i in node:
for x in ConfigObj.find_key(i, kv):
yield x
elif isinstance(node, dict):
if kv in node:
yield node[kv]
for j in node.values():
for x in ConfigObj.find_key(j, kv):
yield x
@staticmethod
def migrate():
global CFG_NEW, CFG_OLD
CFG_NEW = None
CFG_OLD = None
try:
# check for autoProcessMedia.cfg and create if it does not exist
if not os.path.isfile(core.CONFIG_FILE):
shutil.copyfile(core.CONFIG_SPEC_FILE, core.CONFIG_FILE)
CFG_OLD = config(core.CONFIG_FILE)
except Exception as error:
logger.debug("Error {msg} when copying to .cfg".format(msg=error))
try:
# check for autoProcessMedia.cfg.spec and create if it does not exist
if not os.path.isfile(core.CONFIG_SPEC_FILE):
shutil.copyfile(core.CONFIG_FILE, core.CONFIG_SPEC_FILE)
CFG_NEW = config(core.CONFIG_SPEC_FILE)
except Exception as error:
logger.debug("Error {msg} when copying to .spec".format(msg=error))
# check for autoProcessMedia.cfg and autoProcessMedia.cfg.spec and if they don't exist return and fail
if CFG_NEW is None or CFG_OLD is None:
return False
subsections = {}
# gather all new-style and old-style sub-sections
for newsection, newitems in CFG_NEW.items():
if CFG_NEW[newsection].sections:
subsections.update({newsection: CFG_NEW[newsection].sections})
for section, items in CFG_OLD.items():
if CFG_OLD[section].sections:
subsections.update({section: CFG_OLD[section].sections})
for option, value in CFG_OLD[section].items():
if option in ["category", "cpsCategory", "sbCategory", "hpCategory", "mlCategory", "gzCategory", "raCategory", "ndCategory"]:
if not isinstance(value, list):
value = [value]
# add subsection
subsections.update({section: value})
CFG_OLD[section].pop(option)
continue
def cleanup_values(values, section):
for option, value in iteritems(values):
if section in ['CouchPotato']:
if option == 'outputDirectory':  # option is a string key, not a list
CFG_NEW['Torrent'][option] = os.path.split(os.path.normpath(value))[0]
values.pop(option)
if section in ['CouchPotato', 'HeadPhones', 'Gamez', 'Mylar']:
if option in ['username', 'password']:
values.pop(option)
if section in ["SickBeard", "Mylar"]:
if option == "wait_for": # remove old format
values.pop(option)
if section in ["SickBeard", "NzbDrone"]:
if option == "failed_fork": # change this old format
values['failed'] = 'auto'
values.pop(option)
if option == "outputDirectory": # move this to new location format
CFG_NEW['Torrent'][option] = os.path.split(os.path.normpath(value))[0]
values.pop(option)
if section in ["Torrent"]:
if option in ["compressedExtensions", "mediaExtensions", "metaExtensions", "minSampleSize"]:
CFG_NEW['Extensions'][option] = value
values.pop(option)
if option == "useLink": # Sym links supported now as well.
if value in ['1', 1]:
value = 'hard'
elif value in ['0', 0]:
value = 'no'
values[option] = value
if option == "forceClean":
CFG_NEW['General']['force_clean'] = value
values.pop(option)
if section in ["Transcoder"]:
if option in ["niceness"]:
CFG_NEW['Posix'][option] = value
values.pop(option)
if option == "remote_path":
if value and value not in ['0', '1', 0, 1]:
value = 1
elif not value:
value = 0
values[option] = value
# remove any options that we no longer need so they don't migrate into our new config
if not list(ConfigObj.find_key(CFG_NEW, option)):
try:
values.pop(option)
except:
pass
return values
def process_section(section, subsections=None):
if subsections:
for subsection in subsections:
if subsection in CFG_OLD.sections:
values = cleanup_values(CFG_OLD[subsection], section)
if subsection not in CFG_NEW[section].sections:
CFG_NEW[section][subsection] = {}
for option, value in values.items():
CFG_NEW[section][subsection][option] = value
elif subsection in CFG_OLD[section].sections:
values = cleanup_values(CFG_OLD[section][subsection], section)
if subsection not in CFG_NEW[section].sections:
CFG_NEW[section][subsection] = {}
for option, value in values.items():
CFG_NEW[section][subsection][option] = value
else:
values = cleanup_values(CFG_OLD[section], section)
if section not in CFG_NEW.sections:
CFG_NEW[section] = {}
for option, value in values.items():
CFG_NEW[section][option] = value
# convert old-style categories to new-style sub-sections
for section in CFG_OLD.keys():
subsection = None
if section in list(chain.from_iterable(subsections.values())):
subsection = section
section = ''.join([k for k, v in iteritems(subsections) if subsection in v])
process_section(section, subsection)
elif section in subsections.keys():
subsection = subsections[section]
process_section(section, subsection)
elif section in CFG_OLD.keys():
process_section(section, subsection)
# create a backup of our old config
CFG_OLD.filename = "{config}.old".format(config=core.CONFIG_FILE)
CFG_OLD.write()
# write our new config to autoProcessMedia.cfg
CFG_NEW.filename = core.CONFIG_FILE
CFG_NEW.write()
return True
@staticmethod
def addnzbget():
# load configs into memory
CFG_NEW = config()
try:
if 'NZBPO_NDCATEGORY' in os.environ and 'NZBPO_SBCATEGORY' in os.environ:
if os.environ['NZBPO_NDCATEGORY'] == os.environ['NZBPO_SBCATEGORY']:
logger.warning("{x} category is set for SickBeard and Sonarr. "
"Please check your config in NZBGet".format
(x=os.environ['NZBPO_NDCATEGORY']))
if 'NZBPO_RACATEGORY' in os.environ and 'NZBPO_CPSCATEGORY' in os.environ:
if os.environ['NZBPO_RACATEGORY'] == os.environ['NZBPO_CPSCATEGORY']:
logger.warning("{x} category is set for CouchPotato and Radarr. "
"Please check your config in NZBGet".format
(x=os.environ['NZBPO_RACATEGORY']))
if 'NZBPO_LICATEGORY' in os.environ and 'NZBPO_HPCATEGORY' in os.environ:
if os.environ['NZBPO_LICATEGORY'] == os.environ['NZBPO_HPCATEGORY']:
logger.warning("{x} category is set for HeadPhones and Lidarr. "
"Please check your config in NZBGet".format
(x=os.environ['NZBPO_LICATEGORY']))
section = "Nzb"
key = 'NZBOP_DESTDIR'
if key in os.environ:
option = 'default_downloadDirectory'
value = os.environ[key]
CFG_NEW[section][option] = value
section = "General"
envKeys = ['AUTO_UPDATE', 'CHECK_MEDIA', 'SAFE_MODE', 'NO_EXTRACT_FAILED']
cfgKeys = ['auto_update', 'check_media', 'safe_mode', 'no_extract_failed']
for index in range(len(envKeys)):
key = 'NZBPO_{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
CFG_NEW[section][option] = value
section = "Network"
envKeys = ['MOUNTPOINTS']
cfgKeys = ['mount_points']
for index in range(len(envKeys)):
key = 'NZBPO_{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
CFG_NEW[section][option] = value
section = "CouchPotato"
envCatKey = 'NZBPO_CPSCATEGORY'
envKeys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'METHOD', 'DELETE_FAILED', 'REMOTE_PATH',
'WAIT_FOR', 'WATCH_DIR', 'OMDBAPIKEY']
cfgKeys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'method', 'delete_failed', 'remote_path',
'wait_for', 'watch_dir', 'omdbapikey']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_CPS{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
if os.environ[envCatKey] in CFG_NEW['Radarr'].sections:
CFG_NEW['Radarr'][envCatKey]['enabled'] = 0
section = "SickBeard"
envCatKey = 'NZBPO_SBCATEGORY'
envKeys = ['ENABLED', 'HOST', 'PORT', 'APIKEY', 'USERNAME', 'PASSWORD', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK',
'DELETE_FAILED', 'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'REMOTE_PATH', 'PROCESS_METHOD']
cfgKeys = ['enabled', 'host', 'port', 'apikey', 'username', 'password', 'ssl', 'web_root', 'watch_dir', 'fork',
'delete_failed', 'Torrent_NoLink', 'nzbExtractionBy', 'remote_path', 'process_method']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_SB{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
if os.environ[envCatKey] in CFG_NEW['NzbDrone'].sections:
CFG_NEW['NzbDrone'][envCatKey]['enabled'] = 0
section = "HeadPhones"
envCatKey = 'NZBPO_HPCATEGORY'
envKeys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'WAIT_FOR', 'WATCH_DIR', 'REMOTE_PATH', 'DELETE_FAILED']
cfgKeys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'wait_for', 'watch_dir', 'remote_path', 'delete_failed']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_HP{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
if os.environ[envCatKey] in CFG_NEW['Lidarr'].sections:
CFG_NEW['Lidarr'][envCatKey]['enabled'] = 0
section = "Mylar"
envCatKey = 'NZBPO_MYCATEGORY'
envKeys = ['ENABLED', 'HOST', 'PORT', 'USERNAME', 'PASSWORD', 'APIKEY', 'SSL', 'WEB_ROOT', 'WATCH_DIR',
'REMOTE_PATH']
cfgKeys = ['enabled', 'host', 'port', 'username', 'password', 'apikey', 'ssl', 'web_root', 'watch_dir',
'remote_path']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_MY{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
section = "Gamez"
envCatKey = 'NZBPO_GZCATEGORY'
envKeys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'LIBRARY', 'REMOTE_PATH']
cfgKeys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'watch_dir', 'library', 'remote_path']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_GZ{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
section = "NzbDrone"
envCatKey = 'NZBPO_NDCATEGORY'
envKeys = ['ENABLED', 'HOST', 'APIKEY', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED',
'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'WAIT_FOR', 'DELETE_FAILED', 'REMOTE_PATH', 'IMPORTMODE']
#new cfgKey added for importMode
cfgKeys = ['enabled', 'host', 'apikey', 'port', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed',
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'delete_failed', 'remote_path','importMode']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_ND{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
if os.environ[envCatKey] in CFG_NEW['SickBeard'].sections:
CFG_NEW['SickBeard'][envCatKey]['enabled'] = 0
section = "Radarr"
envCatKey = 'NZBPO_RACATEGORY'
envKeys = ['ENABLED', 'HOST', 'APIKEY', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED',
'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'WAIT_FOR', 'DELETE_FAILED', 'REMOTE_PATH', 'OMDBAPIKEY', 'IMPORTMODE']
#new cfgKey added for importMode
cfgKeys = ['enabled', 'host', 'apikey', 'port', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed',
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'delete_failed', 'remote_path', 'omdbapikey','importMode']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_RA{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
if os.environ[envCatKey] in CFG_NEW['CouchPotato'].sections:
CFG_NEW['CouchPotato'][envCatKey]['enabled'] = 0
section = "Lidarr"
envCatKey = 'NZBPO_LICATEGORY'
envKeys = ['ENABLED', 'HOST', 'APIKEY', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED',
'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'WAIT_FOR', 'DELETE_FAILED', 'REMOTE_PATH']
cfgKeys = ['enabled', 'host', 'apikey', 'port', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed',
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'delete_failed', 'remote_path']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_LI{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
if os.environ[envCatKey] in CFG_NEW['HeadPhones'].sections:
CFG_NEW['HeadPhones'][envCatKey]['enabled'] = 0
section = "Extensions"
envKeys = ['COMPRESSEDEXTENSIONS', 'MEDIAEXTENSIONS', 'METAEXTENSIONS']
cfgKeys = ['compressedExtensions', 'mediaExtensions', 'metaExtensions']
for index in range(len(envKeys)):
key = 'NZBPO_{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
CFG_NEW[section][option] = value
section = "Posix"
envKeys = ['NICENESS', 'IONICE_CLASS', 'IONICE_CLASSDATA']
cfgKeys = ['niceness', 'ionice_class', 'ionice_classdata']
for index in range(len(envKeys)):
key = 'NZBPO_{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
CFG_NEW[section][option] = value
section = "Transcoder"
envKeys = ['TRANSCODE', 'DUPLICATE', 'IGNOREEXTENSIONS', 'OUTPUTFASTSTART', 'OUTPUTVIDEOPATH',
'PROCESSOUTPUT', 'AUDIOLANGUAGE', 'ALLAUDIOLANGUAGES', 'SUBLANGUAGES',
'ALLSUBLANGUAGES', 'EMBEDSUBS', 'BURNINSUBTITLE', 'EXTRACTSUBS', 'EXTERNALSUBDIR',
'OUTPUTDEFAULT', 'OUTPUTVIDEOEXTENSION', 'OUTPUTVIDEOCODEC', 'VIDEOCODECALLOW',
'OUTPUTVIDEOPRESET', 'OUTPUTVIDEOFRAMERATE', 'OUTPUTVIDEOBITRATE', 'OUTPUTAUDIOCODEC',
'AUDIOCODECALLOW', 'OUTPUTAUDIOBITRATE', 'OUTPUTQUALITYPERCENT', 'GETSUBS',
'OUTPUTAUDIOTRACK2CODEC', 'AUDIOCODEC2ALLOW', 'OUTPUTAUDIOTRACK2BITRATE',
'OUTPUTAUDIOOTHERCODEC', 'AUDIOOTHERCODECALLOW', 'OUTPUTAUDIOOTHERBITRATE',
'OUTPUTSUBTITLECODEC', 'OUTPUTAUDIOCHANNELS', 'OUTPUTAUDIOTRACK2CHANNELS',
'OUTPUTAUDIOOTHERCHANNELS','OUTPUTVIDEORESOLUTION']
cfgKeys = ['transcode', 'duplicate', 'ignoreExtensions', 'outputFastStart', 'outputVideoPath',
'processOutput', 'audioLanguage', 'allAudioLanguages', 'subLanguages',
'allSubLanguages', 'embedSubs', 'burnInSubtitle', 'extractSubs', 'externalSubDir',
'outputDefault', 'outputVideoExtension', 'outputVideoCodec', 'VideoCodecAllow',
'outputVideoPreset', 'outputVideoFramerate', 'outputVideoBitrate', 'outputAudioCodec',
'AudioCodecAllow', 'outputAudioBitrate', 'outputQualityPercent', 'getSubs',
'outputAudioTrack2Codec', 'AudioCodec2Allow', 'outputAudioTrack2Bitrate',
'outputAudioOtherCodec', 'AudioOtherCodecAllow', 'outputAudioOtherBitrate',
'outputSubtitleCodec', 'outputAudioChannels', 'outputAudioTrack2Channels',
'outputAudioOtherChannels', 'outputVideoResolution']
for index in range(len(envKeys)):
key = 'NZBPO_{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
CFG_NEW[section][option] = value
section = "WakeOnLan"
envKeys = ['WAKE', 'HOST', 'PORT', 'MAC']
cfgKeys = ['wake', 'host', 'port', 'mac']
for index in range(len(envKeys)):
key = 'NZBPO_WOL{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
CFG_NEW[section][option] = value
section = "UserScript"
envCatKey = 'NZBPO_USCATEGORY'
envKeys = ['USER_SCRIPT_MEDIAEXTENSIONS', 'USER_SCRIPT_PATH', 'USER_SCRIPT_PARAM', 'USER_SCRIPT_RUNONCE',
'USER_SCRIPT_SUCCESSCODES', 'USER_SCRIPT_CLEAN', 'USDELAY', 'USREMOTE_PATH']
cfgKeys = ['user_script_mediaExtensions', 'user_script_path', 'user_script_param', 'user_script_runOnce',
'user_script_successCodes', 'user_script_clean', 'delay', 'remote_path']
if envCatKey in os.environ:
for index in range(len(envKeys)):
key = 'NZBPO_{index}'.format(index=envKeys[index])
if key in os.environ:
option = cfgKeys[index]
value = os.environ[key]
if os.environ[envCatKey] not in CFG_NEW[section].sections:
CFG_NEW[section][os.environ[envCatKey]] = {}
CFG_NEW[section][os.environ[envCatKey]][option] = value
CFG_NEW[section][os.environ[envCatKey]]['enabled'] = 1
except Exception as error:
logger.debug("Error {msg} when applying NZBGet config".format(msg=error))
try:
# write our new config to autoProcessMedia.cfg
CFG_NEW.filename = core.CONFIG_FILE
CFG_NEW.write()
except Exception as error:
logger.debug("Error {msg} when writing changes to .cfg".format(msg=error))
return CFG_NEW
configobj.Section = Section
configobj.ConfigObj = ConfigObj
config = ConfigObj
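A brief, hypothetical sketch of how this wrapper is used (the section and option names follow the shipped autoProcessMedia.cfg layout; values are illustrative):

cfg = config()  # reads core.CONFIG_FILE by default

# find_key() walks nested sections, yielding every value stored under a key.
enabled_flags = list(ConfigObj.find_key(cfg, 'enabled'))

# isenabled() returns a copy of a section pruned to its enabled subsections.
active_tv = cfg['SickBeard'].isenabled()

# migrate() rewrites an old-style config in place (backing it up as
# autoProcessMedia.cfg.old); addnzbget() folds NZBPO_* environment variables
# from NZBGet into the new config.
config.migrate()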


@@ -1,284 +0,0 @@
# coding=utf-8
from __future__ import print_function, with_statement
import re
import sqlite3
import time
import core
from core import logger
def dbFilename(filename="nzbtomedia.db", suffix=None):
"""
@param filename: The sqlite database filename to use. If not specified,
will be made to be nzbtomedia.db
@param suffix: The suffix to append to the filename. A '.' will be added
automatically, i.e. suffix='v0' will make dbfile.db.v0
@return: the correct location of the database file.
"""
if suffix:
filename = "{0}.{1}".format(filename, suffix)
return core.os.path.join(core.PROGRAM_DIR, filename)
class DBConnection(object):
def __init__(self, filename="nzbtomedia.db", suffix=None, row_type=None):
self.filename = filename
self.connection = sqlite3.connect(dbFilename(filename), 20)
if row_type == "dict":
self.connection.row_factory = self._dict_factory
else:
self.connection.row_factory = sqlite3.Row
def checkDBVersion(self):
result = None
try:
result = self.select("SELECT db_version FROM db_version")
except sqlite3.OperationalError as e:
if "no such table: db_version" in e.args[0]:
return 0
if result:
return int(result[0]["db_version"])
else:
return 0
def fetch(self, query, args=None):
if query is None:
return
sqlResult = None
attempt = 0
while attempt < 5:
try:
if args is None:
logger.log("{name}: {query}".format(name=self.filename, query=query), logger.DB)
cursor = self.connection.cursor()
cursor.execute(query)
sqlResult = cursor.fetchone()[0]
else:
logger.log("{name}: {query} with args {args}".format
(name=self.filename, query=query, args=args), logger.DB)
cursor = self.connection.cursor()
cursor.execute(query, args)
sqlResult = cursor.fetchone()[0]
# get out of the connection attempt loop since we were successful
break
except sqlite3.OperationalError as error:
if "unable to open database file" in error.args[0] or "database is locked" in error.args[0]:
logger.log(u"DB error: {msg}".format(msg=error), logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log(u"DB error: {msg}".format(msg=error), logger.ERROR)
raise
except sqlite3.DatabaseError as error:
logger.log(u"Fatal error executing query: {msg}".format(msg=error), logger.ERROR)
raise
return sqlResult
def mass_action(self, querylist, logTransaction=False):
if querylist is None:
return
sqlResult = []
attempt = 0
while attempt < 5:
try:
for qu in querylist:
if len(qu) == 1:
if logTransaction:
logger.log(qu[0], logger.DEBUG)
sqlResult.append(self.connection.execute(qu[0]))
elif len(qu) > 1:
if logTransaction:
logger.log(u"{query} with args {args}".format(query=qu[0], args=qu[1]), logger.DEBUG)
sqlResult.append(self.connection.execute(qu[0], qu[1]))
self.connection.commit()
logger.log(u"Transaction with {x} query's executed".format(x=len(querylist)), logger.DEBUG)
return sqlResult
except sqlite3.OperationalError as error:
sqlResult = []
if self.connection:
self.connection.rollback()
if "unable to open database file" in error.args[0] or "database is locked" in error.args[0]:
logger.log(u"DB error: {msg}".format(msg=error), logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log(u"DB error: {msg}".format(msg=error), logger.ERROR)
raise
except sqlite3.DatabaseError as error:
if self.connection:
self.connection.rollback()
logger.log(u"Fatal error executing query: {msg}".format(msg=error), logger.ERROR)
raise
return sqlResult
def action(self, query, args=None):
if query is None:
return
sqlResult = None
attempt = 0
while attempt < 5:
try:
if args is None:
logger.log(u"{name}: {query}".format(name=self.filename, query=query), logger.DB)
sqlResult = self.connection.execute(query)
else:
logger.log(u"{name}: {query} with args {args}".format
(name=self.filename, query=query, args=args), logger.DB)
sqlResult = self.connection.execute(query, args)
self.connection.commit()
# get out of the connection attempt loop since we were successful
break
except sqlite3.OperationalError as error:
if "unable to open database file" in error.args[0] or "database is locked" in error.args[0]:
logger.log(u"DB error: {msg}".format(msg=error), logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log(u"DB error: {msg}".format(msg=error), logger.ERROR)
raise
except sqlite3.DatabaseError as error:
logger.log(u"Fatal error executing query: {msg}".format(msg=error), logger.ERROR)
raise
return sqlResult
def select(self, query, args=None):
sqlResults = self.action(query, args).fetchall()
if sqlResults is None:
return []
return sqlResults
def upsert(self, tableName, valueDict, keyDict):
changesBefore = self.connection.total_changes
genParams = lambda myDict: ["{key} = ?".format(key=k) for k in myDict.keys()]
self.action(
"UPDATE {table} "
"SET {params} "
"WHERE {conditions}".format(
table=tableName,
params=", ".join(genParams(valueDict)),
conditions=" AND ".join(genParams(keyDict))),
valueDict.values() + keyDict.values()
)
if self.connection.total_changes == changesBefore:
self.action(
"INSERT OR IGNORE INTO {table} ({columns}) "
"VALUES ({values})".format(
table=tableName,
columns=", ".join(valueDict.keys() + keyDict.keys()),
values=", ".join(["?"] * len(valueDict.keys() + keyDict.keys()))
)
, valueDict.values() + keyDict.values()
)
def tableInfo(self, tableName):
# FIXME ? binding is not supported here, but I cannot find a way to escape a string manually
cursor = self.connection.execute("PRAGMA table_info({0})".format(tableName))
columns = {}
for column in cursor:
columns[column['name']] = {'type': column['type']}
return columns
# http://stackoverflow.com/questions/3300464/how-can-i-get-dict-from-sqlite-query
def _dict_factory(self, cursor, row):
d = {}
for idx, col in enumerate(cursor.description):
d[col[0]] = row[idx]
return d
def sanityCheckDatabase(connection, sanity_check):
sanity_check(connection).check()
class DBSanityCheck(object):
def __init__(self, connection):
self.connection = connection
def check(self):
pass
# ===============
# = Upgrade API =
# ===============
def upgradeDatabase(connection, schema):
logger.log(u"Checking database structure...", logger.MESSAGE)
_processUpgrade(connection, schema)
def prettyName(class_name):
return ' '.join([x.group() for x in re.finditer("([A-Z])([a-z0-9]+)", class_name)])
def _processUpgrade(connection, upgradeClass):
instance = upgradeClass(connection)
logger.log(u"Checking {name} database upgrade".format
(name=prettyName(upgradeClass.__name__)), logger.DEBUG)
if not instance.test():
logger.log(u"Database upgrade required: {name}".format
(name=prettyName(upgradeClass.__name__)), logger.MESSAGE)
try:
instance.execute()
except sqlite3.DatabaseError as error:
print(u"Error in {name}: {msg}".format
(name=upgradeClass.__name__, msg=error))
raise
logger.log(u"{name} upgrade completed".format
(name=upgradeClass.__name__), logger.DEBUG)
else:
logger.log(u"{name} upgrade not required".format
(name=upgradeClass.__name__), logger.DEBUG)
for upgradeSubClass in upgradeClass.__subclasses__():
_processUpgrade(connection, upgradeSubClass)
# Base migration class. All future DB changes should be subclassed from this class
class SchemaUpgrade(object):
def __init__(self, connection):
self.connection = connection
def hasTable(self, tableName):
return len(self.connection.action("SELECT 1 FROM sqlite_master WHERE name = ?;", (tableName,)).fetchall()) > 0
def hasColumn(self, tableName, column):
return column in self.connection.tableInfo(tableName)
def addColumn(self, table, column, type="NUMERIC", default=0):
self.connection.action("ALTER TABLE {0} ADD {1} {2}".format(table, column, type))
self.connection.action("UPDATE {0} SET {1} = ?".format(table, column), (default,))
def checkDBVersion(self):
result = self.connection.select("SELECT db_version FROM db_version")
if result:
return int(result[-1]["db_version"])
else:
return 0
def incDBVersion(self):
new_version = self.checkDBVersion() + 1
self.connection.action("UPDATE db_version SET db_version = ?", [new_version])
return new_version


@@ -1,186 +0,0 @@
# coding=utf-8
import os
import re
import core
import shlex
import platform
import subprocess
from core import logger
from core.nzbToMediaUtil import listMediaFiles
reverse_list = [r"\.\d{2}e\d{2}s\.", r"\.[pi]0801\.", r"\.p027\.", r"\.[pi]675\.", r"\.[pi]084\.", r"\.p063\.",
r"\b[45]62[xh]\.", r"\.yarulb\.", r"\.vtd[hp]\.",
r"\.ld[.-]?bew\.", r"\.pir.?(dov|dvd|bew|db|rb)\.", r"\brdvd\.", r"\.vts\.", r"\.reneercs\.",
r"\.dcv\.", r"\b(pir|mac)dh\b", r"\.reporp\.", r"\.kcaper\.",
r"\.lanretni\.", r"\b3ca\b", r"\.cstn\."]
reverse_pattern = re.compile('|'.join(reverse_list), flags=re.IGNORECASE)
season_pattern = re.compile(r"(.*\.\d{2}e\d{2}s\.)(.*)", flags=re.IGNORECASE)
word_pattern = re.compile(r"([^A-Z0-9]*[A-Z0-9]+)")
media_list = [r"\.s\d{2}e\d{2}\.", r"\.1080[pi]\.", r"\.720p\.", r"\.576[pi]", r"\.480[pi]\.", r"\.360p\.",
r"\.[xh]26[45]\b", r"\.bluray\.", r"\.[hp]dtv\.",
r"\.web[.-]?dl\.", r"\.(vod|dvd|web|bd|br).?rip\.", r"\.dvdr\b", r"\.stv\.", r"\.screener\.", r"\.vcd\.",
r"\bhd(cam|rip)\b", r"\.proper\.", r"\.repack\.",
r"\.internal\.", r"\bac3\b", r"\.ntsc\.", r"\.pal\.", r"\.secam\.", r"\bdivx\b", r"\bxvid\b"]
media_pattern = re.compile('|'.join(media_list), flags=re.IGNORECASE)
garbage_name = re.compile(r"^[a-zA-Z0-9]*$")
char_replace = [[r"(\w)1\.(\w)", r"\1i\2"]
]
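# Illustrative (hypothetical) matches for the patterns above:
#   reverse_pattern:  "showname.10e20s.720p"  -- ".10e20s." is ".s02e01." reversed,
#                     flagging a file whose name was written backwards.
#   media_pattern:    "Some.Show.S02E01.720p.HDTV.x264"  -- normal release naming.
#   garbage_name:     "a7f3k9"  -- a bare alphanumeric blob with no release tokens.
#   char_replace:     "f1.lm" -> "film"  (restores an 'i' mangled into '1.').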
def process_all_exceptions(name, dirname):
par2(dirname)
rename_script(dirname)
for filename in listMediaFiles(dirname):
newfilename = None
parentDir = os.path.dirname(filename)
head, fileExtension = os.path.splitext(os.path.basename(filename))
if reverse_pattern.search(head) is not None:
exception = reverse_filename
elif garbage_name.search(head) is not None:
exception = replace_filename
else:
exception = None
newfilename = filename
if not newfilename:
newfilename = exception(filename, parentDir, name)
if core.GROUPS:
newfilename = strip_groups(newfilename)
if newfilename != filename:
rename_file(filename, newfilename)
def strip_groups(filename):
if not core.GROUPS:
return filename
dirname, file = os.path.split(filename)
head, fileExtension = os.path.splitext(file)
newname = head.replace(' ', '.')
for group in core.GROUPS:
newname = newname.replace(group, '')
newname = newname.replace('[]', '')
newfile = newname + fileExtension
newfilePath = os.path.join(dirname, newfile)
return newfilePath
def rename_file(filename, newfilePath):
if os.path.isfile(newfilePath):
newfilePath = os.path.splitext(newfilePath)[0] + ".NTM" + os.path.splitext(newfilePath)[1]
logger.debug("Replacing file name {old} with download name {new}".format
(old=filename, new=newfilePath), "EXCEPTION")
try:
os.rename(filename, newfilePath)
except Exception as error:
logger.error("Unable to rename file due to: {error}".format(error=error), "EXCEPTION")
def replace_filename(filename, dirname, name):
head, fileExtension = os.path.splitext(os.path.basename(filename))
if media_pattern.search(os.path.basename(dirname).replace(' ', '.')) is not None:
newname = os.path.basename(dirname).replace(' ', '.')
logger.debug("Replacing file name {old} with directory name {new}".format(old=head, new=newname), "EXCEPTION")
elif media_pattern.search(name.replace(' ', '.').lower()) is not None:
newname = name.replace(' ', '.')
logger.debug("Replacing file name {old} with download name {new}".format
(old=head, new=newname), "EXCEPTION")
else:
logger.warning("No name replacement determined for {name}".format(name=head), "EXCEPTION")
newname = name
newfile = newname + fileExtension
newfilePath = os.path.join(dirname, newfile)
return newfilePath
def reverse_filename(filename, dirname, name):
head, fileExtension = os.path.splitext(os.path.basename(filename))
na_parts = season_pattern.search(head)
if na_parts is not None:
word_p = word_pattern.findall(na_parts.group(2))
if word_p:
new_words = ""
for wp in word_p:
if wp[0] == ".":
new_words += "."
new_words += re.sub(r"\W", "", wp)
else:
new_words = na_parts.group(2)
for cr in char_replace:
new_words = re.sub(cr[0], cr[1], new_words)
newname = new_words[::-1] + na_parts.group(1)[::-1]
else:
newname = head[::-1].title()
newname = newname.replace(' ', '.')
logger.debug("Reversing filename {old} to {new}".format
(old=head, new=newname), "EXCEPTION")
newfile = newname + fileExtension
newfilePath = os.path.join(dirname, newfile)
return newfilePath
def rename_script(dirname):
rename_file = ""
for dir, dirs, files in os.walk(dirname):
for file in files:
if re.search('(rename\S*\.(sh|bat)$)', file, re.IGNORECASE):
rename_file = os.path.join(dir, file)
dirname = dir
break
if rename_file:
rename_lines = [line.strip() for line in open(rename_file)]
for line in rename_lines:
if re.search('^(mv|Move)', line, re.IGNORECASE):
cmd = shlex.split(line)[1:]
else:
continue
if len(cmd) == 2 and os.path.isfile(os.path.join(dirname, cmd[0])):
orig = os.path.join(dirname, cmd[0])
dest = os.path.join(dirname, cmd[1].split('\\')[-1].split('/')[-1])
if os.path.isfile(dest):
continue
logger.debug("Renaming file {source} to {destination}".format
(source=orig, destination=dest), "EXCEPTION")
try:
os.rename(orig, dest)
except Exception as error:
logger.error("Unable to rename file due to: {error}".format(error=error), "EXCEPTION")
def par2(dirname):
newlist = []
sofar = 0
parfile = ""
objects = []
if os.path.exists(dirname):
objects = os.listdir(dirname)
for item in objects:
if item.endswith(".par2"):
size = os.path.getsize(os.path.join(dirname, item))
if size > sofar:
sofar = size
parfile = item
if core.PAR2CMD and parfile:
pwd = os.getcwd() # Get our Present Working Directory
os.chdir(dirname) # set directory to run par on.
if platform.system() == 'Windows':
bitbucket = open('NUL')
else:
bitbucket = open('/dev/null')
logger.info("Running par2 on file {0}.".format(parfile), "PAR2")
command = [core.PAR2CMD, 'r', parfile, "*"]
cmd = ""
for item in command:
cmd = "{cmd} {item}".format(cmd=cmd, item=item)
logger.debug("calling command:{0}".format(cmd), "PAR2")
try:
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
proc.communicate()
result = proc.returncode
except Exception:
logger.error("par2 file processing for {0} has failed".format(parfile), "PAR2")
result = 1  # Popen failed, so flag the par2 run as unsuccessful before the check below
if result == 0:
logger.info("par2 file processing succeeded", "PAR2")
os.chdir(pwd)
bitbucket.close()
# dict for custom groups
# we can add more to this list
# _customgroups = {'Q o Q': process_qoq, '-ECI': process_eci}

View file

@ -1,116 +0,0 @@
# coding=utf-8
import os
import core
from subprocess import Popen
from core.transcoder import transcoder
from core.nzbToMediaUtil import import_subs, listMediaFiles, rmDir
from core import logger
def external_script(outputDestination, torrentName, torrentLabel, settings):
final_result = 0 # start at 0.
num_files = 0
try:
core.USER_SCRIPT_MEDIAEXTENSIONS = settings["user_script_mediaExtensions"].lower()
if isinstance(core.USER_SCRIPT_MEDIAEXTENSIONS, str):
core.USER_SCRIPT_MEDIAEXTENSIONS = core.USER_SCRIPT_MEDIAEXTENSIONS.split(',')
except:
core.USER_SCRIPT_MEDIAEXTENSIONS = []
core.USER_SCRIPT = settings.get("user_script_path")
if not core.USER_SCRIPT or core.USER_SCRIPT == "None": # do nothing and return success.
return [0, ""]
try:
core.USER_SCRIPT_PARAM = settings["user_script_param"]
if isinstance(core.USER_SCRIPT_PARAM, str):
core.USER_SCRIPT_PARAM = core.USER_SCRIPT_PARAM.split(',')
except:
core.USER_SCRIPT_PARAM = []
try:
core.USER_SCRIPT_SUCCESSCODES = settings["user_script_successCodes"]
if isinstance(core.USER_SCRIPT_SUCCESSCODES, str):
core.USER_SCRIPT_SUCCESSCODES = core.USER_SCRIPT_SUCCESSCODES.split(',')
except:
core.USER_SCRIPT_SUCCESSCODES = ['0']  # default: exit code 0 is success; kept as a list so the membership test below works
core.USER_SCRIPT_CLEAN = int(settings.get("user_script_clean", 1))
core.USER_SCRIPT_RUNONCE = int(settings.get("user_script_runOnce", 1))
if core.CHECK_MEDIA:
for video in listMediaFiles(outputDestination, media=True, audio=False, meta=False, archives=False):
if transcoder.isVideoGood(video, 0):
import_subs(video)
else:
logger.info("Corrupt video file found {0}. Deleting.".format(video), "USERSCRIPT")
os.unlink(video)
for dirpath, dirnames, filenames in os.walk(outputDestination):
for file in filenames:
filePath = core.os.path.join(dirpath, file)
fileName, fileExtension = os.path.splitext(file)
if fileExtension in core.USER_SCRIPT_MEDIAEXTENSIONS or "all" in core.USER_SCRIPT_MEDIAEXTENSIONS:
num_files += 1
if core.USER_SCRIPT_RUNONCE == 1 and num_files > 1: # we have already run once, so just continue to get number of files.
continue
command = [core.USER_SCRIPT]
for param in core.USER_SCRIPT_PARAM:
if param == "FN":
command.append('{0}'.format(file))
continue
elif param == "FP":
command.append('{0}'.format(filePath))
continue
elif param == "TN":
command.append('{0}'.format(torrentName))
continue
elif param == "TL":
command.append('{0}'.format(torrentLabel))
continue
elif param == "DN":
if core.USER_SCRIPT_RUNONCE == 1:
command.append('{0}'.format(outputDestination))
else:
command.append('{0}'.format(dirpath))
continue
else:
command.append(param)
continue
cmd = ""
for item in command:
cmd = "{cmd} {item}".format(cmd=cmd, item=item)
logger.info("Running script {cmd} on file {path}.".format(cmd=cmd, path=filePath), "USERSCRIPT")
try:
p = Popen(command)
res = p.wait()
if str(res) in core.USER_SCRIPT_SUCCESSCODES: # Linux returns 0 for successful.
logger.info("UserScript {0} was successfull".format(command[0]))
result = 0
else:
logger.error("UserScript {0} has failed with return code: {1}".format(command[0], res), "USERSCRIPT")
logger.info(
"If the UserScript completed successfully you should add {0} to the user_script_successCodes".format(
res), "USERSCRIPT")
result = int(1)
except:
logger.error("UserScript {0} has failed".format(command[0]), "USERSCRIPT")
result = int(1)
final_result += result
num_files_new = 0
for dirpath, dirnames, filenames in os.walk(outputDestination):
for file in filenames:
fileName, fileExtension = os.path.splitext(file)
if fileExtension in core.USER_SCRIPT_MEDIAEXTENSIONS or core.USER_SCRIPT_MEDIAEXTENSIONS == "ALL":
num_files_new += 1
if core.USER_SCRIPT_CLEAN == int(1) and num_files_new == 0 and final_result == 0:
logger.info("All files have been processed. Cleaning outputDirectory {0}".format(outputDestination))
rmDir(outputDestination)
elif core.USER_SCRIPT_CLEAN == int(1) and num_files_new != 0:
logger.info("{0} files were processed, but {1} still remain. outputDirectory will not be cleaned.".format(
num_files, num_files_new))
return [final_result, '']
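To illustrate the parameter substitution loop above, here is a hedged sketch of the command built for one matching file when user_script_param is 'FP,TN'; the script path, file path, and torrent name are hypothetical:
params = ['FP', 'TN']  # hypothetical user_script_param, already split on ','
command = ['/scripts/convert.sh']  # hypothetical core.USER_SCRIPT
substitutions = {'FP': '/complete/dir/episode.mkv', 'TN': 'My.Torrent.Name'}
for param in params:
    command.append(substitutions.get(param, param))
print(command)  # ['/scripts/convert.sh', '/complete/dir/episode.mkv', 'My.Torrent.Name']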

File diff suppressed because it is too large

88
core/permissions.py Normal file
View file

@ -0,0 +1,88 @@
import os
import sys
import logging
log = logging.getLogger(__name__)
log.addHandler(logging.NullHandler())
WINDOWS = sys.platform == 'win32'
POSIX = not WINDOWS
try:
import pwd
import grp
except ImportError:
if POSIX:
raise
try:
from win32security import GetNamedSecurityInfo
from win32security import LookupAccountSid
from win32security import GROUP_SECURITY_INFORMATION
from win32security import OWNER_SECURITY_INFORMATION
from win32security import SE_FILE_OBJECT
except ImportError:
if WINDOWS:
raise
def mode(path):
"""Get permissions."""
stat_result = os.stat(path) # Get information from path
permissions_mask = 0o777 # Set mask for permissions info
# Get only the permissions part of st_mode as an integer
int_mode = stat_result.st_mode & permissions_mask
oct_mode = oct(int_mode) # Convert to octal representation
return oct_mode[2:] # Return mode but strip octal prefix
def nt_ownership(path):
"""Get the owner and group for a file or directory."""
def fully_qualified_name(sid):
"""Return a fully qualified account name."""
# Look up the account information for the given SID
# https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-lookupaccountsida
name, domain, _acct_type = LookupAccountSid(None, sid)
# Return account information formatted as DOMAIN\ACCOUNT_NAME
return '{}\\{}'.format(domain, name)
# Get the Windows security descriptor for the path
# https://learn.microsoft.com/en-us/windows/win32/api/aclapi/nf-aclapi-getnamedsecurityinfoa
security_descriptor = GetNamedSecurityInfo(
path, # Name of the item to query
SE_FILE_OBJECT, # Type of item to query (file or directory)
# Add OWNER and GROUP security information to result
OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION,
)
# Get the Security Identifier for the owner and group from the security descriptor
# https://learn.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-getsecuritydescriptorowner
# https://learn.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-getsecuritydescriptorgroup
owner_sid = security_descriptor.GetSecurityDescriptorOwner()
group_sid = security_descriptor.GetSecurityDescriptorGroup()
# Get the fully qualified account name (e.g. DOMAIN\ACCOUNT_NAME)
owner = fully_qualified_name(owner_sid)
group = fully_qualified_name(group_sid)
return owner, group
def posix_ownership(path):
"""Get the owner and group for a file or directory."""
# Get path information
stat_result = os.stat(path)
# Get account name from path stat result
owner = pwd.getpwuid(stat_result.st_uid).pw_name
group = grp.getgrgid(stat_result.st_gid).gr_name
return owner, group
# Select the ownership function appropriate for the platform
if WINDOWS:
ownership = nt_ownership
else:
ownership = posix_ownership
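A brief usage sketch for the helpers above; the path is arbitrary and the outputs are illustrative and platform-dependent:
print(mode('/tmp'))       # e.g. '777'
print(ownership('/tmp'))  # e.g. ('root', 'root') on POSIX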

View file

@ -0,0 +1,5 @@
from core.plugins.downloaders.nzb.configuration import configure_nzbs
from core.plugins.downloaders.torrent.configuration import (
configure_torrents,
configure_torrent_class,
)

View file

@ -0,0 +1,23 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import core
def configure_nzbs(config):
nzb_config = config['Nzb']
core.NZB_CLIENT_AGENT = nzb_config['clientAgent'] # sabnzbd
core.NZB_DEFAULT_DIRECTORY = nzb_config['default_downloadDirectory']
core.NZB_NO_MANUAL = int(nzb_config['no_manual'], 0)
configure_sabnzbd(nzb_config)
def configure_sabnzbd(config):
core.SABNZBD_HOST = config['sabnzbd_host']
core.SABNZBD_PORT = int(config['sabnzbd_port'] or 8080) # defaults to accommodate NzbGet
core.SABNZBD_APIKEY = config['sabnzbd_apikey']
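For reference, a hedged example of the ['Nzb'] config shape these functions expect; the key names come from the code above, the values are illustrative:
nzb_section = {
    'clientAgent': 'sabnzbd',
    'default_downloadDirectory': '',
    'no_manual': '0',
    'sabnzbd_host': 'localhost',
    'sabnzbd_port': '8080',
    'sabnzbd_apikey': 'changeme',
}
configure_nzbs({'Nzb': nzb_section})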

View file

@ -0,0 +1,77 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import requests
import core
from core import logger
def get_nzoid(input_name):
nzoid = None
slots = []
logger.debug('Searching for nzoid from SABnzbd ...')
if 'http' in core.SABNZBD_HOST:
base_url = '{0}:{1}/api'.format(core.SABNZBD_HOST, core.SABNZBD_PORT)
else:
base_url = 'http://{0}:{1}/api'.format(core.SABNZBD_HOST, core.SABNZBD_PORT)
url = base_url
params = {
'apikey': core.SABNZBD_APIKEY,
'mode': 'queue',
'output': 'json',
}
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 120))
except requests.ConnectionError:
logger.error('Unable to open URL')
return nzoid # failure
try:
result = r.json()
clean_name = os.path.splitext(os.path.split(input_name)[1])[0]
slots.extend([(slot['nzo_id'], slot['filename']) for slot in result['queue']['slots']])
except Exception:
logger.warning('Data from SABnzbd queue could not be parsed')
params['mode'] = 'history'
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 120))
except requests.ConnectionError:
logger.error('Unable to open URL')
return nzoid # failure
try:
result = r.json()
clean_name = os.path.splitext(os.path.split(input_name)[1])[0]
slots.extend([(slot['nzo_id'], slot['name']) for slot in result['history']['slots']])
except Exception:
logger.warning('Data from SABnzbd history could not be parsed')
try:
for nzo_id, name in slots:
if name in [input_name, clean_name]:
nzoid = nzo_id
logger.debug('Found nzoid: {0}'.format(nzoid))
break
except Exception:
logger.warning('Data from SABnzbd could not be parsed')
return nzoid
def report_nzb(failure_link, client_agent):
# Contact indexer site
logger.info('Sending failure notification to indexer site')
if client_agent == 'nzbget':
headers = {'User-Agent': 'NZBGet / nzbToMedia.py'}
elif client_agent == 'sabnzbd':
headers = {'User-Agent': 'SABnzbd / nzbToMedia.py'}
else:
return
try:
requests.post(failure_link, headers=headers, timeout=(30, 300))
except Exception as e:
logger.error('Unable to open URL {0} due to {1}'.format(failure_link, e))
return
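A usage sketch, assuming the SABNZBD_* globals have already been configured; the release name is hypothetical:
nzoid = get_nzoid('Some.Release.Name.nzb')
if nzoid is None:
    logger.debug('Release not found in the SABnzbd queue or history')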

View file

@ -0,0 +1,97 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import core
from core.plugins.downloaders.torrent.utils import create_torrent_class
def configure_torrents(config):
torrent_config = config['Torrent']
core.TORRENT_CLIENT_AGENT = torrent_config['clientAgent'] # utorrent | deluge | transmission | rtorrent | vuze | qbittorrent | synods | other
core.OUTPUT_DIRECTORY = torrent_config['outputDirectory'] # /abs/path/to/complete/
core.TORRENT_DEFAULT_DIRECTORY = torrent_config['default_downloadDirectory']
core.TORRENT_NO_MANUAL = int(torrent_config['no_manual'], 0)
configure_torrent_linking(torrent_config)
configure_flattening(torrent_config)
configure_torrent_deletion(torrent_config)
configure_torrent_categories(torrent_config)
configure_torrent_permissions(torrent_config)
configure_torrent_resuming(torrent_config)
configure_utorrent(torrent_config)
configure_transmission(torrent_config)
configure_deluge(torrent_config)
configure_qbittorrent(torrent_config)
configure_syno(torrent_config)
def configure_torrent_linking(config):
core.USE_LINK = config['useLink'] # no | hard | sym
def configure_flattening(config):
core.NOFLATTEN = (config['noFlatten'])
if isinstance(core.NOFLATTEN, str):
core.NOFLATTEN = core.NOFLATTEN.split(',')
def configure_torrent_categories(config):
core.CATEGORIES = (config['categories']) # music,music_videos,pictures,software
if isinstance(core.CATEGORIES, str):
core.CATEGORIES = core.CATEGORIES.split(',')
def configure_torrent_resuming(config):
core.TORRENT_RESUME_ON_FAILURE = int(config['resumeOnFailure'])
core.TORRENT_RESUME = int(config['resume'])
def configure_torrent_permissions(config):
core.TORRENT_CHMOD_DIRECTORY = int(str(config['chmodDirectory']), 8)
def configure_torrent_deletion(config):
core.DELETE_ORIGINAL = int(config['deleteOriginal'])
def configure_utorrent(config):
core.UTORRENT_WEB_UI = config['uTorrentWEBui'] # http://localhost:8090/gui/
core.UTORRENT_USER = config['uTorrentUSR'] # mysecretusr
core.UTORRENT_PASSWORD = config['uTorrentPWD'] # mysecretpwr
def configure_transmission(config):
core.TRANSMISSION_HOST = config['TransmissionHost'] # localhost
core.TRANSMISSION_PORT = int(config['TransmissionPort'])
core.TRANSMISSION_USER = config['TransmissionUSR'] # mysecretusr
core.TRANSMISSION_PASSWORD = config['TransmissionPWD'] # mysecretpwr
def configure_syno(config):
core.SYNO_HOST = config['synoHost'] # localhost
core.SYNO_PORT = int(config['synoPort'])
core.SYNO_USER = config['synoUSR'] # mysecretusr
core.SYNO_PASSWORD = config['synoPWD'] # mysecretpwr
def configure_deluge(config):
core.DELUGE_HOST = config['DelugeHost'] # localhost
core.DELUGE_PORT = int(config['DelugePort']) # 8084
core.DELUGE_USER = config['DelugeUSR'] # mysecretusr
core.DELUGE_PASSWORD = config['DelugePWD'] # mysecretpwr
def configure_qbittorrent(config):
core.QBITTORRENT_HOST = config['qBittorrentHost'] # localhost
core.QBITTORRENT_PORT = int(config['qBittorrentPort']) # 8080
core.QBITTORRENT_USER = config['qBittorrentUSR'] # mysecretusr
core.QBITTORRENT_PASSWORD = config['qBittorrentPWD'] # mysecretpwr
def configure_torrent_class():
# create torrent class
core.TORRENT_CLASS = create_torrent_class(core.TORRENT_CLIENT_AGENT)
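One detail worth calling out: configure_torrent_permissions parses chmodDirectory with base 8, so a config value of '775' becomes the octal mode 0o775 rather than the integer 775:
assert int('775', 8) == 0o775 == 509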

View file

@ -0,0 +1,28 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from deluge_client.client import DelugeRPCClient
import core
from core import logger
def configure_client():
agent = 'deluge'
host = core.DELUGE_HOST
port = core.DELUGE_PORT
user = core.DELUGE_USER
password = core.DELUGE_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
client = DelugeRPCClient(host, port, user, password)
try:
client.connect()
except Exception:
logger.error('Failed to connect to Deluge')
else:
return client

View file

@ -0,0 +1,31 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from qbittorrent import Client as qBittorrentClient
import core
from core import logger
def configure_client():
agent = 'qbittorrent'
host = core.QBITTORRENT_HOST
port = core.QBITTORRENT_PORT
user = core.QBITTORRENT_USER
password = core.QBITTORRENT_PASSWORD
logger.debug(
'Connecting to {0}: http://{1}:{2}'.format(agent, host, port),
)
client = qBittorrentClient('http://{0}:{1}/'.format(host, port))
try:
client.login(user, password)
except Exception:
logger.error('Failed to connect to qBittorrent')
else:
return client

View file

@ -0,0 +1,27 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from syno.downloadstation import DownloadStation
import core
from core import logger
def configure_client():
agent = 'synology'
host = core.SYNO_HOST
port = core.SYNO_PORT
user = core.SYNO_USER
password = core.SYNO_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
try:
client = DownloadStation(host, port, user, password)
except Exception:
logger.error('Failed to connect to synology')
else:
return client

View file

@ -0,0 +1,27 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from transmissionrpc.client import Client as TransmissionClient
import core
from core import logger
def configure_client():
agent = 'transmission'
host = core.TRANSMISSION_HOST
port = core.TRANSMISSION_PORT
user = core.TRANSMISSION_USER
password = core.TRANSMISSION_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
try:
client = TransmissionClient(host, port, user, password)
except Exception:
logger.error('Failed to connect to Transmission')
else:
return client

View file

@ -0,0 +1,94 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import time
import core
from core import logger
from .deluge import configure_client as deluge_client
from .qbittorrent import configure_client as qbittorrent_client
from .transmission import configure_client as transmission_client
from .utorrent import configure_client as utorrent_client
from .synology import configure_client as synology_client
torrent_clients = {
'deluge': deluge_client,
'qbittorrent': qbittorrent_client,
'transmission': transmission_client,
'utorrent': utorrent_client,
'synods': synology_client,
}
def create_torrent_class(client_agent):
if not core.APP_NAME == 'TorrentToMedia.py':
return # Skip loading Torrent for NZBs.
client = torrent_clients.get(client_agent)
if client:
return client()
def pause_torrent(client_agent, input_hash, input_id, input_name):
logger.debug('Stopping torrent {0} in {1} while processing'.format(input_name, client_agent))
try:
if client_agent == 'utorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.stop(input_hash)
if client_agent == 'transmission' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.stop_torrent(input_id)
if client_agent == 'synods' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.pause_task(input_id)
if client_agent == 'deluge' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.core.pause_torrent([input_id])
if client_agent == 'qbittorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.pause(input_hash)
time.sleep(5)
except Exception:
logger.warning('Failed to stop torrent {0} in {1}'.format(input_name, client_agent))
def resume_torrent(client_agent, input_hash, input_id, input_name):
if not core.TORRENT_RESUME == 1:
return
logger.debug('Starting torrent {0} in {1}'.format(input_name, client_agent))
try:
if client_agent == 'utorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.start(input_hash)
if client_agent == 'transmission' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.start_torrent(input_id)
if client_agent == 'synods' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.resume_task(input_id)
if client_agent == 'deluge' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.core.resume_torrent([input_id])
if client_agent == 'qbittorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.resume(input_hash)
time.sleep(5)
except Exception:
logger.warning('Failed to start torrent {0} in {1}'.format(input_name, client_agent))
def remove_torrent(client_agent, input_hash, input_id, input_name):
if core.DELETE_ORIGINAL == 1 or core.USE_LINK == 'move':
logger.debug('Deleting torrent {0} from {1}'.format(input_name, client_agent))
try:
if client_agent == 'utorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.removedata(input_hash)
core.TORRENT_CLASS.remove(input_hash)
if client_agent == 'transmission' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.remove_torrent(input_id, True)
if client_agent == 'synods' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.delete_task(input_id)
if client_agent == 'deluge' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.core.remove_torrent(input_id, True)
if client_agent == 'qbittorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.delete_permanently(input_hash)
time.sleep(5)
except Exception:
logger.warning('Failed to delete torrent {0} in {1}'.format(input_name, client_agent))
else:
resume_torrent(client_agent, input_hash, input_id, input_name)
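A usage sketch for the dispatch helpers above, assuming the script runs as TorrentToMedia.py (otherwise create_torrent_class short-circuits); the hash and name are hypothetical:
core.TORRENT_CLASS = create_torrent_class('qbittorrent')
pause_torrent('qbittorrent', 'abc123hash', '', 'Some.Release')
# ... post-processing happens here ...
resume_torrent('qbittorrent', 'abc123hash', '', 'Some.Release')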

View file

@ -0,0 +1,26 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from utorrent.client import UTorrentClient
import core
from core import logger
def configure_client():
agent = 'utorrent'
web_ui = core.UTORRENT_WEB_UI
user = core.UTORRENT_USER
password = core.UTORRENT_PASSWORD
logger.debug('Connecting to {0}: {1}'.format(agent, web_ui))
try:
client = UTorrentClient(web_ui, user, password)
except Exception:
logger.error('Failed to connect to uTorrent')
else:
return client

View file

@ -0,0 +1,5 @@
from core.plugins.downloaders.torrent.utils import (
pause_torrent,
remove_torrent,
resume_torrent,
)

53
core/plugins/plex.py Normal file
View file

@ -0,0 +1,53 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
import core
from core import logger
def configure_plex(config):
core.PLEX_SSL = int(config['Plex']['plex_ssl'])
core.PLEX_HOST = config['Plex']['plex_host']
core.PLEX_PORT = config['Plex']['plex_port']
core.PLEX_TOKEN = config['Plex']['plex_token']
plex_section = config['Plex']['plex_sections'] or []
if plex_section:
if isinstance(plex_section, list):
plex_section = ','.join(plex_section)  # fix in case this was imported as a list.
plex_section = [
tuple(item.split(','))
for item in plex_section.split('|')
]
core.PLEX_SECTION = plex_section
def plex_update(category):
if core.FAILED:
return
url = '{scheme}://{host}:{port}/library/sections/'.format(
scheme='https' if core.PLEX_SSL else 'http',
host=core.PLEX_HOST,
port=core.PLEX_PORT,
)
section = None
if not core.PLEX_SECTION:
return
logger.debug('Attempting to update Plex Library for category {0}.'.format(category), 'PLEX')
for item in core.PLEX_SECTION:
if item[0] == category:
section = item[1]
if section:
url = '{url}{section}/refresh?X-Plex-Token={token}'.format(url=url, section=section, token=core.PLEX_TOKEN)
requests.get(url, timeout=(60, 120), verify=False)
logger.debug('Plex Library has been refreshed.', 'PLEX')
else:
logger.debug('Could not identify section for plex update', 'PLEX')
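To make the plex_sections parse above concrete: each entry pairs a category with a section id using a comma, and entries are separated by pipes:
plex_section = 'movie,1|tv,2'
parsed = [tuple(item.split(',')) for item in plex_section.split('|')]
assert parsed == [('movie', '1'), ('tv', '2')]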

107
core/plugins/subtitles.py Normal file
View file

@ -0,0 +1,107 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from babelfish import Language
import subliminal
import core
from core import logger
import os
import re
for provider in subliminal.provider_manager.internal_extensions:
if provider not in [str(x) for x in subliminal.provider_manager.list_entry_points()]:
subliminal.provider_manager.register(str(provider))
def import_subs(filename):
if not core.GETSUBS:
return
try:
subliminal.region.configure('dogpile.cache.dbm', arguments={'filename': 'cachefile.dbm'})
except Exception:
pass
languages = set()
for item in core.SLANGUAGES:
try:
languages.add(Language(item))
except Exception:
pass
if not languages:
return
logger.info('Attempting to download subtitles for {0}'.format(filename), 'SUBTITLES')
try:
video = subliminal.scan_video(filename)
subtitles = subliminal.download_best_subtitles({video}, languages)
subliminal.save_subtitles(video, subtitles[video])
for subtitle in subtitles[video]:
subtitle_path = subliminal.subtitle.get_subtitle_path(video.name, subtitle.language)
os.chmod(subtitle_path, 0o644)
except Exception as e:
logger.error('Failed to download subtitles for {0} due to: {1}'.format(filename, e), 'SUBTITLES')
def rename_subs(path):
filepaths = []
sub_ext = ['.srt', '.sub', '.idx']
vidfiles = core.list_media_files(path, media=True, audio=False, meta=False, archives=False)
if not vidfiles or len(vidfiles) > 1: # If there is more than 1 video file, or no video files, we can't rename subs.
return
name = os.path.splitext(os.path.split(vidfiles[0])[1])[0]
for directory, _, filenames in os.walk(path):
for filename in filenames:
filepaths.extend([os.path.join(directory, filename)])
subfiles = [item for item in filepaths if os.path.splitext(item)[1] in sub_ext]
subfiles.sort()  # This should sort subtitle names by language (alpha) and number (where multiple)
renamed = []
for sub in subfiles:
subname, ext = os.path.splitext(os.path.basename(sub))
if name in subname: # The sub file name already includes the video name.
continue
words = re.findall('[a-zA-Z]+',str(subname)) # find whole words in string
# parse the words for language descriptors.
lan = None
for word in words:
try:
if len(word) == 2:
lan = Language.fromalpha2(word.lower())
elif len(word) == 3:
lan = Language(word.lower())
elif len(word) > 3:
lan = Language.fromname(word.lower())
if lan:
break
except Exception:  # if we didn't find a language, try the next word.
continue
# rename the sub file as name.lan.ext
if not lan:
# we could call ffprobe here to parse the sub file and detect its language when unknown.
new_sub_name = name
else:
new_sub_name = '{name}.{lan}'.format(name=name, lan=str(lan))
new_sub = os.path.join(directory, new_sub_name) # full path and name less ext
if '{new_sub}{ext}'.format(new_sub=new_sub, ext=ext) in renamed: # If duplicate names, add unique number before ext.
for i in range(1,len(renamed)+1):
if '{new_sub}.{i}{ext}'.format(new_sub=new_sub, i=i, ext=ext) in renamed:
continue
new_sub = '{new_sub}.{i}'.format(new_sub=new_sub, i=i)
break
new_sub = '{new_sub}{ext}'.format(new_sub=new_sub, ext=ext) # add extension now
if os.path.isfile(new_sub): # Don't copy over existing - final check.
logger.debug('Unable to rename sub file {old} as destination {new} already exists'.format(old=sub, new=new_sub))
continue
logger.debug('Renaming sub file from {old} to {new}'.format
(old=sub, new=new_sub))
renamed.append(new_sub)
try:
os.rename(sub, new_sub)
except Exception as error:
logger.error('Unable to rename sub file due to: {error}'.format(error=error))
return
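An illustrative set of renames, assuming a single video Show.S01E01.mkv sits next to the sub files; the exact language tag depends on how babelfish stringifies the Language object:
# english.srt    -> Show.S01E01.en.srt   (assuming str(Language('eng')) == 'en')
# english.2.srt  -> Show.S01E01.en.1.srt (duplicate target names get a counter)
# graffiti.sub   -> Show.S01E01.sub      (no language word detected)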

72
core/processor/manual.py Normal file
View file

@ -0,0 +1,72 @@
import os
import core
from core import logger
from core.auto_process.common import ProcessResult
from core.processor import nzb
from core.utils import (
get_dirs,
get_download_info,
)
try:
text_type = unicode
except NameError:
text_type = str
def process():
# Perform Manual Post-Processing
logger.warning(
'Invalid number of arguments received from client. Switching to manual run mode ...')
# Post-Processing Result
result = ProcessResult(
message='',
status_code=0,
)
for section, subsections in core.SECTIONS.items():
for subsection in subsections:
if not core.CFG[section][subsection].isenabled():
continue
for dir_name in get_dirs(section, subsection, link='move'):
logger.info(
'Starting manual run for {0}:{1} - Folder: {2}'.format(
section, subsection, dir_name))
logger.info(
'Checking database for download info for {0} ...'.format(
os.path.basename(dir_name)))
core.DOWNLOAD_INFO = get_download_info(
os.path.basename(dir_name), 0)
if core.DOWNLOAD_INFO:
logger.info('Found download info for {0}, '
'setting variables now ...'.format
(os.path.basename(dir_name)))
client_agent = text_type(
core.DOWNLOAD_INFO[0]['client_agent']) or 'manual'
download_id = text_type(
core.DOWNLOAD_INFO[0]['input_id']) or ''
else:
logger.info('Unable to locate download info for {0}, '
'continuing to try and process this release ...'.format
(os.path.basename(dir_name)))
client_agent = 'manual'
download_id = ''
if client_agent and client_agent.lower() not in core.NZB_CLIENTS:
continue
input_name = os.path.basename(dir_name)
results = nzb.process(dir_name, input_name, 0,
client_agent=client_agent,
download_id=download_id or None,
input_category=subsection)
if results.status_code != 0:
logger.error(
'A problem was reported when trying to perform a manual run for {0}:{1}.'.format
(section, subsection))
result = results
return result

154
core/processor/nzb.py Normal file
View file

@ -0,0 +1,154 @@
import datetime
import core
from core import logger, main_db
from core.auto_process import comics, games, movies, music, tv, books
from core.auto_process.common import ProcessResult
from core.plugins.downloaders.nzb.utils import get_nzoid
from core.plugins.plex import plex_update
from core.user_scripts import external_script
from core.utils import (
char_replace,
clean_dir,
convert_to_ascii,
extract_files,
update_download_info_status,
)
try:
text_type = unicode
except NameError:
text_type = str
def process(input_directory, input_name=None, status=0, client_agent='manual', download_id=None, input_category=None, failure_link=None):
if core.SAFE_MODE and input_directory == core.NZB_DEFAULT_DIRECTORY:
logger.error(
'The input directory:[{0}] is the Default Download Directory. Please configure category directories to prevent processing of other media.'.format(
input_directory))
return ProcessResult(
message='',
status_code=-1,
)
if not download_id and client_agent == 'sabnzbd':
download_id = get_nzoid(input_name)
if client_agent != 'manual' and not core.DOWNLOAD_INFO:
logger.debug('Adding NZB download info for directory {0} to database'.format(input_directory))
my_db = main_db.DBConnection()
input_directory1 = input_directory
input_name1 = input_name
try:
encoded, input_directory1 = char_replace(input_directory)
encoded, input_name1 = char_replace(input_name)
except Exception:
pass
control_value_dict = {'input_directory': text_type(input_directory1)}
new_value_dict = {
'input_name': text_type(input_name1),
'input_hash': text_type(download_id),
'input_id': text_type(download_id),
'client_agent': text_type(client_agent),
'status': 0,
'last_update': datetime.date.today().toordinal(),
}
my_db.upsert('downloads', new_value_dict, control_value_dict)
# auto-detect section
if input_category is None:
input_category = 'UNCAT'
usercat = input_category
section = core.CFG.findsection(input_category).isenabled()
if section is None:
section = core.CFG.findsection('ALL').isenabled()
if section is None:
logger.error(
'Category:[{0}] is not defined or is not enabled. Please rename it or ensure it is enabled for the appropriate section in your autoProcessMedia.cfg and try again.'.format(
input_category))
return ProcessResult(
message='',
status_code=-1,
)
else:
usercat = 'ALL'
if len(section) > 1:
logger.error(
'Category:[{0}] is not unique, {1} are using it. Please rename it or disable all other sections using the same category name in your autoProcessMedia.cfg and try again.'.format(
input_category, section.keys()))
return ProcessResult(
message='',
status_code=-1,
)
if section:
section_name = section.keys()[0]
logger.info('Auto-detected SECTION:{0}'.format(section_name))
else:
logger.error('Unable to locate a section with subsection:{0} enabled in your autoProcessMedia.cfg, exiting!'.format(
input_category))
return ProcessResult(
status_code=-1,
message='',
)
cfg = dict(core.CFG[section_name][usercat])
extract = int(cfg.get('extract', 0))
try:
if int(cfg.get('remote_path')) and not core.REMOTE_PATHS:
logger.error('Remote Path is enabled for {0}:{1} but no Network mount points are defined. Please check your autoProcessMedia.cfg, exiting!'.format(
section_name, input_category))
return ProcessResult(
status_code=-1,
message='',
)
except Exception:
logger.error('Remote Path {0} is not valid for {1}:{2} Please set this to either 0 to disable or 1 to enable!'.format(
cfg.get('remote_path'), section_name, input_category))
input_name, input_directory = convert_to_ascii(input_name, input_directory)
if extract == 1 and not (status > 0 and core.NOEXTRACTFAILED):
logger.debug('Checking for archives to extract in directory: {0}'.format(input_directory))
extract_files(input_directory)
logger.info('Calling {0}:{1} to post-process:{2}'.format(section_name, input_category, input_name))
if section_name in ['CouchPotato', 'Radarr', 'Watcher3']:
result = movies.process(section_name, input_directory, input_name, status, client_agent, download_id, input_category, failure_link)
elif section_name in ['SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr']:
result = tv.process(section_name, input_directory, input_name, status, client_agent, download_id, input_category, failure_link)
elif section_name in ['HeadPhones', 'Lidarr']:
result = music.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'Mylar':
result = comics.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'Gamez':
result = games.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'LazyLibrarian':
result = books.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'UserScript':
result = external_script(input_directory, input_name, input_category, section[usercat])
else:
result = ProcessResult(
message='',
status_code=-1,
)
plex_update(input_category)
if result.status_code == 0:
if client_agent != 'manual':
# update download status in our DB
update_download_info_status(input_name, 1)
if section_name not in ['UserScript', 'NzbDrone', 'Sonarr', 'Radarr', 'Lidarr']:
# cleanup our processing folders of any misc unwanted files and empty directories
clean_dir(input_directory, section_name, input_category)
return result
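A manual invocation sketch, assuming core configuration has already been loaded; the directory, release name, and category are hypothetical:
result = process('/downloads/movies/Some.Release', input_name='Some.Release',
                 status=0, client_agent='sabnzbd', input_category='movies')
if result.status_code != 0:
    logger.error('Post-processing failed: {0}'.format(result.message))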

108
core/processor/nzbget.py Normal file
View file

@ -0,0 +1,108 @@
import os
import sys
import core
from core import logger
from core.processor import nzb
def parse_download_id():
"""Parse nzbget download_id from environment."""
download_id_keys = [
'NZBPR_COUCHPOTATO',
'NZBPR_DRONE',
'NZBPR_SONARR',
'NZBPR_RADARR',
'NZBPR_LIDARR',
]
for download_id_key in download_id_keys:
try:
return os.environ[download_id_key]
except KeyError:
pass
else:
return ''
def parse_failure_link():
"""Parse nzbget failure_link from environment."""
return os.environ.get('NZBPR__DNZB_FAILURE')
def _parse_total_status():
status_summary = os.environ['NZBPP_TOTALSTATUS']
if status_summary != 'SUCCESS':
status = os.environ['NZBPP_STATUS']
logger.info('Download failed with status {0}.'.format(status))
return 1
return 0
def _parse_par_status():
"""Parse nzbget par status from environment."""
par_status = os.environ['NZBPP_PARSTATUS']
if par_status == '1' or par_status == '4':
logger.warning('Par-repair failed, setting status \'failed\'')
return 1
return 0
def _parse_unpack_status():
if os.environ['NZBPP_UNPACKSTATUS'] == '1':
logger.warning('Unpack failed, setting status \'failed\'')
return 1
return 0
def _parse_health_status():
"""Parse nzbget download health from environment."""
status = 0
unpack_status_value = os.environ['NZBPP_UNPACKSTATUS']
par_status_value = os.environ['NZBPP_PARSTATUS']
if unpack_status_value == '0' and par_status_value == '0':
# Unpack was skipped due to nzb-file properties
# or due to errors during par-check
if int(os.environ['NZBPP_HEALTH']) < 1000:
logger.warning('Download health is compromised and Par-check/repair disabled or no .par2 files found. Setting status \'failed\'')
status = 1
else:
logger.info('Par-check/repair disabled or no .par2 files found, and Unpack not required. Health is ok so handle as though download successful')
logger.info('Please check your Par-check/repair settings for future downloads.')
return status
def parse_status():
if 'NZBPP_TOTALSTATUS' in os.environ: # Called from nzbget 13.0 or later
status = _parse_total_status()
else:
par_status = _parse_par_status()
unpack_status = _parse_unpack_status()
health_status = _parse_health_status()
status = par_status or unpack_status or health_status
return status
def check_version():
"""Check nzbget version and if version is unsupported, exit."""
version = os.environ['NZBOP_VERSION']
# Check if the script is called from nzbget 11.0 or later
# (compare numerically: a plain string comparison would mis-order e.g. '9.0' vs '11.0')
if [int(part) for part in version.split('.')[:2] if part.isdigit()] < [11, 0]:
logger.error('NZBGet Version {0} is not supported. Please update NZBGet.'.format(version))
sys.exit(core.NZBGET_POSTPROCESS_ERROR)
logger.info('Script triggered from NZBGet Version {0}.'.format(version))
def process():
check_version()
status = parse_status()
download_id = parse_download_id()
failure_link = parse_failure_link()
return nzb.process(
input_directory=os.environ['NZBPP_DIRECTORY'],
input_name=os.environ['NZBPP_NZBNAME'],
status=status,
client_agent='nzbget',
download_id=download_id,
input_category=os.environ['NZBPP_CATEGORY'],
failure_link=failure_link,
)
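A minimal sketch of the NZBGet 13+ path through parse_status; the environment value is set by NZBGet in practice and is faked here for illustration:
os.environ['NZBPP_TOTALSTATUS'] = 'SUCCESS'  # normally set by NZBGet
assert parse_status() == 0                   # anything other than SUCCESS yields 1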

50
core/processor/sab.py Normal file
View file

@ -0,0 +1,50 @@
import os
from core import logger
from core.processor import nzb
# Constants
MINIMUM_ARGUMENTS = 8
def process_script():
version = os.environ['SAB_VERSION']
logger.info('Script triggered from SABnzbd {0}.'.format(version))
return nzb.process(
input_directory=os.environ['SAB_COMPLETE_DIR'],
input_name=os.environ['SAB_FINAL_NAME'],
status=int(os.environ['SAB_PP_STATUS']),
client_agent='sabnzbd',
download_id=os.environ['SAB_NZO_ID'],
input_category=os.environ['SAB_CAT'],
failure_link=os.environ['SAB_FAILURE_URL'],
)
def process(args):
"""
SABnzbd arguments:
1. The final directory of the job (full path)
2. The original name of the NZB file
3. Clean version of the job name (no path info and '.nzb' removed)
4. Indexer's report number (if supported)
5. User-defined category
6. Group that the NZB was posted in e.g. alt.binaries.x
7. Status of post processing:
0 = OK
1 = failed verification
2 = failed unpack
3 = 1+2
8. Failure URL
"""
version = '0.7.17+' if len(args) > MINIMUM_ARGUMENTS else ''
logger.info('Script triggered from SABnzbd {}'.format(version))
return nzb.process(
input_directory=args[1],
input_name=args[2],
status=int(args[7]),
input_category=args[5],
client_agent='sabnzbd',
download_id='',
failure_link=''.join(args[8:]),
)
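For reference, the positional form that process() expects, matching the docstring above; all values are illustrative:
args = ['nzbToMedia.py', '/complete/Some.Release', 'Some.Release.nzb',
        'Some.Release', '', 'movies', 'alt.binaries.example', '0',
        'https://indexer.example/failure']
result = process(args)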

View file

@ -1 +0,0 @@
# coding=utf-8

195
core/scene_exceptions.py Normal file
View file

@ -0,0 +1,195 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import platform
import re
import shlex
import subprocess
import core
from core import logger
from core.utils import list_media_files
reverse_list = [r'\.\d{2}e\d{2}s\.', r'\.[pi]0801\.', r'\.p027\.', r'\.[pi]675\.', r'\.[pi]084\.', r'\.p063\.',
r'\b[45]62[xh]\.', r'\.yarulb\.', r'\.vtd[hp]\.',
r'\.ld[.-]?bew\.', r'\.pir.?(dov|dvd|bew|db|rb)\.', r'\brdvd\.', r'\.vts\.', r'\.reneercs\.',
r'\.dcv\.', r'\b(pir|mac)dh\b', r'\.reporp\.', r'\.kcaper\.',
r'\.lanretni\.', r'\b3ca\b', r'\.cstn\.']
reverse_pattern = re.compile('|'.join(reverse_list), flags=re.IGNORECASE)
season_pattern = re.compile(r'(.*\.\d{2}e\d{2}s\.)(.*)', flags=re.IGNORECASE)
word_pattern = re.compile(r'([^A-Z0-9]*[A-Z0-9]+)')
media_list = [r'\.s\d{2}e\d{2}\.', r'\.1080[pi]\.', r'\.720p\.', r'\.576[pi]', r'\.480[pi]\.', r'\.360p\.',
r'\.[xh]26[45]\b', r'\.bluray\.', r'\.[hp]dtv\.',
r'\.web[.-]?dl\.', r'\.(vod|dvd|web|bd|br).?rip\.', r'\.dvdr\b', r'\.stv\.', r'\.screener\.', r'\.vcd\.',
r'\bhd(cam|rip)\b', r'\.proper\.', r'\.repack\.',
r'\.internal\.', r'\bac3\b', r'\.ntsc\.', r'\.pal\.', r'\.secam\.', r'\bdivx\b', r'\bxvid\b']
media_pattern = re.compile('|'.join(media_list), flags=re.IGNORECASE)
garbage_name = re.compile(r'^[a-zA-Z0-9]*$')
char_replace = [[r'(\w)1\.(\w)', r'\1i\2'],
]
def process_all_exceptions(name, dirname):
par2(dirname)
rename_script(dirname)
for filename in list_media_files(dirname):
newfilename = None
parent_dir = os.path.dirname(filename)
head, file_extension = os.path.splitext(os.path.basename(filename))
if reverse_pattern.search(head) is not None:
exception = reverse_filename
elif garbage_name.search(head) is not None:
exception = replace_filename
else:
exception = None
newfilename = filename
if not newfilename:
newfilename = exception(filename, parent_dir, name)
if core.GROUPS:
newfilename = strip_groups(newfilename)
if newfilename != filename:
rename_file(filename, newfilename)
def strip_groups(filename):
if not core.GROUPS:
return filename
dirname, file = os.path.split(filename)
head, file_extension = os.path.splitext(file)
newname = head.replace(' ', '.')
for group in core.GROUPS:
newname = newname.replace(group, '')
newname = newname.replace('[]', '')
newfile = newname + file_extension
newfile_path = os.path.join(dirname, newfile)
return newfile_path
def rename_file(filename, newfile_path):
if os.path.isfile(newfile_path):
newfile_path = os.path.splitext(newfile_path)[0] + '.NTM' + os.path.splitext(newfile_path)[1]
logger.debug('Replacing file name {old} with download name {new}'.format
(old=filename, new=newfile_path), 'EXCEPTION')
try:
os.rename(filename, newfile_path)
except Exception as error:
logger.error('Unable to rename file due to: {error}'.format(error=error), 'EXCEPTION')
def replace_filename(filename, dirname, name):
head, file_extension = os.path.splitext(os.path.basename(filename))
if media_pattern.search(os.path.basename(dirname).replace(' ', '.')) is not None:
newname = os.path.basename(dirname).replace(' ', '.')
logger.debug('Replacing file name {old} with directory name {new}'.format(old=head, new=newname), 'EXCEPTION')
elif media_pattern.search(name.replace(' ', '.').lower()) is not None:
newname = name.replace(' ', '.')
logger.debug('Replacing file name {old} with download name {new}'.format
(old=head, new=newname), 'EXCEPTION')
else:
logger.warning('No name replacement determined for {name}'.format(name=head), 'EXCEPTION')
newname = name
newfile = newname + file_extension
newfile_path = os.path.join(dirname, newfile)
return newfile_path
def reverse_filename(filename, dirname, name):
head, file_extension = os.path.splitext(os.path.basename(filename))
na_parts = season_pattern.search(head)
if na_parts is not None:
word_p = word_pattern.findall(na_parts.group(2))
if word_p:
new_words = ''
for wp in word_p:
if wp[0] == '.':
new_words += '.'
new_words += re.sub(r'\W', '', wp)
else:
new_words = na_parts.group(2)
for cr in char_replace:
new_words = re.sub(cr[0], cr[1], new_words)
newname = new_words[::-1] + na_parts.group(1)[::-1]
else:
newname = head[::-1].title()
newname = newname.replace(' ', '.')
logger.debug('Reversing filename {old} to {new}'.format
(old=head, new=newname), 'EXCEPTION')
newfile = newname + file_extension
newfile_path = os.path.join(dirname, newfile)
return newfile_path
def rename_script(dirname):
rename_file = ''
for directory, _, files in os.walk(dirname):
for file in files:
if re.search(r'(rename\S*\.(sh|bat)$)', file, re.IGNORECASE):
rename_file = os.path.join(directory, file)
dirname = directory
break
if rename_file:
rename_lines = [line.strip() for line in open(rename_file)]
for line in rename_lines:
if re.search('^(mv|Move)', line, re.IGNORECASE):
cmd = shlex.split(line)[1:]
else:
continue
if len(cmd) == 2 and os.path.isfile(os.path.join(dirname, cmd[0])):
orig = os.path.join(dirname, cmd[0])
dest = os.path.join(dirname, cmd[1].split('\\')[-1].split('/')[-1])
if os.path.isfile(dest):
continue
logger.debug('Renaming file {source} to {destination}'.format
(source=orig, destination=dest), 'EXCEPTION')
try:
os.rename(orig, dest)
except Exception as error:
logger.error('Unable to rename file due to: {error}'.format(error=error), 'EXCEPTION')
def par2(dirname):
sofar = 0
parfile = ''
objects = []
if os.path.exists(dirname):
objects = os.listdir(dirname)
for item in objects:
if item.endswith('.par2'):
size = os.path.getsize(os.path.join(dirname, item))
if size > sofar:
sofar = size
parfile = item
if core.PAR2CMD and parfile:
pwd = os.getcwd() # Get our Present Working Directory
os.chdir(dirname) # set directory to run par on.
if platform.system() == 'Windows':
bitbucket = open('NUL')
else:
bitbucket = open('/dev/null')
logger.info('Running par2 on file {0}.'.format(parfile), 'PAR2')
command = [core.PAR2CMD, 'r', parfile, '*']
cmd = ''
for item in command:
cmd = '{cmd} {item}'.format(cmd=cmd, item=item)
logger.debug('calling command:{0}'.format(cmd), 'PAR2')
try:
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
proc.communicate()
result = proc.returncode
except Exception:
logger.error('par2 file processing for {0} has failed'.format(parfile), 'PAR2')
result = 1  # Popen failed, so flag the par2 run as unsuccessful before the check below
if result == 0:
logger.info('par2 file processing succeeded', 'PAR2')
os.chdir(pwd)
bitbucket.close()
# dict for custom groups
# we can add more to this list
# _customgroups = {'Q o Q': process_qoq, '-ECI': process_eci}
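To illustrate strip_groups above; the release group tag assigned to core.GROUPS here is hypothetical:
core.GROUPS = ['-RLSGRP']
print(strip_groups('/downloads/Show.S01E01.720p-RLSGRP.mkv'))
# -> /downloads/Show.S01E01.720p.mkv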

View file

@ -1,23 +0,0 @@
# coding=utf-8
"""A synchronous implementation of the Deluge RPC protocol
based on gevent-deluge by Christopher Rosell.
https://github.com/chrippa/gevent-deluge
Example usage:
from synchronousdeluge import DelugeClient
client = DelugeClient()
client.connect()
# Wait for value
download_location = client.core.get_config_value("download_location").get()
"""
from core.synchronousdeluge.exceptions import DelugeRPCError
__title__ = "synchronous-deluge"
__version__ = "0.1"
__author__ = "Christian Dale"

View file

@ -1,159 +0,0 @@
# coding=utf-8
import os
import platform
from collections import defaultdict
from itertools import imap
from .exceptions import DelugeRPCError
from .protocol import DelugeRPCRequest, DelugeRPCResponse
from .transfer import DelugeTransfer
__all__ = ["DelugeClient"]
RPC_RESPONSE = 1
RPC_ERROR = 2
RPC_EVENT = 3
class DelugeClient(object):
def __init__(self):
"""A deluge client session."""
self.transfer = DelugeTransfer()
self.modules = []
self._request_counter = 0
def _get_local_auth(self):
username = password = ""
if platform.system() in ('Windows', 'Microsoft'):
appDataPath = os.environ.get("APPDATA")
if not appDataPath:
import _winreg
hkey = _winreg.OpenKey(_winreg.HKEY_CURRENT_USER,
"Software\\Microsoft\\Windows\\CurrentVersion\\Explorer\\Shell Folders")
appDataReg = _winreg.QueryValueEx(hkey, "AppData")
appDataPath = appDataReg[0]
_winreg.CloseKey(hkey)
auth_file = os.path.join(appDataPath, "deluge", "auth")
else:
from xdg.BaseDirectory import save_config_path
try:
auth_file = os.path.join(save_config_path("deluge"), "auth")
except OSError:
return username, password
if os.path.exists(auth_file):
for line in open(auth_file):
if line.startswith("#"):
# This is a comment line
continue
line = line.strip()
try:
lsplit = line.split(":")
except Exception:
continue
if len(lsplit) == 2:
username, password = lsplit
elif len(lsplit) == 3:
username, password, level = lsplit
else:
continue
if username == "localclient":
return username, password
return "", ""
def _create_module_method(self, module, method):
fullname = "{0}.{1}".format(module, method)
def func(obj, *args, **kwargs):
return self.remote_call(fullname, *args, **kwargs)
func.__name__ = method
return func
def _introspect(self):
self.modules = []
methods = self.remote_call("daemon.get_method_list").get()
methodmap = defaultdict(dict)
splitter = lambda v: v.split(".")
for module, method in imap(splitter, methods):
methodmap[module][method] = self._create_module_method(module, method)
for module, methods in methodmap.items():
clsname = "DelugeModule{0}".format(module.capitalize())
cls = type(clsname, (), methods)
setattr(self, module, cls())
self.modules.append(module)
def remote_call(self, method, *args, **kwargs):
req = DelugeRPCRequest(self._request_counter, method, *args, **kwargs)
message = next(self.transfer.send_request(req))
response = DelugeRPCResponse()
if not isinstance(message, tuple):
return
if len(message) < 3:
return
message_type = message[0]
# if message_type == RPC_EVENT:
# event = message[1]
# values = message[2]
#
# if event in self._event_handlers:
# for handler in self._event_handlers[event]:
# gevent.spawn(handler, *values)
#
# elif message_type in (RPC_RESPONSE, RPC_ERROR):
if message_type in (RPC_RESPONSE, RPC_ERROR):
request_id = message[1]
value = message[2]
if request_id == self._request_counter:
if message_type == RPC_RESPONSE:
response.set(value)
elif message_type == RPC_ERROR:
err = DelugeRPCError(*value)
response.set_exception(err)
self._request_counter += 1
return response
def connect(self, host="127.0.0.1", port=58846, username="", password=""):
"""Connects to a daemon process.
:param host: str, the hostname of the daemon
:param port: int, the port of the daemon
:param username: str, the username to login with
:param password: str, the password to login with
"""
# Connect transport
self.transfer.connect((host, port))
# Attempt to fetch local auth info if needed
if not username and host in ("127.0.0.1", "localhost"):
username, password = self._get_local_auth()
# Authenticate
self.remote_call("daemon.login", username, password).get()
# Introspect available methods
self._introspect()
@property
def connected(self):
return self.transfer.connected
def disconnect(self):
"""Disconnects from the daemon."""
self.transfer.disconnect()

View file

@ -1,12 +0,0 @@
# coding=utf-8
__all__ = ["DelugeRPCError"]
class DelugeRPCError(Exception):
def __init__(self, name, msg, traceback):
self.name = name
self.msg = msg
self.traceback = traceback
def __str__(self):
return "{0}: {1}: {2}".format(self.__class__.__name__, self.name, self.msg)

View file

@ -1,40 +0,0 @@
# coding=utf-8
__all__ = ["DelugeRPCRequest", "DelugeRPCResponse"]
class DelugeRPCRequest(object):
def __init__(self, request_id, method, *args, **kwargs):
self.request_id = request_id
self.method = method
self.args = args
self.kwargs = kwargs
def format(self):
return self.request_id, self.method, self.args, self.kwargs
class DelugeRPCResponse(object):
def __init__(self):
self.value = None
self._exception = None
def successful(self):
return self._exception is None
@property
def exception(self):
if self._exception is not None:
return self._exception
def set(self, value=None):
self.value = value
self._exception = None
def set_exception(self, exception):
self._exception = exception
def get(self):
if self._exception is None:
return self.value
else:
raise self._exception

View file

@ -1,56 +0,0 @@
# coding=utf-8
import zlib
import struct
import socket
import ssl
from core.synchronousdeluge import rencode
__all__ = ["DelugeTransfer"]
class DelugeTransfer(object):
def __init__(self):
self.sock = None
self.conn = None
self.connected = False
def connect(self, hostport):
if self.connected:
self.disconnect()
self.sock = socket.create_connection(hostport)
self.conn = ssl.wrap_socket(self.sock, None, None, False, ssl.CERT_NONE, ssl.PROTOCOL_TLSv1)
self.connected = True
def disconnect(self):
if self.conn:
self.conn.close()
self.connected = False
def send_request(self, request):
data = (request.format(),)
payload = zlib.compress(rencode.dumps(data))
self.conn.sendall(payload)
buf = b""
while True:
data = self.conn.recv(1024)
if not data:
self.connected = False
break
buf += data
dobj = zlib.decompressobj()
try:
message = rencode.loads(dobj.decompress(buf))
except (ValueError, zlib.error, struct.error):
# Probably incomplete data, read more
continue
else:
buf = dobj.unused_data
yield message

999
core/transcoder.py Normal file
View file

@ -0,0 +1,999 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import errno
import json
import sys
import os
import time
import platform
import re
import shutil
import subprocess
from babelfish import Language
from six import iteritems, string_types, text_type
import core
from core import logger
from core.utils import make_dir
__author__ = 'Justin'
def is_video_good(videofile, status, require_lan=None):
file_name_ext = os.path.basename(videofile)
file_name, file_ext = os.path.splitext(file_name_ext)
disable = False
if file_ext not in core.MEDIA_CONTAINER or not core.FFPROBE or not core.CHECK_MEDIA or file_ext in ['.iso'] or (status > 0 and core.NOEXTRACTFAILED):
disable = True
else:
test_details, res = get_video_details(core.TEST_FILE)
if res != 0 or test_details.get('error'):
disable = True
logger.info('DISABLED: ffprobe failed to analyse test file. Stopping corruption check.', 'TRANSCODER')
if test_details.get('streams'):
vid_streams = [item for item in test_details['streams'] if 'codec_type' in item and item['codec_type'] == 'video']
aud_streams = [item for item in test_details['streams'] if 'codec_type' in item and item['codec_type'] == 'audio']
if not (len(vid_streams) > 0 and len(aud_streams) > 0):
disable = True
logger.info('DISABLED: ffprobe failed to analyse streams from test file. Stopping corruption check.',
'TRANSCODER')
if disable:
if status: # if the download was 'failed', assume bad. If it was successful, assume good.
return False
else:
return True
logger.info('Checking [{0}] for corruption, please stand by ...'.format(file_name_ext), 'TRANSCODER')
video_details, result = get_video_details(videofile)
if result != 0:
logger.error('FAILED: [{0}] is corrupted!'.format(file_name_ext), 'TRANSCODER')
return False
if video_details.get('error'):
logger.info('FAILED: [{0}] returned error [{1}].'.format(file_name_ext, video_details.get('error')), 'TRANSCODER')
return False
if video_details.get('streams'):
video_streams = [item for item in video_details['streams'] if item['codec_type'] == 'video']
audio_streams = [item for item in video_details['streams'] if item['codec_type'] == 'audio']
if require_lan:
valid_audio = [item for item in audio_streams if 'tags' in item and 'language' in item['tags'] and item['tags']['language'] in require_lan]
else:
valid_audio = audio_streams
if len(video_streams) > 0 and len(valid_audio) > 0:
logger.info('SUCCESS: [{0}] has no corruption.'.format(file_name_ext), 'TRANSCODER')
return True
else:
logger.info('FAILED: [{0}] has {1} video streams and {2} audio streams. '
'Assume corruption.'.format
(file_name_ext, len(video_streams), len(audio_streams)), 'TRANSCODER')
return False
def zip_out(file, img, bitbucket):
procin = None
if os.path.isfile(file):
cmd = ['cat', file]
else:
cmd = [core.SEVENZIP, '-so', 'e', img, file]
try:
procin = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
except Exception:
logger.error('Extracting [{0}] has failed'.format(file), 'TRANSCODER')
return procin
def get_video_details(videofile, img=None, bitbucket=None):
video_details = {}
result = 1
file = videofile
if not core.FFPROBE:
return video_details, result
print_format = '-of' if 'avprobe' in core.FFPROBE else '-print_format'
try:
if img:
videofile = '-'
command = [core.FFPROBE, '-v', 'quiet', print_format, 'json', '-show_format', '-show_streams', '-show_error',
videofile]
print_cmd(command)
if img:
procin = zip_out(file, img, bitbucket)
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=procin.stdout)
procin.stdout.close()
else:
proc = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = proc.communicate()
result = proc.returncode
video_details = json.loads(out.decode())
except Exception:
try:  # try again without -show_error, in case this ffprobe/avprobe build lacks it
command = [core.FFPROBE, '-v', 'quiet', print_format, 'json', '-show_format', '-show_streams', videofile]
print_cmd(command)
if img:
procin = zip_out(file, img, bitbucket)
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=procin.stdout)
procin.stdout.close()
else:
proc = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = proc.communicate()
result = proc.returncode
video_details = json.loads(out.decode())
except Exception:
logger.error('Checking [{0}] has failed'.format(file), 'TRANSCODER')
return video_details, result
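
get_video_details returns a (details, returncode) pair instead of raising, so callers gate on both values. A hedged usage sketch; 'movie.mkv' is a hypothetical path:

details, rc = get_video_details('movie.mkv')
if rc == 0 and details.get('streams'):
    for stream in details['streams']:
        # e.g. 0 video h264 / 1 audio aac / 2 subtitle subrip
        print(stream['index'], stream['codec_type'], stream.get('codec_name'))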
def check_vid_file(video_details, result):
if result != 0:
return False
if video_details.get('error'):
return False
if not video_details.get('streams'):
return False
video_streams = [item for item in video_details['streams'] if item['codec_type'] == 'video']
audio_streams = [item for item in video_details['streams'] if item['codec_type'] == 'audio']
if len(video_streams) > 0 and len(audio_streams) > 0:
return True
else:
return False
def build_commands(file, new_dir, movie_name, bitbucket):
if isinstance(file, string_types):
input_file = file
if 'concat:' in file:
file = file.split('|')[0].replace('concat:', '')
video_details, result = get_video_details(file)
directory, name = os.path.split(file)
name, ext = os.path.splitext(name)
check = re.match('VTS_([0-9][0-9])_[0-9]+', name)
if check and core.CONCAT:
name = movie_name
elif check:
name = ('{0}.cd{1}'.format(movie_name, check.groups()[0]))
elif core.CONCAT and re.match('(.+)[cC][dD][0-9]', name):
name = re.sub('([ ._=:-]+[cC][dD][0-9])', '', name)
if ext == core.VEXTENSION and new_dir == directory: # we need to change the name to prevent overwriting itself.
core.VEXTENSION = '-transcoded{ext}'.format(ext=core.VEXTENSION) # adds '-transcoded.ext'
new_file = file
else:
img, data = next(iteritems(file))
name = data['name']
new_file = []
rem_vid = []
for vid in data['files']:
video_details, result = get_video_details(vid, img, bitbucket)
if not check_vid_file(video_details, result):  # let's not transcode menus or other clips that lack audio or video.
rem_vid.append(vid)
data['files'] = [f for f in data['files'] if f not in rem_vid]
new_file = {img: {'name': data['name'], 'files': data['files']}}
video_details, result = get_video_details(data['files'][0], img, bitbucket)
input_file = '-'
file = '-'
newfile_path = os.path.normpath(os.path.join(new_dir, name) + core.VEXTENSION)
map_cmd = []
video_cmd = []
audio_cmd = []
audio_cmd2 = []
sub_cmd = []
meta_cmd = []
other_cmd = []
if not video_details or not video_details.get(
'streams'): # we couldn't read streams with ffprobe. Set defaults to try transcoding.
video_streams = []
audio_streams = []
sub_streams = []
map_cmd.extend(['-map', '0'])
if core.VCODEC:
video_cmd.extend(['-c:v', core.VCODEC])
if core.VCODEC == 'libx264' and core.VPRESET:
video_cmd.extend(['-pre', core.VPRESET])
else:
video_cmd.extend(['-c:v', 'copy'])
if core.VFRAMERATE:
video_cmd.extend(['-r', str(core.VFRAMERATE)])
if core.VBITRATE:
video_cmd.extend(['-b:v', str(core.VBITRATE)])
if core.VRESOLUTION:
video_cmd.extend(['-vf', 'scale={vres}'.format(vres=core.VRESOLUTION)])
if core.VPRESET:
video_cmd.extend(['-preset', core.VPRESET])
if core.VCRF:
video_cmd.extend(['-crf', str(core.VCRF)])
if core.VLEVEL:
video_cmd.extend(['-level', str(core.VLEVEL)])
if core.ACODEC:
audio_cmd.extend(['-c:a', core.ACODEC])
if core.ACODEC in ['aac',
'dts']: # Allow users to use the experimental AAC codec that's built into recent versions of ffmpeg
audio_cmd.extend(['-strict', '-2'])
else:
audio_cmd.extend(['-c:a', 'copy'])
if core.ACHANNELS:
audio_cmd.extend(['-ac', str(core.ACHANNELS)])
if core.ABITRATE:
audio_cmd.extend(['-b:a', str(core.ABITRATE)])
if core.OUTPUTQUALITYPERCENT:
audio_cmd.extend(['-q:a', str(core.OUTPUTQUALITYPERCENT)])
if core.SCODEC and core.ALLOWSUBS:
sub_cmd.extend(['-c:s', core.SCODEC])
elif core.ALLOWSUBS: # Not every subtitle codec can be used for every video container format!
sub_cmd.extend(['-c:s', 'copy'])
else: # http://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide/subtitle_options
sub_cmd.extend(['-sn']) # Don't copy the subtitles over
if core.OUTPUTFASTSTART:
other_cmd.extend(['-movflags', '+faststart'])
else:
video_streams = [item for item in video_details['streams'] if item['codec_type'] == 'video']
audio_streams = [item for item in video_details['streams'] if item['codec_type'] == 'audio']
sub_streams = [item for item in video_details['streams'] if item['codec_type'] == 'subtitle']
if core.VEXTENSION not in ['.mkv', '.mpegts']:
sub_streams = [item for item in video_details['streams'] if
item['codec_type'] == 'subtitle' and item['codec_name'] != 'hdmv_pgs_subtitle' and item[
'codec_name'] != 'pgssub']
for video in video_streams:
codec = video['codec_name']
fr = video.get('avg_frame_rate', 0)
width = video.get('width', 0)
height = video.get('height', 0)
scale = core.VRESOLUTION
if codec in core.VCODEC_ALLOW or not core.VCODEC:
video_cmd.extend(['-c:v', 'copy'])
else:
video_cmd.extend(['-c:v', core.VCODEC])
if core.VFRAMERATE and not (core.VFRAMERATE * 0.999 <= fr <= core.VFRAMERATE * 1.001):
video_cmd.extend(['-r', str(core.VFRAMERATE)])
if scale:
w_scale = width / float(scale.split(':')[0])
h_scale = height / float(scale.split(':')[1])
if w_scale > h_scale: # widescreen, Scale by width only.
scale = '{width}:{height}'.format(
width=scale.split(':')[0],
height=int((height / w_scale) / 2) * 2,
)
if w_scale > 1:
video_cmd.extend(['-vf', 'scale={width}'.format(width=scale)])
else: # lower or matching ratio, scale by height only.
scale = '{width}:{height}'.format(
width=int((width / h_scale) / 2) * 2,
height=scale.split(':')[1],
)
if h_scale > 1:
video_cmd.extend(['-vf', 'scale={height}'.format(height=scale)])
if core.VBITRATE:
video_cmd.extend(['-b:v', str(core.VBITRATE)])
if core.VPRESET:
video_cmd.extend(['-preset', core.VPRESET])
if core.VCRF:
video_cmd.extend(['-crf', str(core.VCRF)])
if core.VLEVEL:
video_cmd.extend(['-level', str(core.VLEVEL)])
no_copy = ['-vf', '-r', '-crf', '-level', '-preset', '-b:v']
if video_cmd[1] == 'copy' and any(i in video_cmd for i in no_copy):
video_cmd[1] = core.VCODEC
if core.VCODEC == 'copy': # force copy. therefore ignore all other video transcoding.
video_cmd = ['-c:v', 'copy']
map_cmd.extend(['-map', '0:{index}'.format(index=video['index'])])
break # Only one video needed
used_audio = 0
a_mapped = []
commentary = []
if audio_streams:
for i, val in reversed(list(enumerate(audio_streams))):
try:
if 'Commentary' in val.get('tags').get('title'):  # Split out commentary tracks.
commentary.append(val)
del audio_streams[i]
except Exception:
continue
try:
audio1 = [item for item in audio_streams if item['tags']['language'] == core.ALANGUAGE]
except Exception: # no language tags. Assume only 1 language.
audio1 = audio_streams
try:
audio2 = [item for item in audio1 if item['codec_name'] in core.ACODEC_ALLOW]
except Exception:
audio2 = []
try:
audio3 = [item for item in audio_streams if item['tags']['language'] != core.ALANGUAGE]
except Exception:
audio3 = []
try:
audio4 = [item for item in audio3 if item['codec_name'] in core.ACODEC_ALLOW]
except Exception:
audio4 = []
if audio2: # right (or only) language and codec...
map_cmd.extend(['-map', '0:{index}'.format(index=audio2[0]['index'])])
a_mapped.extend([audio2[0]['index']])
bitrate = int(float(audio2[0].get('bit_rate', 0))) / 1000
channels = int(float(audio2[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio1: # right (or only) language, wrong codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio1[0]['index'])])
a_mapped.extend([audio1[0]['index']])
bitrate = int(float(audio1[0].get('bit_rate', 0))) / 1000
channels = int(float(audio1[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), core.ACODEC if core.ACODEC else 'copy'])
elif audio4: # wrong language, right codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio4[0]['index'])])
a_mapped.extend([audio4[0]['index']])
bitrate = int(float(audio4[0].get('bit_rate', 0))) / 1000
channels = int(float(audio4[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio3: # wrong language, wrong codec. just pick the default audio track
map_cmd.extend(['-map', '0:{index}'.format(index=audio3[0]['index'])])
a_mapped.extend([audio3[0]['index']])
bitrate = int(float(audio3[0].get('bit_rate', 0))) / 1000
channels = int(float(audio3[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), core.ACODEC if core.ACODEC else 'copy'])
if core.ACHANNELS and channels and channels > core.ACHANNELS:
audio_cmd.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if core.ABITRATE and not (core.ABITRATE * 0.9 < bitrate < core.ABITRATE * 1.1):
audio_cmd.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if core.OUTPUTQUALITYPERCENT:
audio_cmd.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if audio_cmd[1] in ['aac', 'dts']:
audio_cmd[2:2] = ['-strict', '-2']
if core.ACODEC2_ALLOW:
used_audio += 1
try:
audio5 = [item for item in audio1 if item['codec_name'] in core.ACODEC2_ALLOW]
except Exception:
audio5 = []
try:
audio6 = [item for item in audio3 if item['codec_name'] in core.ACODEC2_ALLOW]
except Exception:
audio6 = []
if audio5: # right language and codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio5[0]['index'])])
a_mapped.extend([audio5[0]['index']])
bitrate = int(float(audio5[0].get('bit_rate', 0))) / 1000
channels = int(float(audio5[0].get('channels', 0)))
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio1: # right language wrong codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio1[0]['index'])])
a_mapped.extend([audio1[0]['index']])
bitrate = int(float(audio1[0].get('bit_rate', 0))) / 1000
channels = int(float(audio1[0].get('channels', 0)))
if core.ACODEC2:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), core.ACODEC2])
else:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio6: # wrong language, right codec
map_cmd.extend(['-map', '0:{index}'.format(index=audio6[0]['index'])])
a_mapped.extend([audio6[0]['index']])
bitrate = int(float(audio6[0].get('bit_rate', 0))) / 1000
channels = int(float(audio6[0].get('channels', 0)))
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio3: # wrong language, wrong codec just pick the default audio track
map_cmd.extend(['-map', '0:{index}'.format(index=audio3[0]['index'])])
a_mapped.extend([audio3[0]['index']])
bitrate = int(float(audio3[0].get('bit_rate', 0))) / 1000
channels = int(float(audio3[0].get('channels', 0)))
if core.ACODEC2:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), core.ACODEC2])
else:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
if core.ACHANNELS2 and channels and channels > core.ACHANNELS2:
audio_cmd2.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS2)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if core.ABITRATE2 and not (core.ABITRATE2 * 0.9 < bitrate < core.ABITRATE2 * 1.1):
audio_cmd2.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE2)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if core.OUTPUTQUALITYPERCENT:
audio_cmd2.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if audio_cmd2[1] in ['aac', 'dts']:
audio_cmd2[2:2] = ['-strict', '-2']
if a_mapped[1] == a_mapped[0] and audio_cmd2[1:] == audio_cmd[1:]: # check for duplicate output track.
del map_cmd[-2:]
else:
audio_cmd.extend(audio_cmd2)
if core.AINCLUDE and core.ACODEC3:
audio_streams.extend(commentary)  # add commentary tracks back here.
for audio in audio_streams:
if audio['index'] in a_mapped:
continue
used_audio += 1
map_cmd.extend(['-map', '0:{index}'.format(index=audio['index'])])
audio_cmd3 = []
bitrate = int(float(audio.get('bit_rate', 0))) / 1000
channels = int(float(audio.get('channels', 0)))
if audio['codec_name'] in core.ACODEC3_ALLOW:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), 'copy'])
else:
if core.ACODEC3:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), core.ACODEC3])
else:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), 'copy'])
if core.ACHANNELS3 and channels and channels > core.ACHANNELS3:
audio_cmd3.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS3)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if core.ABITRATE3 and not (core.ABITRATE3 * 0.9 < bitrate < core.ABITRATE3 * 1.1):
audio_cmd3.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE3)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if core.OUTPUTQUALITYPERCENT > 0:
audio_cmd3.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if audio_cmd3[1] in ['aac', 'dts']:
audio_cmd3[2:2] = ['-strict', '-2']
audio_cmd.extend(audio_cmd3)
s_mapped = []
burnt = 0
n = 0
for lan in core.SLANGUAGES:
try:
subs1 = [item for item in sub_streams if item['tags']['language'] == lan]
except Exception:
subs1 = []
if core.BURN and not subs1 and not burnt and os.path.isfile(file):
for subfile in get_subs(file):
if lan in os.path.split(subfile)[1]:
video_cmd.extend(['-vf', 'subtitles={subs}'.format(subs=subfile)])
burnt = 1
for sub in subs1:
if core.BURN and not burnt and os.path.isfile(input_file):
subloc = 0
for index in range(len(sub_streams)):
if sub_streams[index]['index'] == sub['index']:
subloc = index
break
video_cmd.extend(['-vf', 'subtitles={sub}:si={loc}'.format(sub=input_file, loc=subloc)])
burnt = 1
if not core.ALLOWSUBS:
break
if sub['codec_name'] in ['dvd_subtitle', 'dvb_subtitle', 'VobSub'] and core.SCODEC == 'mov_text': # We can't convert these.
continue
map_cmd.extend(['-map', '0:{index}'.format(index=sub['index'])])
s_mapped.extend([sub['index']])
if core.SINCLUDE:
for sub in sub_streams:
if not core.ALLOWSUBS:
break
if sub['index'] in s_mapped:
continue
if sub['codec_name'] in ['dvd_subtitle', 'dvb_subtitle', 'VobSub'] and core.SCODEC == 'mov_text': # We can't convert these.
continue
map_cmd.extend(['-map', '0:{index}'.format(index=sub['index'])])
s_mapped.extend([sub['index']])
if core.OUTPUTFASTSTART:
other_cmd.extend(['-movflags', '+faststart'])
if core.OTHEROPTS:
other_cmd.extend(core.OTHEROPTS)
command = [core.FFMPEG, '-loglevel', 'warning']
if core.HWACCEL:
command.extend(['-hwaccel', 'auto'])
if core.GENERALOPTS:
command.extend(core.GENERALOPTS)
command.extend(['-i', input_file])
if core.SEMBED and os.path.isfile(file):
for subfile in get_subs(file):
sub_details, result = get_video_details(subfile)
if not sub_details or not sub_details.get('streams'):
continue
if core.SCODEC == 'mov_text':
subcode = [stream['codec_name'] for stream in sub_details['streams']]
if set(subcode).intersection(['dvd_subtitle', 'dvb_subtitle', 'VobSub']): # We can't convert these.
continue
command.extend(['-i', subfile])
lan = os.path.splitext(os.path.splitext(subfile)[0])[1][1:].split('-')[0]
lan = text_type(lan)
metlan = None
try:
if len(lan) == 3:
metlan = Language(lan)
if len(lan) == 2:
metlan = Language.fromalpha2(lan)
except Exception:
pass
if metlan:
meta_cmd.extend(['-metadata:s:s:{x}'.format(x=len(s_mapped) + n),
'language={lang}'.format(lang=metlan.alpha3)])
n += 1
map_cmd.extend(['-map', '{x}:0'.format(x=n)])
if not core.ALLOWSUBS or (not s_mapped and not n):
sub_cmd.extend(['-sn'])
else:
if core.SCODEC:
sub_cmd.extend(['-c:s', core.SCODEC])
else:
sub_cmd.extend(['-c:s', 'copy'])
command.extend(map_cmd)
command.extend(video_cmd)
command.extend(audio_cmd)
command.extend(sub_cmd)
command.extend(meta_cmd)
command.extend(other_cmd)
command.append(newfile_path)
if platform.system() != 'Windows':
command = core.NICENESS + command
return command, new_file
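
build_commands returns the complete ffmpeg argv (with newfile_path always as the last element) together with the possibly rewritten input description. Illustrative only: for a plain single-file input the argv has roughly this shape; the actual flags depend entirely on the core.* settings, and 'movie.avi', the codecs, and the output path shown here are assumptions:

example_argv = [
    'ffmpeg', '-loglevel', 'warning',
    '-i', 'movie.avi',
    '-map', '0:0', '-map', '0:1',  # one video stream plus one audio stream
    '-c:v', 'libx264',             # or 'copy' when the codec is already allowed
    '-c:a:0', 'copy',
    '-sn',                         # or '-c:s', 'mov_text' when subs are kept
    '/output/movie.mp4',           # newfile_path is always appended last
]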
def get_subs(file):
filepaths = []
sub_ext = ['.srt', '.sub', '.idx']
name = os.path.splitext(os.path.split(file)[1])[0]
path = os.path.split(file)[0]
for directory, _, filenames in os.walk(path):
for filename in filenames:
filepaths.extend([os.path.join(directory, filename)])
subfiles = [item for item in filepaths if os.path.splitext(item)[1] in sub_ext and name in item]
return subfiles
def extract_subs(file, newfile_path, bitbucket):
video_details, result = get_video_details(file)
if not video_details:
return
if core.SUBSDIR:
subdir = core.SUBSDIR
else:
subdir = os.path.split(newfile_path)[0]
name = os.path.splitext(os.path.split(newfile_path)[1])[0]
try:
sub_streams = [item for item in video_details['streams'] if
item['codec_type'] == 'subtitle' and item['tags']['language'] in core.SLANGUAGES and item[
'codec_name'] != 'hdmv_pgs_subtitle' and item['codec_name'] != 'pgssub']
except Exception:
sub_streams = [item for item in video_details['streams'] if
item['codec_type'] == 'subtitle' and item['codec_name'] != 'hdmv_pgs_subtitle' and item[
'codec_name'] != 'pgssub']
num = len(sub_streams)
for n in range(num):
sub = sub_streams[n]
idx = sub['index']
lan = sub.get('tags', {}).get('language', 'unk')
if num == 1:
output_file = os.path.join(subdir, '{0}.srt'.format(name))
if os.path.isfile(output_file):
output_file = os.path.join(subdir, '{0}.{1}.srt'.format(name, n))
else:
output_file = os.path.join(subdir, '{0}.{1}.srt'.format(name, lan))
if os.path.isfile(output_file):
output_file = os.path.join(subdir, '{0}.{1}.{2}.srt'.format(name, lan, n))
command = [core.FFMPEG, '-loglevel', 'warning', '-i', file, '-vn', '-an',
'-codec:{index}'.format(index=idx), 'srt', output_file]
if platform.system() != 'Windows':
command = core.NICENESS + command
logger.info('Extracting {0} subtitle from: {1}'.format(lan, file))
print_cmd(command)
result = 1 # set result to failed in case call fails.
try:
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
out, err = proc.communicate()
result = proc.returncode
except Exception:
logger.error('Extracting subtitle has failed')
if result == 0:
try:
shutil.copymode(file, output_file)
except Exception:
pass
logger.info('Extracting {0} subtitle from {1} has succeeded'.format(lan, file))
else:
logger.error('Extracting subtitles has failed')
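
For each selected subtitle stream, extract_subs assembles a plain ffmpeg call that drops video and audio and converts the one stream to SubRip. A hedged sketch of the command built for stream index 2 tagged 'eng', mirroring the '-codec:{index}' form used above (all paths hypothetical):

cmd = ['ffmpeg', '-loglevel', 'warning', '-i', 'movie.mkv',
       '-vn', '-an',        # no video, no audio in the output
       '-codec:2', 'srt',   # stream index 2, converted to SubRip
       'subs/movie.eng.srt']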
def process_list(it, new_dir, bitbucket):
rem_list = []
new_list = []
combine = []
vts_path = None
mts_path = None
success = True
for item in it:
ext = os.path.splitext(item)[1].lower()
if ext in ['.iso', '.bin', '.img'] and ext not in core.IGNOREEXTENSIONS:
logger.debug('Attempting to rip disk image: {0}'.format(item), 'TRANSCODER')
new_list.extend(rip_iso(item, new_dir, bitbucket))
rem_list.append(item)
elif re.match('.+VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]', item) and '.vob' not in core.IGNOREEXTENSIONS:
logger.debug('Found VIDEO_TS image file: {0}'.format(item), 'TRANSCODER')
if not vts_path:
try:
vts_path = re.match('(.+VIDEO_TS)', item).groups()[0]
except Exception:
vts_path = os.path.split(item)[0]
rem_list.append(item)
elif re.match('.+BDMV[/\\]SOURCE[/\\][0-9]+[0-9].[Mm][Tt][Ss]', item) and '.mts' not in core.IGNOREEXTENSIONS:
logger.debug('Found MTS image file: {0}'.format(item), 'TRANSCODER')
if not mts_path:
try:
mts_path = re.match('(.+BDMV[/\\]SOURCE)', item).groups()[0]
except Exception:
mts_path = os.path.split(item)[0]
rem_list.append(item)
elif re.match('.+VIDEO_TS.', item) or re.match('.+VTS_[0-9][0-9]_[0-9].', item):
rem_list.append(item)
elif core.CONCAT and re.match('.+[cC][dD][0-9].', item):
rem_list.append(item)
combine.append(item)
else:
continue
if vts_path:
new_list.extend(combine_vts(vts_path))
if mts_path:
new_list.extend(combine_mts(mts_path))
if combine:
new_list.extend(combine_cd(combine))
for file in new_list:
if isinstance(file, string_types) and 'concat:' not in file and not os.path.isfile(file):
success = False
break
if success and new_list:
it.extend(new_list)
for item in rem_list:
it.remove(item)
logger.debug('Successfully extracted .vob file {0} from disk image'.format(new_list[0]), 'TRANSCODER')
elif new_list and not success:
new_list = []
rem_list = []
logger.error('Failed extracting .vob files from disk image. Stopping transcoding.', 'TRANSCODER')
return it, rem_list, new_list, success
def mount_iso(item, new_dir, bitbucket):  # Currently only supports Linux mount, when permissions allow.
if platform.system() == 'Windows':
logger.error('No mounting options available under Windows for image file {0}'.format(item), 'TRANSCODER')
return []
mount_point = os.path.join(os.path.dirname(os.path.abspath(item)),'temp')
make_dir(mount_point)
cmd = ['mount', '-o', 'loop', item, mount_point]
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
core.MOUNTED = mount_point # Allows us to verify this has been done and then cleanup.
for root, dirs, files in os.walk(mount_point):
for file in files:
full_path = os.path.join(root, file)
if re.match('.+VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]', full_path) and '.vob' not in core.IGNOREEXTENSIONS:
logger.debug('Found VIDEO_TS image file: {0}'.format(full_path), 'TRANSCODER')
try:
vts_path = re.match('(.+VIDEO_TS)', full_path).groups()[0]
except Exception:
vts_path = os.path.split(full_path)[0]
return combine_vts(vts_path)
elif re.match('.+BDMV[/\\]STREAM[/\\][0-9]+[0-9].[Mm]', full_path) and '.mts' not in core.IGNOREEXTENSIONS:
logger.debug('Found MTS image file: {0}'.format(full_path), 'TRANSCODER')
try:
mts_path = re.match('(.+BDMV[/\\]STREAM)', full_path).groups()[0]
except Exception:
mts_path = os.path.split(full_path)[0]
return combine_mts(mts_path)
logger.error('No VIDEO_TS or BDMV/STREAM folder found in image file {0}'.format(mount_point), 'TRANSCODER')
return ['failure'] # If we got here, nothing matched our criteria
def rip_iso(item, new_dir, bitbucket):
new_files = []
failure_dir = 'failure'
# Mount the ISO in the OS and call combine_vts.
if not core.SEVENZIP:
logger.debug('No 7zip installed. Attempting to mount image file {0}'.format(item), 'TRANSCODER')
try:
new_files = mount_iso(item, new_dir, bitbucket) # Currently only works for Linux.
except Exception:
logger.error('Failed to mount and extract from image file {0}'.format(item), 'TRANSCODER')
new_files = [failure_dir]
return new_files
cmd = [core.SEVENZIP, 'l', item]
try:
logger.debug('Attempting to extract .vob or .mts from image file {0}'.format(item), 'TRANSCODER')
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
file_match_gen = (
re.match(r'.+(VIDEO_TS[/\\]VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb])', line)
for line in out.decode().splitlines()
)
file_list = [
file_match.groups()[0]
for file_match in file_match_gen
if file_match
]
combined = []
if file_list: # handle DVD
for n in range(99):
concat = []
m = 1
while True:
vts_name = 'VIDEO_TS{0}VTS_{1:02d}_{2:d}.VOB'.format(os.sep, n + 1, m)
if vts_name in file_list:
concat.append(vts_name)
m += 1
else:
break
if not concat:
break
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=os.path.splitext(os.path.split(item)[1])[0], x=n + 1
)
new_files.append({item: {'name': name, 'files': concat}})
else:  # check Blu-ray for BDMV/STREAM/XXXX.MTS
mts_list_gen = (
re.match(r'.+(BDMV[/\\]STREAM[/\\][0-9]+[0-9].[Mm]).', line)
for line in out.decode().splitlines()
)
mts_list = [
file_match.groups()[0]
for file_match in mts_list_gen
if file_match
]
if sys.version_info[0] == 2: # Python2 sorting
mts_list.sort(key=lambda f: int(filter(str.isdigit, f))) # Sort all .mts files in numerical order
else: # Python3 sorting
mts_list.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
n = 0
for mts_name in mts_list:
concat = []
n += 1
concat.append(mts_name)
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=os.path.splitext(os.path.split(item)[1])[0], x=n
)
new_files.append({item: {'name': name, 'files': concat}})
if core.CONCAT and combined:
name = os.path.splitext(os.path.split(item)[1])[0]
new_files.append({item: {'name': name, 'files': combined}})
if not new_files:
logger.error('No VIDEO_TS or BDMV/STREAM folder found in image file. Attempting to mount and scan {0}'.format(item), 'TRANSCODER')
new_files = mount_iso(item, new_dir, bitbucket)
except Exception:
logger.error('Failed to extract from image file {0}'.format(item), 'TRANSCODER')
new_files = [failure_dir]
return new_files
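
When 7zip is available, rip_iso never mounts anything: it parses the archive listing, groups the VTS_xx_y.VOB names by titleset (or the BDMV .mts files by number), and returns dicts that zip_out later streams straight into ffmpeg's stdin. An illustrative return value for a hypothetical two-titleset DVD image with core.CONCAT disabled (in-archive paths use os.sep, shown here for Linux):

[
    {'movie.iso': {'name': 'movie.cd1',
                   'files': ['VIDEO_TS/VTS_01_1.VOB', 'VIDEO_TS/VTS_01_2.VOB']}},
    {'movie.iso': {'name': 'movie.cd2',
                   'files': ['VIDEO_TS/VTS_02_1.VOB']}},
]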
def combine_vts(vts_path):
new_files = []
combined = []
name = re.match(r'(.+)[/\\]VIDEO_TS', vts_path).groups()[0]
if os.path.basename(name) == 'temp':
name = os.path.basename(os.path.dirname(name))
else:
name = os.path.basename(name)
for n in range(99):
concat = []
m = 1
while True:
vts_name = 'VTS_{0:02d}_{1:d}.VOB'.format(n + 1, m)
if os.path.isfile(os.path.join(vts_path, vts_name)):
concat.append(os.path.join(vts_path, vts_name))
m += 1
else:
break
if not concat:
break
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=name, x=n + 1
)
new_files.append({vts_path: {'name': name, 'files': concat}})
if core.CONCAT:
new_files.append({vts_path: {'name': name, 'files': combined}})
return new_files
def combine_mts(mts_path):
new_files = []
combined = []
name = re.match(r'(.+)[/\\]BDMV[/\\]STREAM', mts_path).groups()[0]
if os.path.basename(name) == 'temp':
name = os.path.basename(os.path.dirname(name))
else:
name = os.path.basename(name)
n = 0
mts_list = [f for f in os.listdir(mts_path) if os.path.isfile(os.path.join(mts_path, f))]
if sys.version_info[0] == 2: # Python2 sorting
mts_list.sort(key=lambda f: int(filter(str.isdigit, f)))
else: # Python3 sorting
mts_list.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
for mts_name in mts_list:  # mts_list was sorted numerically above ([1 - 998].mts in order)
concat = []
concat.append(os.path.join(mts_path, mts_name))
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=name, x=n + 1
)
new_files.append({mts_path: {'name': name, 'files': concat}})
n += 1
if core.CONCAT:
new_files.append({mts_path: {'name': name, 'files': combined}})
return new_files
def combine_cd(combine):
new_files = []
for item in {re.match('(.+)[cC][dD][0-9].', item).groups()[0] for item in combine}:
concat = ''
for n in range(99):
files = [file for file in combine if
n + 1 == int(re.match('.+[cC][dD]([0-9]+).', file).groups()[0]) and item in file]
if files:
concat += '{file}|'.format(file=files[0])
else:
break
if concat:
new_files.append('concat:{0}'.format(concat[:-1]))
return new_files
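
combine_cd joins cdN parts through ffmpeg's concat protocol, which takes a single pipe-separated pseudo-path as the input URL. An illustrative result for hypothetical split files movie.cd1.avi and movie.cd2.avi:

['concat:/downloads/movie.cd1.avi|/downloads/movie.cd2.avi']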
def print_cmd(command):
cmd = ''
for item in command:
cmd = '{cmd} {item}'.format(cmd=cmd, item=item)
logger.debug('calling command:{0}'.format(cmd))
def transcode_directory(dir_name):
if not core.FFMPEG:
return 1, dir_name
logger.info('Checking for files to be transcoded')
final_result = 0 # initialize as successful
if core.OUTPUTVIDEOPATH:
new_dir = core.OUTPUTVIDEOPATH
make_dir(new_dir)
name = os.path.splitext(os.path.split(dir_name)[1])[0]
new_dir = os.path.join(new_dir, name)
make_dir(new_dir)
else:
new_dir = dir_name
if platform.system() == 'Windows':
bitbucket = open('NUL')
else:
bitbucket = open('/dev/null')
movie_name = os.path.splitext(os.path.split(dir_name)[1])[0]
file_list = core.list_media_files(dir_name, media=True, audio=False, meta=False, archives=False)
file_list, rem_list, new_list, success = process_list(file_list, new_dir, bitbucket)
if not success:
bitbucket.close()
return 1, dir_name
for file in file_list:
if isinstance(file, string_types) and os.path.splitext(file)[1] in core.IGNOREEXTENSIONS:
continue
command, file = build_commands(file, new_dir, movie_name, bitbucket)
newfile_path = command[-1]
# transcoding files may remove the original file, so make sure to extract subtitles first
if core.SEXTRACT and isinstance(file, string_types):
extract_subs(file, newfile_path, bitbucket)
try: # Try to remove the file that we're transcoding to just in case. (ffmpeg will return an error if it already exists for some reason)
os.remove(newfile_path)
except OSError as e:
if e.errno != errno.ENOENT: # Ignore the error if it's just telling us that the file doesn't exist
logger.debug('Error when removing transcoding target: {0}'.format(e))
except Exception as e:
logger.debug('Error when removing transcoding target: {0}'.format(e))
logger.info('Transcoding video: {0}'.format(newfile_path))
print_cmd(command)
result = 1 # set result to failed in case call fails.
try:
if isinstance(file, string_types):
proc = subprocess.Popen(command, stdout=bitbucket, stderr=subprocess.PIPE)
else:
img, data = next(iteritems(file))
proc = subprocess.Popen(command, stdout=bitbucket, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
for vob in data['files']:
procin = zip_out(vob, img, bitbucket)
if procin:
logger.debug('Feeding in file: {0} to Transcoder'.format(vob))
shutil.copyfileobj(procin.stdout, proc.stdin)
procin.stdout.close()
out, err = proc.communicate()
if err:
logger.error('Transcoder returned an error: {0}'.format(err))
result = proc.returncode
except Exception:
logger.error('Transcoding of video {0} has failed'.format(newfile_path))
if core.SUBSDIR and result == 0 and isinstance(file, string_types):
for sub in get_subs(file):
name = os.path.splitext(os.path.split(file)[1])[0]
subname = os.path.split(sub)[1]
newname = os.path.splitext(os.path.split(newfile_path)[1])[0]
newpath = os.path.join(core.SUBSDIR, subname.replace(name, newname))
if not os.path.isfile(newpath):
os.rename(sub, newpath)
if result == 0:
try:
shutil.copymode(file, newfile_path)
except Exception:
pass
logger.info('Transcoding of video to {0} succeeded'.format(newfile_path))
if os.path.isfile(newfile_path) and (file in new_list or not core.DUPLICATE):
try:
os.unlink(file)
except Exception:
pass
else:
logger.error('Transcoding of video to {0} failed with result {1}'.format(newfile_path, result))
# this will be 0 (successful) if all are successful, else a positive integer for failure.
final_result = final_result + result
if core.MOUNTED: # In case we mounted an .iso file, unmount here.
time.sleep(5) # play it safe and avoid failing to unmount.
cmd = ['umount', '-l', core.MOUNTED]
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
time.sleep(5)
os.rmdir(core.MOUNTED)
core.MOUNTED = None
if final_result == 0 and not core.DUPLICATE:
for file in rem_list:
try:
os.unlink(file)
except Exception:
pass
if not os.listdir(text_type(new_dir)): # this is an empty directory and we didn't transcode into it.
os.rmdir(new_dir)
new_dir = dir_name
if not core.PROCESSOUTPUT and core.DUPLICATE: # We postprocess the original files to CP/SB
new_dir = dir_name
bitbucket.close()
return final_result, new_dir
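
A hedged end-to-end sketch of driving this module; it assumes core.initialize() has already loaded the autoProcessMedia.cfg settings (core.FFMPEG, core.OUTPUTVIDEOPATH, the codec options) that transcode_directory reads, and the download path is hypothetical:

import core
from core import transcoder

core.initialize()  # assumption: populates core.FFMPEG and the transcoder settings
status, output_dir = transcoder.transcode_directory('/downloads/Movie.Title.2024')
if status == 0:
    print('Transcoding succeeded; output in', output_dir)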

core/transcoder/__init__.py

@@ -1,2 +0,0 @@
# coding=utf-8
__author__ = 'Justin'

core/transcoder/transcoder.py

@@ -1,824 +0,0 @@
# coding=utf-8
from six import iteritems
import errno
import os
import platform
import subprocess
import core
import json
import shutil
import re
from core import logger
from core.nzbToMediaUtil import makeDir
from babelfish import Language
def isVideoGood(videofile, status):
fileNameExt = os.path.basename(videofile)
fileName, fileExt = os.path.splitext(fileNameExt)
disable = False
if fileExt not in core.MEDIACONTAINER or not core.FFPROBE or not core.CHECK_MEDIA or fileExt in ['.iso']:
disable = True
else:
test_details, res = getVideoDetails(core.TEST_FILE)
if res != 0 or test_details.get("error"):
disable = True
logger.info("DISABLED: ffprobe failed to analyse test file. Stopping corruption check.", 'TRANSCODER')
if test_details.get("streams"):
vidStreams = [item for item in test_details["streams"] if "codec_type" in item and item["codec_type"] == "video"]
audStreams = [item for item in test_details["streams"] if "codec_type" in item and item["codec_type"] == "audio"]
if not (len(vidStreams) > 0 and len(audStreams) > 0):
disable = True
logger.info("DISABLED: ffprobe failed to analyse streams from test file. Stopping corruption check.",
'TRANSCODER')
if disable:
if status: # if the download was "failed", assume bad. If it was successful, assume good.
return False
else:
return True
logger.info('Checking [{0}] for corruption, please stand by ...'.format(fileNameExt), 'TRANSCODER')
video_details, result = getVideoDetails(videofile)
if result != 0:
logger.error("FAILED: [{0}] is corrupted!".format(fileNameExt), 'TRANSCODER')
return False
if video_details.get("error"):
logger.info("FAILED: [{0}] returned error [{1}].".format(fileNameExt, video_details.get("error")), 'TRANSCODER')
return False
if video_details.get("streams"):
videoStreams = [item for item in video_details["streams"] if item["codec_type"] == "video"]
audioStreams = [item for item in video_details["streams"] if item["codec_type"] == "audio"]
if len(videoStreams) > 0 and len(audioStreams) > 0:
logger.info("SUCCESS: [{0}] has no corruption.".format(fileNameExt), 'TRANSCODER')
return True
else:
logger.info("FAILED: [{0}] has {1} video streams and {2} audio streams. "
"Assume corruption.".format
(fileNameExt, len(videoStreams), len(audioStreams)), 'TRANSCODER')
return False
def zip_out(file, img, bitbucket):
procin = None
cmd = [core.SEVENZIP, '-so', 'e', img, file]
try:
procin = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
except:
logger.error("Extracting [{0}] has failed".format(file), 'TRANSCODER')
return procin
def getVideoDetails(videofile, img=None, bitbucket=None):
video_details = {}
result = 1
file = videofile
if not core.FFPROBE:
return video_details, result
if 'avprobe' in core.FFPROBE:
print_format = '-of'
else:
print_format = '-print_format'
try:
if img:
videofile = '-'
command = [core.FFPROBE, '-v', 'quiet', print_format, 'json', '-show_format', '-show_streams', '-show_error',
videofile]
print_cmd(command)
if img:
procin = zip_out(file, img, bitbucket)
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=procin.stdout)
procin.stdout.close()
else:
proc = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = proc.communicate()
result = proc.returncode
video_details = json.loads(out)
except:
pass
if not video_details:
try:
command = [core.FFPROBE, '-v', 'quiet', print_format, 'json', '-show_format', '-show_streams', videofile]
if img:
procin = zip_out(file, img)
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=procin.stdout)
procin.stdout.close()
else:
proc = subprocess.Popen(command, stdout=subprocess.PIPE)
out, err = proc.communicate()
result = proc.returncode
video_details = json.loads(out)
except:
logger.error("Checking [{0}] has failed".format(file), 'TRANSCODER')
return video_details, result
def buildCommands(file, newDir, movieName, bitbucket):
if isinstance(file, basestring):
inputFile = file
if 'concat:' in file:
file = file.split('|')[0].replace('concat:', '')
video_details, result = getVideoDetails(file)
dir, name = os.path.split(file)
name, ext = os.path.splitext(name)
check = re.match("VTS_([0-9][0-9])_[0-9]+", name)
if check and core.CONCAT:
name = movieName
elif check:
name = ('{0}.cd{1}'.format(movieName, check.groups()[0]))
elif core.CONCAT and re.match("(.+)[cC][dD][0-9]", name):
name = re.sub("([\ \.\-\_\=\:]+[cC][dD][0-9])", "", name)
if ext == core.VEXTENSION and newDir == dir: # we need to change the name to prevent overwriting itself.
core.VEXTENSION = '-transcoded{ext}'.format(ext=core.VEXTENSION) # adds '-transcoded.ext'
else:
img, data = iteritems(file).next()
name = data['name']
video_details, result = getVideoDetails(data['files'][0], img, bitbucket)
inputFile = '-'
file = '-'
newfilePath = os.path.normpath(os.path.join(newDir, name) + core.VEXTENSION)
map_cmd = []
video_cmd = []
audio_cmd = []
audio_cmd2 = []
sub_cmd = []
meta_cmd = []
other_cmd = []
if not video_details or not video_details.get(
"streams"): # we couldn't read streams with ffprobe. Set defaults to try transcoding.
videoStreams = []
audioStreams = []
subStreams = []
map_cmd.extend(['-map', '0'])
if core.VCODEC:
video_cmd.extend(['-c:v', core.VCODEC])
if core.VCODEC == 'libx264' and core.VPRESET:
video_cmd.extend(['-pre', core.VPRESET])
else:
video_cmd.extend(['-c:v', 'copy'])
if core.VFRAMERATE:
video_cmd.extend(['-r', str(core.VFRAMERATE)])
if core.VBITRATE:
video_cmd.extend(['-b:v', str(core.VBITRATE)])
if core.VRESOLUTION:
video_cmd.extend(['-vf', 'scale={vres}'.format(vres=core.VRESOLUTION)])
if core.VPRESET:
video_cmd.extend(['-preset', core.VPRESET])
if core.VCRF:
video_cmd.extend(['-crf', str(core.VCRF)])
if core.VLEVEL:
video_cmd.extend(['-level', str(core.VLEVEL)])
if core.ACODEC:
audio_cmd.extend(['-c:a', core.ACODEC])
if core.ACODEC in ['aac',
'dts']: # Allow users to use the experimental AAC codec that's built into recent versions of ffmpeg
audio_cmd.extend(['-strict', '-2'])
else:
audio_cmd.extend(['-c:a', 'copy'])
if core.ACHANNELS:
audio_cmd.extend(['-ac', str(core.ACHANNELS)])
if core.ABITRATE:
audio_cmd.extend(['-b:a', str(core.ABITRATE)])
if core.OUTPUTQUALITYPERCENT:
audio_cmd.extend(['-q:a', str(core.OUTPUTQUALITYPERCENT)])
if core.SCODEC and core.ALLOWSUBS:
sub_cmd.extend(['-c:s', core.SCODEC])
elif core.ALLOWSUBS: # Not every subtitle codec can be used for every video container format!
sub_cmd.extend(['-c:s', 'copy'])
else: # http://en.wikibooks.org/wiki/FFMPEG_An_Intermediate_Guide/subtitle_options
sub_cmd.extend(['-sn']) # Don't copy the subtitles over
if core.OUTPUTFASTSTART:
other_cmd.extend(['-movflags', '+faststart'])
else:
videoStreams = [item for item in video_details["streams"] if item["codec_type"] == "video"]
audioStreams = [item for item in video_details["streams"] if item["codec_type"] == "audio"]
subStreams = [item for item in video_details["streams"] if item["codec_type"] == "subtitle"]
if core.VEXTENSION not in ['.mkv', '.mpegts']:
subStreams = [item for item in video_details["streams"] if
item["codec_type"] == "subtitle" and item["codec_name"] != "hdmv_pgs_subtitle" and item[
"codec_name"] != "pgssub"]
for video in videoStreams:
codec = video["codec_name"]
fr = video.get("avg_frame_rate", 0)
width = video.get("width", 0)
height = video.get("height", 0)
scale = core.VRESOLUTION
if codec in core.VCODEC_ALLOW or not core.VCODEC:
video_cmd.extend(['-c:v', 'copy'])
else:
video_cmd.extend(['-c:v', core.VCODEC])
if core.VFRAMERATE and not (core.VFRAMERATE * 0.999 <= fr <= core.VFRAMERATE * 1.001):
video_cmd.extend(['-r', str(core.VFRAMERATE)])
if scale:
w_scale = width / float(scale.split(':')[0])
h_scale = height / float(scale.split(':')[1])
if w_scale > h_scale: # widescreen, Scale by width only.
scale = "{width}:{height}".format(
width=scale.split(':')[0],
height=int((height / w_scale) / 2) * 2,
)
if w_scale > 1:
video_cmd.extend(['-vf', 'scale={width}'.format(width=scale)])
else: # lower or matching ratio, scale by height only.
scale = "{width}:{height}".format(
width=int((width / h_scale) / 2) * 2,
height=scale.split(':')[1],
)
if h_scale > 1:
video_cmd.extend(['-vf', 'scale={height}'.format(height=scale)])
if core.VBITRATE:
video_cmd.extend(['-b:v', str(core.VBITRATE)])
if core.VPRESET:
video_cmd.extend(['-preset', core.VPRESET])
if core.VCRF:
video_cmd.extend(['-crf', str(core.VCRF)])
if core.VLEVEL:
video_cmd.extend(['-level', str(core.VLEVEL)])
no_copy = ['-vf', '-r', '-crf', '-level', '-preset', '-b:v']
if video_cmd[1] == 'copy' and any(i in video_cmd for i in no_copy):
video_cmd[1] = core.VCODEC
if core.VCODEC == 'copy': # force copy. therefore ignore all other video transcoding.
video_cmd = ['-c:v', 'copy']
map_cmd.extend(['-map', '0:{index}'.format(index=video["index"])])
break # Only one video needed
used_audio = 0
a_mapped = []
commentary = []
if audioStreams:
for i, val in reversed(list(enumerate(audioStreams))):
try:
if "Commentary" in val.get("tags").get("title"): # Split out commentry tracks.
commentary.append(val)
del audioStreams[i]
except:
continue
try:
audio1 = [item for item in audioStreams if item["tags"]["language"] == core.ALANGUAGE]
except: # no language tags. Assume only 1 language.
audio1 = audioStreams
try:
audio2 = [item for item in audio1 if item["codec_name"] in core.ACODEC_ALLOW]
except:
audio2 = []
try:
audio3 = [item for item in audioStreams if item["tags"]["language"] != core.ALANGUAGE]
except:
audio3 = []
try:
audio4 = [item for item in audio3 if item["codec_name"] in core.ACODEC_ALLOW]
except:
audio4 = []
if audio2: # right (or only) language and codec...
map_cmd.extend(['-map', '0:{index}'.format(index=audio2[0]["index"])])
a_mapped.extend([audio2[0]["index"]])
bitrate = int(float(audio2[0].get("bit_rate", 0))) / 1000
channels = int(float(audio2[0].get("channels", 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio1: # right (or only) language, wrong codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio1[0]["index"])])
a_mapped.extend([audio1[0]["index"]])
bitrate = int(float(audio1[0].get("bit_rate", 0))) / 1000
channels = int(float(audio1[0].get("channels", 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), core.ACODEC if core.ACODEC else 'copy'])
elif audio4: # wrong language, right codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio4[0]["index"])])
a_mapped.extend([audio4[0]["index"]])
bitrate = int(float(audio4[0].get("bit_rate", 0))) / 1000
channels = int(float(audio4[0].get("channels", 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio3: # wrong language, wrong codec. just pick the default audio track
map_cmd.extend(['-map', '0:{index}'.format(index=audio3[0]["index"])])
a_mapped.extend([audio3[0]["index"]])
bitrate = int(float(audio3[0].get("bit_rate", 0))) / 1000
channels = int(float(audio3[0].get("channels", 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), core.ACODEC if core.ACODEC else 'copy'])
if core.ACHANNELS and channels and channels > core.ACHANNELS:
audio_cmd.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if core.ABITRATE and not (core.ABITRATE * 0.9 < bitrate < core.ABITRATE * 1.1):
audio_cmd.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if core.OUTPUTQUALITYPERCENT:
audio_cmd.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if audio_cmd[1] in ['aac', 'dts']:
audio_cmd[2:2] = ['-strict', '-2']
if core.ACODEC2_ALLOW:
used_audio += 1
try:
audio5 = [item for item in audio1 if item["codec_name"] in core.ACODEC2_ALLOW]
except:
audio5 = []
try:
audio6 = [item for item in audio3 if item["codec_name"] in core.ACODEC2_ALLOW]
except:
audio6 = []
if audio5: # right language and codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio5[0]["index"])])
a_mapped.extend([audio5[0]["index"]])
bitrate = int(float(audio5[0].get("bit_rate", 0))) / 1000
channels = int(float(audio5[0].get("channels", 0)))
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio1: # right language wrong codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio1[0]["index"])])
a_mapped.extend([audio1[0]["index"]])
bitrate = int(float(audio1[0].get("bit_rate", 0))) / 1000
channels = int(float(audio1[0].get("channels", 0)))
if core.ACODEC2:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), core.ACODEC2])
else:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio6: # wrong language, right codec
map_cmd.extend(['-map', '0:{index}'.format(index=audio6[0]["index"])])
a_mapped.extend([audio6[0]["index"]])
bitrate = int(float(audio6[0].get("bit_rate", 0))) / 1000
channels = int(float(audio6[0].get("channels", 0)))
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
elif audio3: # wrong language, wrong codec just pick the default audio track
map_cmd.extend(['-map', '0:{index}'.format(index=audio3[0]["index"])])
a_mapped.extend([audio3[0]["index"]])
bitrate = int(float(audio3[0].get("bit_rate", 0))) / 1000
channels = int(float(audio3[0].get("channels", 0)))
if core.ACODEC2:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), core.ACODEC2])
else:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
if core.ACHANNELS2 and channels and channels > core.ACHANNELS2:
audio_cmd2.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS2)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if core.ABITRATE2 and not (core.ABITRATE2 * 0.9 < bitrate < core.ABITRATE2 * 1.1):
audio_cmd2.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE2)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if core.OUTPUTQUALITYPERCENT:
audio_cmd2.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if audio_cmd2[1] in ['aac', 'dts']:
audio_cmd2[2:2] = ['-strict', '-2']
if a_mapped[1] == a_mapped[0] and audio_cmd2[1:] == audio_cmd[1:]: #check for duplicate output track.
del map_cmd[-2:]
else:
audio_cmd.extend(audio_cmd2)
if core.AINCLUDE and core.ACODEC3:
audioStreams.extend(commentary) #add commentry tracks back here.
for audio in audioStreams:
if audio["index"] in a_mapped:
continue
used_audio += 1
map_cmd.extend(['-map', '0:{index}'.format(index=audio["index"])])
audio_cmd3 = []
bitrate = int(float(audio.get("bit_rate", 0))) / 1000
channels = int(float(audio.get("channels", 0)))
if audio["codec_name"] in core.ACODEC3_ALLOW:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), 'copy'])
else:
if core.ACODEC3:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), core.ACODEC3])
else:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), 'copy'])
if core.ACHANNELS3 and channels and channels > core.ACHANNELS3:
audio_cmd3.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS3)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if core.ABITRATE3 and not (core.ABITRATE3 * 0.9 < bitrate < core.ABITRATE3 * 1.1):
audio_cmd3.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE3)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if core.OUTPUTQUALITYPERCENT > 0:
audio_cmd3.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if audio_cmd3[1] in ['aac', 'dts']:
audio_cmd3[2:2] = ['-strict', '-2']
audio_cmd.extend(audio_cmd3)
s_mapped = []
burnt = 0
n = 0
for lan in core.SLANGUAGES:
try:
subs1 = [item for item in subStreams if item["tags"]["language"] == lan]
except:
subs1 = []
if core.BURN and not subs1 and not burnt and os.path.isfile(file):
for subfile in get_subs(file):
if lan in os.path.split(subfile)[1]:
video_cmd.extend(['-vf', 'subtitles={subs}'.format(subs=subfile)])
burnt = 1
for sub in subs1:
if core.BURN and not burnt and os.path.isfile(inputFile):
subloc = 0
for index in range(len(subStreams)):
if subStreams[index]["index"] == sub["index"]:
subloc = index
break
video_cmd.extend(['-vf', 'subtitles={sub}:si={loc}'.format(sub=inputFile, loc=subloc)])
burnt = 1
if not core.ALLOWSUBS:
break
if sub["codec_name"] in ["dvd_subtitle", "VobSub"] and core.SCODEC == "mov_text": # We can't convert these.
continue
map_cmd.extend(['-map', '0:{index}'.format(index=sub["index"])])
s_mapped.extend([sub["index"]])
if core.SINCLUDE:
for sub in subStreams:
if not core.ALLOWSUBS:
break
if sub["index"] in s_mapped:
continue
if sub["codec_name"] in ["dvd_subtitle", "VobSub"] and core.SCODEC == "mov_text": # We can't convert these.
continue
map_cmd.extend(['-map', '0:{index}'.format(index=sub["index"])])
s_mapped.extend([sub["index"]])
if core.OUTPUTFASTSTART:
other_cmd.extend(['-movflags', '+faststart'])
command = [core.FFMPEG, '-loglevel', 'warning']
if core.HWACCEL:
command.extend(['-hwaccel', 'auto'])
if core.GENERALOPTS:
command.extend(core.GENERALOPTS)
command.extend(['-i', inputFile])
if core.SEMBED and os.path.isfile(file):
for subfile in get_subs(file):
sub_details, result = getVideoDetails(subfile)
if not sub_details or not sub_details.get("streams"):
continue
if core.SCODEC == "mov_text":
subcode = [stream["codec_name"] for stream in sub_details["streams"]]
if set(subcode).intersection(["dvd_subtitle", "VobSub"]): # We can't convert these.
continue
command.extend(['-i', subfile])
lan = os.path.splitext(os.path.splitext(subfile)[0])[1][1:].split('-')[0]
metlan = None
try:
if len(lan) == 3:
metlan = Language(lan)
if len(lan) == 2:
metlan = Language.fromalpha2(lan)
except: pass
if metlan:
meta_cmd.extend(['-metadata:s:s:{x}'.format(x=len(s_mapped) + n),
'language={lang}'.format(lang=metlan.alpha3)])
n += 1
map_cmd.extend(['-map', '{x}:0'.format(x=n)])
if not core.ALLOWSUBS or (not s_mapped and not n):
sub_cmd.extend(['-sn'])
else:
if core.SCODEC:
sub_cmd.extend(['-c:s', core.SCODEC])
else:
sub_cmd.extend(['-c:s', 'copy'])
command.extend(map_cmd)
command.extend(video_cmd)
command.extend(audio_cmd)
command.extend(sub_cmd)
command.extend(meta_cmd)
command.extend(other_cmd)
command.append(newfilePath)
if platform.system() != 'Windows':
command = core.NICENESS + command
return command
def get_subs(file):
filepaths = []
subExt = ['.srt', '.sub', '.idx']
name = os.path.splitext(os.path.split(file)[1])[0]
dir = os.path.split(file)[0]
for dirname, dirs, filenames in os.walk(dir):
for filename in filenames:
filepaths.extend([os.path.join(dirname, filename)])
subfiles = [item for item in filepaths if os.path.splitext(item)[1] in subExt and name in item]
return subfiles
def extract_subs(file, newfilePath, bitbucket):
video_details, result = getVideoDetails(file)
if not video_details:
return
if core.SUBSDIR:
subdir = core.SUBSDIR
else:
subdir = os.path.split(newfilePath)[0]
name = os.path.splitext(os.path.split(newfilePath)[1])[0]
try:
subStreams = [item for item in video_details["streams"] if
item["codec_type"] == "subtitle" and item["tags"]["language"] in core.SLANGUAGES and item[
"codec_name"] != "hdmv_pgs_subtitle" and item["codec_name"] != "pgssub"]
except:
subStreams = [item for item in video_details["streams"] if
item["codec_type"] == "subtitle" and item["codec_name"] != "hdmv_pgs_subtitle" and item[
"codec_name"] != "pgssub"]
num = len(subStreams)
for n in range(num):
sub = subStreams[n]
idx = sub["index"]
lan = sub.get("tags", {}).get("language", "unk")
if num == 1:
outputFile = os.path.join(subdir, "{0}.srt".format(name))
if os.path.isfile(outputFile):
outputFile = os.path.join(subdir, "{0}.{1}.srt".format(name, n))
else:
outputFile = os.path.join(subdir, "{0}.{1}.srt".format(name, lan))
if os.path.isfile(outputFile):
outputFile = os.path.join(subdir, "{0}.{1}.{2}.srt".format(name, lan, n))
command = [core.FFMPEG, '-loglevel', 'warning', '-i', file, '-vn', '-an',
'-codec:{index}'.format(index=idx), 'srt', outputFile]
if platform.system() != 'Windows':
command = core.NICENESS + command
logger.info("Extracting {0} subtitle from: {1}".format(lan, file))
print_cmd(command)
result = 1 # set result to failed in case call fails.
try:
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
proc.communicate()
result = proc.returncode
except:
logger.error("Extracting subtitle has failed")
if result == 0:
try:
shutil.copymode(file, outputFile)
except:
pass
logger.info("Extracting {0} subtitle from {1} has succeeded".format(lan, file))
else:
logger.error("Extracting subtitles has failed")
def processList(List, newDir, bitbucket):
remList = []
newList = []
combine = []
vtsPath = None
success = True
for item in List:
ext = os.path.splitext(item)[1].lower()
if ext in ['.iso', '.bin', '.img'] and ext not in core.IGNOREEXTENSIONS:
logger.debug("Attempting to rip disk image: {0}".format(item), "TRANSCODER")
newList.extend(ripISO(item, newDir, bitbucket))
remList.append(item)
elif re.match(".+VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]", item) and '.vob' not in core.IGNOREEXTENSIONS:
logger.debug("Found VIDEO_TS image file: {0}".format(item), "TRANSCODER")
if not vtsPath:
try:
vtsPath = re.match("(.+VIDEO_TS)", item).groups()[0]
except:
vtsPath = os.path.split(item)[0]
remList.append(item)
elif re.match(".+VIDEO_TS.", item) or re.match(".+VTS_[0-9][0-9]_[0-9].", item):
remList.append(item)
elif core.CONCAT and re.match(".+[cC][dD][0-9].", item):
remList.append(item)
combine.append(item)
else:
continue
if vtsPath:
newList.extend(combineVTS(vtsPath))
if combine:
newList.extend(combineCD(combine))
for file in newList:
if isinstance(file, basestring) and 'concat:' not in file and not os.path.isfile(file):
success = False
break
if success and newList:
List.extend(newList)
for item in remList:
List.remove(item)
logger.debug("Successfully extracted .vob file {0} from disk image".format(newList[0]), "TRANSCODER")
elif newList and not success:
newList = []
remList = []
logger.error("Failed extracting .vob files from disk image. Stopping transcoding.", "TRANSCODER")
return List, remList, newList, success
def ripISO(item, newDir, bitbucket):
newFiles = []
failure_dir = 'failure'
# Mount the ISO in your OS and call combineVTS.
if not core.SEVENZIP:
logger.error("No 7zip installed. Can't extract image file {0}".format(item), "TRANSCODER")
newFiles = [failure_dir]
return newFiles
cmd = [core.SEVENZIP, 'l', item]
try:
logger.debug("Attempting to extract .vob from image file {0}".format(item), "TRANSCODER")
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
fileList = [re.match(".+(VIDEO_TS[\\\/]VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb])", line).groups()[0] for line in
out.splitlines() if re.match(".+VIDEO_TS[\\\/]VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]", line)]
combined = []
for n in range(99):
concat = []
m = 1
while True:
vtsName = 'VIDEO_TS{0}VTS_{1:02d}_{2:d}.VOB'.format(os.sep, n + 1, m)
if vtsName in fileList:
concat.append(vtsName)
m += 1
else:
break
if not concat:
break
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=os.path.splitext(os.path.split(item)[1])[0], x=n + 1
)
newFiles.append({item: {'name': name, 'files': concat}})
if core.CONCAT:
name = os.path.splitext(os.path.split(item)[1])[0]
newFiles.append({item: {'name': name, 'files': combined}})
if not newFiles:
logger.error("No VIDEO_TS folder found in image file {0}".format(item), "TRANSCODER")
newFiles = [failure_dir]
except:
logger.error("Failed to extract from image file {0}".format(item), "TRANSCODER")
newFiles = [failure_dir]
return newFiles
def combineVTS(vtsPath):
newFiles = []
combined = ''
for n in range(99):
concat = ''
m = 1
while True:
vtsName = 'VTS_{0:02d}_{1:d}.VOB'.format(n + 1, m)
if os.path.isfile(os.path.join(vtsPath, vtsName)):
concat += '{file}|'.format(file=os.path.join(vtsPath, vtsName))
m += 1
else:
break
if not concat:
break
if core.CONCAT:
combined += '{files}|'.format(files=concat)
continue
newFiles.append('concat:{0}'.format(concat[:-1]))
if core.CONCAT:
newFiles.append('concat:{0}'.format(combined[:-1]))
return newFiles
def combineCD(combine):
newFiles = []
for item in set([re.match("(.+)[cC][dD][0-9].", item).groups()[0] for item in combine]):
concat = ''
for n in range(99):
files = [file for file in combine if
n + 1 == int(re.match(".+[cC][dD]([0-9]+).", file).groups()[0]) and item in file]
if files:
concat += '{file}|'.format(file=files[0])
else:
break
if concat:
newFiles.append('concat:{0}'.format(concat[:-1]))
return newFiles
def print_cmd(command):
cmd = ""
for item in command:
cmd = "{cmd} {item}".format(cmd=cmd, item=item)
logger.debug("calling command:{0}".format(cmd))
def Transcode_directory(dirName):
if not core.FFMPEG:
return 1, dirName
logger.info("Checking for files to be transcoded")
final_result = 0 # initialize as successful
if core.OUTPUTVIDEOPATH:
newDir = core.OUTPUTVIDEOPATH
makeDir(newDir)
name = os.path.splitext(os.path.split(dirName)[1])[0]
newDir = os.path.join(newDir, name)
makeDir(newDir)
else:
newDir = dirName
if platform.system() == 'Windows':
bitbucket = open('NUL')
else:
bitbucket = open('/dev/null')
movieName = os.path.splitext(os.path.split(dirName)[1])[0]
List = core.listMediaFiles(dirName, media=True, audio=False, meta=False, archives=False)
List, remList, newList, success = processList(List, newDir, bitbucket)
if not success:
bitbucket.close()
return 1, dirName
for file in List:
if isinstance(file, basestring) and os.path.splitext(file)[1] in core.IGNOREEXTENSIONS:
continue
command = buildCommands(file, newDir, movieName, bitbucket)
newfilePath = command[-1]
# transcoding files may remove the original file, so make sure to extract subtitles first
if core.SEXTRACT and isinstance(file, basestring):
extract_subs(file, newfilePath, bitbucket)
try: # Try to remove the file that we're transcoding to just in case. (ffmpeg will return an error if it already exists for some reason)
os.remove(newfilePath)
except OSError as e:
if e.errno != errno.ENOENT: # Ignore the error if it's just telling us that the file doesn't exist
logger.debug("Error when removing transcoding target: {0}".format(e))
except Exception as e:
logger.debug("Error when removing transcoding target: {0}".format(e))
logger.info("Transcoding video: {0}".format(newfilePath))
print_cmd(command)
result = 1 # set result to failed in case call fails.
try:
if isinstance(file, basestring):
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
else:
img, data = next(iteritems(file))  # use next(); the .next() method is Python 2 only
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket, stdin=subprocess.PIPE)
for vob in data['files']:
procin = zip_out(vob, img, bitbucket)
if procin:
shutil.copyfileobj(procin.stdout, proc.stdin)
procin.stdout.close()
proc.communicate()
result = proc.returncode
except:
logger.error("Transcoding of video {0} has failed".format(newfilePath))
if core.SUBSDIR and result == 0 and isinstance(file, basestring):
for sub in get_subs(file):
name = os.path.splitext(os.path.split(file)[1])[0]
subname = os.path.split(sub)[1]
newname = os.path.splitext(os.path.split(newfilePath)[1])[0]
newpath = os.path.join(core.SUBSDIR, subname.replace(name, newname))
if not os.path.isfile(newpath):
os.rename(sub, newpath)
if result == 0:
try:
shutil.copymode(file, newfilePath)
except:
pass
logger.info("Transcoding of video to {0} succeeded".format(newfilePath))
if os.path.isfile(newfilePath) and (file in newList or not core.DUPLICATE):
try:
os.unlink(file)
except:
pass
else:
logger.error("Transcoding of video to {0} failed with result {1}".format(newfilePath, result))
# this will be 0 (successful) if all are successful, else will return a positive integer for failure.
final_result = final_result + result
if final_result == 0 and not core.DUPLICATE:
for file in remList:
try:
os.unlink(file)
except:
pass
if not os.listdir(unicode(newDir)): # this is an empty directory and we didn't transcode into it.
os.rmdir(newDir)
newDir = dirName
if not core.PROCESSOUTPUT and core.DUPLICATE: # We postprocess the original files to CP/SB
newDir = dirName
bitbucket.close()
return final_result, newDir

core/transmissionrpc/__init__.py (deleted)
@@ -1,18 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2008-2013 Erik Svensson <erik.public@gmail.com>
# Licensed under the MIT license.
from core.transmissionrpc.constants import DEFAULT_PORT, DEFAULT_TIMEOUT, PRIORITY, RATIO_LIMIT, LOGGER
from core.transmissionrpc.error import TransmissionError, HTTPHandlerError
from core.transmissionrpc.httphandler import HTTPHandler, DefaultHTTPHandler
from core.transmissionrpc.torrent import Torrent
from core.transmissionrpc.session import Session
from core.transmissionrpc.client import Client
from core.transmissionrpc.utils import add_stdout_logger, add_file_logger
__author__ = 'Erik Svensson <erik.public@gmail.com>'
__version_major__ = 0
__version_minor__ = 11
__version__ = '{0}.{1}'.format(__version_major__, __version_minor__)
__copyright__ = 'Copyright (c) 2008-2013 Erik Svensson'
__license__ = 'MIT'

core/transmissionrpc/constants.py (deleted)
@@ -1,328 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2008-2013 Erik Svensson <erik.public@gmail.com>
# Licensed under the MIT license.
import logging
from core.transmissionrpc.six import iteritems
LOGGER = logging.getLogger('transmissionrpc')
LOGGER.setLevel(logging.ERROR)
def mirror_dict(source):
"""
Creates a dictionary with all values as keys and all keys as values.
"""
source.update(dict((value, key) for key, value in iteritems(source)))
return source
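# Illustrative: mirror_dict({'low': -1}) returns {'low': -1, -1: 'low'}, letting the
# maps below be indexed by name ('low') or by numeric value (-1).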
DEFAULT_PORT = 9091
DEFAULT_TIMEOUT = 30.0
TR_PRI_LOW = -1
TR_PRI_NORMAL = 0
TR_PRI_HIGH = 1
PRIORITY = mirror_dict({
'low': TR_PRI_LOW,
'normal': TR_PRI_NORMAL,
'high': TR_PRI_HIGH
})
TR_RATIOLIMIT_GLOBAL = 0 # follow the global settings
TR_RATIOLIMIT_SINGLE = 1 # override the global settings, seeding until a certain ratio
TR_RATIOLIMIT_UNLIMITED = 2 # override the global settings, seeding regardless of ratio
RATIO_LIMIT = mirror_dict({
'global': TR_RATIOLIMIT_GLOBAL,
'single': TR_RATIOLIMIT_SINGLE,
'unlimited': TR_RATIOLIMIT_UNLIMITED
})
TR_IDLELIMIT_GLOBAL = 0 # follow the global settings
TR_IDLELIMIT_SINGLE = 1 # override the global settings, seeding until a certain idle time
TR_IDLELIMIT_UNLIMITED = 2 # override the global settings, seeding regardless of activity
IDLE_LIMIT = mirror_dict({
'global': TR_IDLELIMIT_GLOBAL,
'single': TR_IDLELIMIT_SINGLE,
'unlimited': TR_IDLELIMIT_UNLIMITED
})
# A note on argument maps
# These maps are used to verify *-set methods. The information is structured in
# a tree.
# set +- <argument1> - [<type>, <added version>, <removed version>, <previous argument name>, <next argument name>, <description>]
# | +- <argument2> - [<type>, <added version>, <removed version>, <previous argument name>, <next argument name>, <description>]
# |
# get +- <argument1> - [<type>, <added version>, <removed version>, <previous argument name>, <next argument name>, <description>]
# +- <argument2> - [<type>, <added version>, <removed version>, <previous argument name>, <next argument name>, <description>]
# Arguments for torrent methods
TORRENT_ARGS = {
'get': {
'activityDate': ('number', 1, None, None, None, 'Last time of upload or download activity.'),
'addedDate': ('number', 1, None, None, None, 'The date when this torrent was first added.'),
'announceResponse': ('string', 1, 7, None, None, 'The announce message from the tracker.'),
'announceURL': ('string', 1, 7, None, None, 'Current announce URL.'),
'bandwidthPriority': ('number', 5, None, None, None, 'Bandwidth priority. Low (-1), Normal (0) or High (1).'),
'comment': ('string', 1, None, None, None, 'Torrent comment.'),
'corruptEver': ('number', 1, None, None, None, 'Number of bytes of corrupt data downloaded.'),
'creator': ('string', 1, None, None, None, 'Torrent creator.'),
'dateCreated': ('number', 1, None, None, None, 'Torrent creation date.'),
'desiredAvailable': ('number', 1, None, None, None, 'Number of bytes available and left to be downloaded.'),
'doneDate': ('number', 1, None, None, None, 'The date when the torrent finished downloading.'),
'downloadDir': ('string', 4, None, None, None, 'The directory path where the torrent is downloaded to.'),
'downloadedEver': ('number', 1, None, None, None, 'Number of bytes of good data downloaded.'),
'downloaders': ('number', 4, 7, None, None, 'Number of downloaders.'),
'downloadLimit': ('number', 1, None, None, None, 'Download limit in Kbps.'),
'downloadLimited': ('boolean', 5, None, None, None, 'Download limit is enabled'),
'downloadLimitMode': (
'number', 1, 5, None, None, 'Download limit mode. 0 means global, 1 means single, 2 means unlimited.'),
'error': ('number', 1, None, None, None,
'Kind of error. 0 means OK, 1 means tracker warning, 2 means tracker error, 3 means local error.'),
'errorString': ('number', 1, None, None, None, 'Error message.'),
'eta': ('number', 1, None, None, None,
'Estimated number of seconds left when downloading or seeding. -1 means not available and -2 means unknown.'),
'etaIdle': ('number', 15, None, None, None,
'Estimated number of seconds left until the idle time limit is reached. -1 means not available and -2 means unknown.'),
'files': (
'array', 1, None, None, None, 'Array of file object containing key, bytesCompleted, length and name.'),
'fileStats': (
'array', 5, None, None, None, 'Array of file statistics containing bytesCompleted, wanted and priority.'),
'hashString': ('string', 1, None, None, None, 'Hashstring unique for the torrent even between sessions.'),
'haveUnchecked': ('number', 1, None, None, None, 'Number of bytes of partial pieces.'),
'haveValid': ('number', 1, None, None, None, 'Number of bytes of checksum verified data.'),
'honorsSessionLimits': ('boolean', 5, None, None, None, 'True if session upload limits are honored'),
'id': ('number', 1, None, None, None, 'Session unique torrent id.'),
'isFinished': ('boolean', 9, None, None, None, 'True if the torrent is finished. Downloaded and seeded.'),
'isPrivate': ('boolean', 1, None, None, None, 'True if the torrent is private.'),
'isStalled': ('boolean', 14, None, None, None, 'True if the torrent has stalled (been idle for a long time).'),
'lastAnnounceTime': ('number', 1, 7, None, None, 'The time of the last announcement.'),
'lastScrapeTime': ('number', 1, 7, None, None, 'The time of the last successful scrape.'),
'leechers': ('number', 1, 7, None, None, 'Number of leechers.'),
'leftUntilDone': ('number', 1, None, None, None, 'Number of bytes left until the download is done.'),
'magnetLink': ('string', 7, None, None, None, 'The magnet link for this torrent.'),
'manualAnnounceTime': ('number', 1, None, None, None, 'The time until you manually ask for more peers.'),
'maxConnectedPeers': ('number', 1, None, None, None, 'Maximum of connected peers.'),
'metadataPercentComplete': ('number', 7, None, None, None, 'Download progress of metadata. 0.0 to 1.0.'),
'name': ('string', 1, None, None, None, 'Torrent name.'),
'nextAnnounceTime': ('number', 1, 7, None, None, 'Next announce time.'),
'nextScrapeTime': ('number', 1, 7, None, None, 'Next scrape time.'),
'peer-limit': ('number', 5, None, None, None, 'Maximum number of peers.'),
'peers': ('array', 2, None, None, None, 'Array of peer objects.'),
'peersConnected': ('number', 1, None, None, None, 'Number of peers we are connected to.'),
'peersFrom': (
'object', 1, None, None, None, 'Object containing download peers counts for different peer types.'),
'peersGettingFromUs': ('number', 1, None, None, None, 'Number of peers we are sending data to.'),
'peersKnown': ('number', 1, 13, None, None, 'Number of peers that the tracker knows.'),
'peersSendingToUs': ('number', 1, None, None, None, 'Number of peers sending to us'),
'percentDone': ('double', 5, None, None, None, 'Download progress of selected files. 0.0 to 1.0.'),
'pieces': ('string', 5, None, None, None, 'String with base64 encoded bitfield indicating finished pieces.'),
'pieceCount': ('number', 1, None, None, None, 'Number of pieces.'),
'pieceSize': ('number', 1, None, None, None, 'Number of bytes in a piece.'),
'priorities': ('array', 1, None, None, None, 'Array of file priorities.'),
'queuePosition': ('number', 14, None, None, None, 'The queue position.'),
'rateDownload': ('number', 1, None, None, None, 'Download rate in bps.'),
'rateUpload': ('number', 1, None, None, None, 'Upload rate in bps.'),
'recheckProgress': ('double', 1, None, None, None, 'Progress of recheck. 0.0 to 1.0.'),
'secondsDownloading': ('number', 15, None, None, None, ''),
'secondsSeeding': ('number', 15, None, None, None, ''),
'scrapeResponse': ('string', 1, 7, None, None, 'Scrape response message.'),
'scrapeURL': ('string', 1, 7, None, None, 'Current scrape URL'),
'seeders': ('number', 1, 7, None, None, 'Number of seeders reported by the tracker.'),
'seedIdleLimit': ('number', 10, None, None, None, 'Idle limit in minutes.'),
'seedIdleMode': ('number', 10, None, None, None, 'Use global (0), torrent (1), or unlimited (2) limit.'),
'seedRatioLimit': ('double', 5, None, None, None, 'Seed ratio limit.'),
'seedRatioMode': ('number', 5, None, None, None, 'Use global (0), torrent (1), or unlimited (2) limit.'),
'sizeWhenDone': ('number', 1, None, None, None, 'Size of the torrent download in bytes.'),
'startDate': ('number', 1, None, None, None, 'The date when the torrent was last started.'),
'status': ('number', 1, None, None, None, 'Current status, see source'),
'swarmSpeed': ('number', 1, 7, None, None, 'Estimated speed in Kbps in the swarm.'),
'timesCompleted': ('number', 1, 7, None, None, 'Number of successful downloads reported by the tracker.'),
'trackers': ('array', 1, None, None, None, 'Array of tracker objects.'),
'trackerStats': ('object', 7, None, None, None, 'Array of object containing tracker statistics.'),
'totalSize': ('number', 1, None, None, None, 'Total size of the torrent in bytes'),
'torrentFile': ('string', 5, None, None, None, 'Path to .torrent file.'),
'uploadedEver': ('number', 1, None, None, None, 'Number of bytes uploaded, ever.'),
'uploadLimit': ('number', 1, None, None, None, 'Upload limit in Kbps'),
'uploadLimitMode': (
'number', 1, 5, None, None, 'Upload limit mode. 0 means global, 1 means single, 2 means unlimited.'),
'uploadLimited': ('boolean', 5, None, None, None, 'Upload limit enabled.'),
'uploadRatio': ('double', 1, None, None, None, 'Seed ratio.'),
'wanted': ('array', 1, None, None, None, 'Array of booleans indicating wanted files.'),
'webseeds': ('array', 1, None, None, None, 'Array of webseeds objects'),
'webseedsSendingToUs': ('number', 1, None, None, None, 'Number of webseeds seeding to us.'),
},
'set': {
'bandwidthPriority': ('number', 5, None, None, None, 'Priority for this transfer.'),
'downloadLimit': ('number', 5, None, 'speed-limit-down', None, 'Set the speed limit for download in Kib/s.'),
'downloadLimited': ('boolean', 5, None, 'speed-limit-down-enabled', None, 'Enable download speed limiter.'),
'files-wanted': ('array', 1, None, None, None, "A list of file id's that should be downloaded."),
'files-unwanted': ('array', 1, None, None, None, "A list of file id's that shouldn't be downloaded."),
'honorsSessionLimits': ('boolean', 5, None, None, None,
"Enables or disables the transfer to honour the upload limit set in the session."),
'location': ('array', 1, None, None, None, 'Local download location.'),
'peer-limit': ('number', 1, None, None, None, 'The peer limit for the torrents.'),
'priority-high': ('array', 1, None, None, None, "A list of file id's that should have high priority."),
'priority-low': ('array', 1, None, None, None, "A list of file id's that should have low priority."),
'priority-normal': ('array', 1, None, None, None, "A list of file id's that should have normal priority."),
'queuePosition': ('number', 14, None, None, None, 'Position of this transfer in its queue.'),
'seedIdleLimit': ('number', 10, None, None, None, 'Seed inactivity limit in minutes.'),
'seedIdleMode': ('number', 10, None, None, None,
'Seed inactivity mode. 0 = Use session limit, 1 = Use transfer limit, 2 = Disable limit.'),
'seedRatioLimit': ('double', 5, None, None, None, 'Seeding ratio.'),
'seedRatioMode': ('number', 5, None, None, None,
'Which ratio to use. 0 = Use session limit, 1 = Use transfer limit, 2 = Disable limit.'),
'speed-limit-down': ('number', 1, 5, None, 'downloadLimit', 'Set the speed limit for download in Kib/s.'),
'speed-limit-down-enabled': ('boolean', 1, 5, None, 'downloadLimited', 'Enable download speed limiter.'),
'speed-limit-up': ('number', 1, 5, None, 'uploadLimit', 'Set the speed limit for upload in Kib/s.'),
'speed-limit-up-enabled': ('boolean', 1, 5, None, 'uploadLimited', 'Enable upload speed limiter.'),
'trackerAdd': ('array', 10, None, None, None, 'Array of string with announce URLs to add.'),
'trackerRemove': ('array', 10, None, None, None, 'Array of ids of trackers to remove.'),
'trackerReplace': (
'array', 10, None, None, None, 'Array of (id, url) tuples where the announce URL should be replaced.'),
'uploadLimit': ('number', 5, None, 'speed-limit-up', None, 'Set the speed limit for upload in Kib/s.'),
'uploadLimited': ('boolean', 5, None, 'speed-limit-up-enabled', None, 'Enable upload speed limiter.'),
},
'add': {
'bandwidthPriority': ('number', 8, None, None, None, 'Priority for this transfer.'),
'download-dir': (
'string', 1, None, None, None, 'The directory where the downloaded contents will be saved in.'),
'cookies': ('string', 13, None, None, None, 'One or more HTTP cookie(s).'),
'filename': ('string', 1, None, None, None, "A file path or URL to a torrent file or a magnet link."),
'files-wanted': ('array', 1, None, None, None, "A list of file id's that should be downloaded."),
'files-unwanted': ('array', 1, None, None, None, "A list of file id's that shouldn't be downloaded."),
'metainfo': ('string', 1, None, None, None, 'The content of a torrent file, base64 encoded.'),
'paused': ('boolean', 1, None, None, None, 'If True, does not start the transfer when added.'),
'peer-limit': ('number', 1, None, None, None, 'Maximum number of peers allowed.'),
'priority-high': ('array', 1, None, None, None, "A list of file id's that should have high priority."),
'priority-low': ('array', 1, None, None, None, "A list of file id's that should have low priority."),
'priority-normal': ('array', 1, None, None, None, "A list of file id's that should have normal priority."),
}
}
# Arguments for session methods
SESSION_ARGS = {
'get': {
"alt-speed-down": ('number', 5, None, None, None, 'Alternate session download speed limit (in Kib/s).'),
"alt-speed-enabled": (
'boolean', 5, None, None, None, 'True if alternate global download speed limiter is enabled.'),
"alt-speed-time-begin": (
'number', 5, None, None, None, 'Time when alternate speeds should be enabled. Minutes after midnight.'),
"alt-speed-time-enabled": ('boolean', 5, None, None, None, 'True if alternate speeds scheduling is enabled.'),
"alt-speed-time-end": (
'number', 5, None, None, None, 'Time when alternate speeds should be disabled. Minutes after midnight.'),
"alt-speed-time-day": ('number', 5, None, None, None, 'Days alternate speeds scheduling is enabled.'),
"alt-speed-up": ('number', 5, None, None, None, 'Alternate session upload speed limit (in Kib/s)'),
"blocklist-enabled": ('boolean', 5, None, None, None, 'True when blocklist is enabled.'),
"blocklist-size": ('number', 5, None, None, None, 'Number of rules in the blocklist'),
"blocklist-url": ('string', 11, None, None, None, 'Location of the block list. Updated with blocklist-update.'),
"cache-size-mb": ('number', 10, None, None, None, 'The maximum size of the disk cache in MB'),
"config-dir": ('string', 8, None, None, None, 'location of transmissions configuration directory'),
"dht-enabled": ('boolean', 6, None, None, None, 'True if DHT enabled.'),
"download-dir": ('string', 1, None, None, None, 'The download directory.'),
"download-dir-free-space": ('number', 12, None, None, None, 'Free space in the download directory, in bytes'),
"download-queue-size": ('number', 14, None, None, None, 'Number of slots in the download queue.'),
"download-queue-enabled": ('boolean', 14, None, None, None, 'True if the download queue is enabled.'),
"encryption": (
'string', 1, None, None, None, 'Encryption mode, one of ``required``, ``preferred`` or ``tolerated``.'),
"idle-seeding-limit": ('number', 10, None, None, None, 'Seed inactivity limit in minutes.'),
"idle-seeding-limit-enabled": ('boolean', 10, None, None, None, 'True if the seed activity limit is enabled.'),
"incomplete-dir": (
'string', 7, None, None, None, 'The path to the directory for incomplete torrent transfer data.'),
"incomplete-dir-enabled": ('boolean', 7, None, None, None, 'True if the incomplete dir is enabled.'),
"lpd-enabled": ('boolean', 9, None, None, None, 'True if local peer discovery is enabled.'),
"peer-limit": ('number', 1, 5, None, 'peer-limit-global', 'Maximum number of peers.'),
"peer-limit-global": ('number', 5, None, 'peer-limit', None, 'Maximum number of peers.'),
"peer-limit-per-torrent": ('number', 5, None, None, None, 'Maximum number of peers per transfer.'),
"pex-allowed": ('boolean', 1, 5, None, 'pex-enabled', 'True if PEX is allowed.'),
"pex-enabled": ('boolean', 5, None, 'pex-allowed', None, 'True if PEX is enabled.'),
"port": ('number', 1, 5, None, 'peer-port', 'Peer port.'),
"peer-port": ('number', 5, None, 'port', None, 'Peer port.'),
"peer-port-random-on-start": (
'boolean', 5, None, None, None, 'Enables randomized peer port on start of Transmission.'),
"port-forwarding-enabled": ('boolean', 1, None, None, None, 'True if port forwarding is enabled.'),
"queue-stalled-minutes": (
'number', 14, None, None, None, 'Number of minutes of idle that marks a transfer as stalled.'),
"queue-stalled-enabled": ('boolean', 14, None, None, None, 'True if stalled tracking of transfers is enabled.'),
"rename-partial-files": ('boolean', 8, None, None, None, 'True if ".part" is appended to incomplete files'),
"rpc-version": ('number', 4, None, None, None, 'Transmission RPC API Version.'),
"rpc-version-minimum": ('number', 4, None, None, None, 'Minimum accepted RPC API Version.'),
"script-torrent-done-enabled": ('boolean', 9, None, None, None, 'True if the done script is enabled.'),
"script-torrent-done-filename": (
'string', 9, None, None, None, 'Filename of the script to run when the transfer is done.'),
"seedRatioLimit": ('double', 5, None, None, None, 'Seed ratio limit. 1.0 means 1:1 download and upload ratio.'),
"seedRatioLimited": ('boolean', 5, None, None, None, 'True if seed ration limit is enabled.'),
"seed-queue-size": ('number', 14, None, None, None, 'Number of slots in the upload queue.'),
"seed-queue-enabled": ('boolean', 14, None, None, None, 'True if upload queue is enabled.'),
"speed-limit-down": ('number', 1, None, None, None, 'Download speed limit (in Kib/s).'),
"speed-limit-down-enabled": ('boolean', 1, None, None, None, 'True if the download speed is limited.'),
"speed-limit-up": ('number', 1, None, None, None, 'Upload speed limit (in Kib/s).'),
"speed-limit-up-enabled": ('boolean', 1, None, None, None, 'True if the upload speed is limited.'),
"start-added-torrents": ('boolean', 9, None, None, None, 'When true uploaded torrents will start right away.'),
"trash-original-torrent-files": (
'boolean', 9, None, None, None, 'When true added .torrent files will be deleted.'),
'units': ('object', 10, None, None, None, 'An object containing units for size and speed.'),
'utp-enabled': ('boolean', 13, None, None, None, 'True if Micro Transport Protocol (UTP) is enabled.'),
"version": ('string', 3, None, None, None, 'Transmission version.'),
},
'set': {
"alt-speed-down": ('number', 5, None, None, None, 'Alternate session download speed limit (in Kib/s).'),
"alt-speed-enabled": ('boolean', 5, None, None, None, 'Enables alternate global download speed limiter.'),
"alt-speed-time-begin": (
'number', 5, None, None, None, 'Time when alternate speeds should be enabled. Minutes after midnight.'),
"alt-speed-time-enabled": ('boolean', 5, None, None, None, 'Enables alternate speeds scheduling.'),
"alt-speed-time-end": (
'number', 5, None, None, None, 'Time when alternate speeds should be disabled. Minutes after midnight.'),
"alt-speed-time-day": ('number', 5, None, None, None, 'Enables alternate speeds scheduling these days.'),
"alt-speed-up": ('number', 5, None, None, None, 'Alternate session upload speed limit (in Kib/s).'),
"blocklist-enabled": ('boolean', 5, None, None, None, 'Enables the block list'),
"blocklist-url": ('string', 11, None, None, None, 'Location of the block list. Updated with blocklist-update.'),
"cache-size-mb": ('number', 10, None, None, None, 'The maximum size of the disk cache in MB'),
"dht-enabled": ('boolean', 6, None, None, None, 'Enables DHT.'),
"download-dir": ('string', 1, None, None, None, 'Set the session download directory.'),
"download-queue-size": ('number', 14, None, None, None, 'Number of slots in the download queue.'),
"download-queue-enabled": ('boolean', 14, None, None, None, 'Enables download queue.'),
"encryption": ('string', 1, None, None, None,
'Set the session encryption mode, one of ``required``, ``preferred`` or ``tolerated``.'),
"idle-seeding-limit": ('number', 10, None, None, None, 'The default seed inactivity limit in minutes.'),
"idle-seeding-limit-enabled": ('boolean', 10, None, None, None, 'Enables the default seed inactivity limit'),
"incomplete-dir": ('string', 7, None, None, None, 'The path to the directory of incomplete transfer data.'),
"incomplete-dir-enabled": ('boolean', 7, None, None, None,
'Enables the incomplete transfer data directory. Otherwise data for incomplete transfers are stored in the download target.'),
"lpd-enabled": ('boolean', 9, None, None, None, 'Enables local peer discovery for public torrents.'),
"peer-limit": ('number', 1, 5, None, 'peer-limit-global', 'Maximum number of peers.'),
"peer-limit-global": ('number', 5, None, 'peer-limit', None, 'Maximum number of peers.'),
"peer-limit-per-torrent": ('number', 5, None, None, None, 'Maximum number of peers per transfer.'),
"pex-allowed": ('boolean', 1, 5, None, 'pex-enabled', 'Allowing PEX in public torrents.'),
"pex-enabled": ('boolean', 5, None, 'pex-allowed', None, 'Allowing PEX in public torrents.'),
"port": ('number', 1, 5, None, 'peer-port', 'Peer port.'),
"peer-port": ('number', 5, None, 'port', None, 'Peer port.'),
"peer-port-random-on-start": (
'boolean', 5, None, None, None, 'Enables randomized peer port on start of Transmission.'),
"port-forwarding-enabled": ('boolean', 1, None, None, None, 'Enables port forwarding.'),
"rename-partial-files": ('boolean', 8, None, None, None, 'Appends ".part" to incomplete files'),
"queue-stalled-minutes": (
'number', 14, None, None, None, 'Number of minutes of idle that marks a transfer as stalled.'),
"queue-stalled-enabled": ('boolean', 14, None, None, None, 'Enable tracking of stalled transfers.'),
"script-torrent-done-enabled": ('boolean', 9, None, None, None, 'Whether or not to call the "done" script.'),
"script-torrent-done-filename": (
'string', 9, None, None, None, 'Filename of the script to run when the transfer is done.'),
"seed-queue-size": ('number', 14, None, None, None, 'Number of slots in the upload queue.'),
"seed-queue-enabled": ('boolean', 14, None, None, None, 'Enables upload queue.'),
"seedRatioLimit": ('double', 5, None, None, None, 'Seed ratio limit. 1.0 means 1:1 download and upload ratio.'),
"seedRatioLimited": ('boolean', 5, None, None, None, 'Enables seed ration limit.'),
"speed-limit-down": ('number', 1, None, None, None, 'Download speed limit (in Kib/s).'),
"speed-limit-down-enabled": ('boolean', 1, None, None, None, 'Enables download speed limiting.'),
"speed-limit-up": ('number', 1, None, None, None, 'Upload speed limit (in Kib/s).'),
"speed-limit-up-enabled": ('boolean', 1, None, None, None, 'Enables upload speed limiting.'),
"start-added-torrents": ('boolean', 9, None, None, None, 'Added torrents will be started right away.'),
"trash-original-torrent-files": (
'boolean', 9, None, None, None, 'The .torrent file of added torrents will be deleted.'),
'utp-enabled': ('boolean', 13, None, None, None, 'Enables Micro Transport Protocol (UTP).'),
},
}

core/user_scripts.py (new file)
@@ -0,0 +1,142 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
from subprocess import Popen
import core
from core import logger, transcoder
from core.plugins.subtitles import import_subs
from core.utils import list_media_files, remove_dir
from core.auto_process.common import (
ProcessResult,
)
def external_script(output_destination, torrent_name, torrent_label, settings):
final_result = 0 # start at 0.
num_files = 0
core.USER_SCRIPT_MEDIAEXTENSIONS = settings.get('user_script_mediaExtensions', '')
try:
if isinstance(core.USER_SCRIPT_MEDIAEXTENSIONS, str):
core.USER_SCRIPT_MEDIAEXTENSIONS = core.USER_SCRIPT_MEDIAEXTENSIONS.lower().split(',')
except Exception:
logger.error('user_script_mediaExtensions could not be set', 'USERSCRIPT')
core.USER_SCRIPT_MEDIAEXTENSIONS = []
core.USER_SCRIPT = settings.get('user_script_path', '')
if not core.USER_SCRIPT or core.USER_SCRIPT == 'None':
# do nothing and return success. This allows the user the option to link files only and not run a script.
return ProcessResult(
status_code=0,
message='No user script defined',
)
core.USER_SCRIPT_PARAM = settings.get('user_script_param', '')
try:
if isinstance(core.USER_SCRIPT_PARAM, str):
core.USER_SCRIPT_PARAM = core.USER_SCRIPT_PARAM.split(',')
except Exception:
logger.error('user_script_params could not be set', 'USERSCRIPT')
core.USER_SCRIPT_PARAM = []
core.USER_SCRIPT_SUCCESSCODES = settings.get('user_script_successCodes', '0')  # default to '0' so the split below yields ['0']
try:
if isinstance(core.USER_SCRIPT_SUCCESSCODES, str):
core.USER_SCRIPT_SUCCESSCODES = core.USER_SCRIPT_SUCCESSCODES.split(',')
except Exception:
logger.error('user_script_successCodes could not be set', 'USERSCRIPT')
core.USER_SCRIPT_SUCCESSCODES = ['0']  # fall back to a list so the membership test below still works
core.USER_SCRIPT_CLEAN = int(settings.get('user_script_clean', 1))
core.USER_SCRIPT_RUNONCE = int(settings.get('user_script_runOnce', 1))
if core.CHECK_MEDIA:
for video in list_media_files(output_destination, media=True, audio=False, meta=False, archives=False):
if transcoder.is_video_good(video, 0):
import_subs(video)
else:
logger.info('Corrupt video file found {0}. Deleting.'.format(video), 'USERSCRIPT')
os.unlink(video)
for dirpath, _, filenames in os.walk(output_destination):
for file in filenames:
file_path = core.os.path.join(dirpath, file)
file_name, file_extension = os.path.splitext(file)
logger.debug('Checking file {0} to see if this should be processed.'.format(file), 'USERSCRIPT')
if file_extension in core.USER_SCRIPT_MEDIAEXTENSIONS or 'all' in core.USER_SCRIPT_MEDIAEXTENSIONS:
num_files += 1
if core.USER_SCRIPT_RUNONCE == 1 and num_files > 1: # we have already run once, so just continue to get number of files.
continue
command = [core.USER_SCRIPT]
for param in core.USER_SCRIPT_PARAM:
if param == 'FN':
command.append('{0}'.format(file))
continue
elif param == 'FP':
command.append('{0}'.format(file_path))
continue
elif param == 'TN':
command.append('{0}'.format(torrent_name))
continue
elif param == 'TL':
command.append('{0}'.format(torrent_label))
continue
elif param == 'DN':
if core.USER_SCRIPT_RUNONCE == 1:
command.append('{0}'.format(output_destination))
else:
command.append('{0}'.format(dirpath))
continue
else:
command.append(param)
continue
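# Illustrative (hypothetical values): with user_script_param = 'FN,TL' and
# user_script_path = /usr/bin/notify.sh, the loop above assembles
# ['/usr/bin/notify.sh', 'video.mkv', '<torrent_label>'].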
cmd = ''
for item in command:
cmd = '{cmd} {item}'.format(cmd=cmd, item=item)
logger.info('Running script {cmd} on file {path}.'.format(cmd=cmd, path=file_path), 'USERSCRIPT')
try:
p = Popen(command)
res = p.wait()
if str(res) in core.USER_SCRIPT_SUCCESSCODES: # Linux returns 0 for successful.
logger.info('UserScript {0} was successful'.format(command[0]))
result = 0
else:
logger.error('UserScript {0} has failed with return code: {1}'.format(command[0], res), 'USERSCRIPT')
logger.info(
'If the UserScript completed successfully you should add {0} to the user_script_successCodes'.format(
res), 'USERSCRIPT')
result = 1
except Exception:
logger.error('UserScript {0} has failed'.format(command[0]), 'USERSCRIPT')
result = 1
final_result += result
num_files_new = 0
for _, _, filenames in os.walk(output_destination):
for file in filenames:
file_name, file_extension = os.path.splitext(file)
if file_extension in core.USER_SCRIPT_MEDIAEXTENSIONS or 'all' in core.USER_SCRIPT_MEDIAEXTENSIONS:  # extensions were lowercased and split above, so 'ALL' could never match
num_files_new += 1
if core.USER_SCRIPT_CLEAN == 1 and num_files_new == 0 and final_result == 0:
logger.info('All files have been processed. Cleaning outputDirectory {0}'.format(output_destination))
remove_dir(output_destination)
elif core.USER_SCRIPT_CLEAN == 1 and num_files_new != 0:
logger.info('{0} files were processed, but {1} still remain. outputDirectory will not be cleaned.'.format(
num_files, num_files_new))
return ProcessResult(
status_code=final_result,
message='User Script Completed',
)

core/utils/__init__.py (new file)
@@ -0,0 +1,54 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
from core.utils import shutil_custom
from core.utils.common import clean_dir, flatten, get_dirs, process_dir
from core.utils.download_info import get_download_info, update_download_info_status
from core.utils.encoding import char_replace, convert_to_ascii
from core.utils.files import (
backup_versioned_file,
extract_files,
is_archive_file,
is_media_file,
is_min_size,
list_media_files,
move_file,
)
from core.utils.identification import category_search, find_imdbid
from core.utils.links import copy_link, replace_links
from core.utils.naming import clean_file_name, is_sample, sanitize_name
from core.utils.network import find_download, server_responding, test_connection, wake_on_lan, wake_up
from core.utils.parsers import (
parse_args,
parse_deluge,
parse_other,
parse_qbittorrent,
parse_rtorrent,
parse_transmission,
parse_utorrent,
parse_vuze,
)
from core.utils.paths import (
clean_directory,
flatten_dir,
get_dir_size,
make_dir,
onerror,
rchmod,
remote_dir,
remove_dir,
remove_empty_folders,
remove_read_only,
)
from core.utils.processes import RunningProcess, restart
requests.packages.urllib3.disable_warnings()
shutil_custom.monkey_patch()

core/utils/common.py (new file)
@@ -0,0 +1,120 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os.path
from six import text_type
import core
from core import logger
from core.utils.files import list_media_files, move_file
from core.utils.paths import clean_directory, flatten_dir
def flatten(output_destination):
return flatten_dir(output_destination, list_media_files(output_destination))
def clean_dir(path, section, subsection):
cfg = dict(core.CFG[section][subsection])
min_size = int(cfg.get('minSize', 0))
delete_ignored = int(cfg.get('delete_ignored', 0))
try:
files = list_media_files(path, min_size=min_size, delete_ignored=delete_ignored)
except Exception:
files = []
return clean_directory(path, files)
def process_dir(path, link):
folders = []
logger.info('Searching {0} for mediafiles to post-process ...'.format(path))
dir_contents = os.listdir(text_type(path))
# search for single files and move them into their own folder for post-processing
# Generate list of sync files
sync_files = (
item for item in dir_contents
if os.path.splitext(item)[1] in ['.!sync', '.bts']
)
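# '.!sync' and '.bts' are partial-transfer markers (assumed to come from
# BitTorrent Sync / Resilio); if any are present the directory is still syncing,
# so no files are moved below.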
# Generate a list of file paths
filepaths = (
os.path.join(path, item) for item in dir_contents
if item not in ['Thumbs.db', 'thumbs.db']
)
# Generate a list of media files
mediafiles = (
item for item in filepaths
if os.path.isfile(item)
)
if any(sync_files):
logger.info('')
else:
for mediafile in mediafiles:
try:
move_file(mediafile, path, link)
except Exception as e:
logger.error('Failed to move {0} to its own directory: {1}'.format(os.path.split(mediafile)[1], e))
# removeEmptyFolders(path, removeRoot=False)
# Generate all path contents
path_contents = (
os.path.join(path, item)
for item in os.listdir(text_type(path))
)
# Generate all directories from path contents
directories = (
path for path in path_contents
if os.path.isdir(path)
)
for directory in directories:
dir_contents = os.listdir(directory)
sync_files = (
item for item in dir_contents
if os.path.splitext(item)[1] in ['.!sync', '.bts']
)
if not any(dir_contents) or any(sync_files):
continue
folders.append(directory)
return folders
def get_dirs(section, subsection, link='hard'):
to_return = []
watch_directory = core.CFG[section][subsection]['watch_dir']
directory = os.path.join(watch_directory, subsection)
if not os.path.exists(directory):
directory = watch_directory
try:
to_return.extend(process_dir(directory, link))
except Exception as e:
logger.error('Failed to add directories from {0} for post-processing: {1}'.format(watch_directory, e))
if core.USE_LINK == 'move':
try:
output_directory = os.path.join(core.OUTPUT_DIRECTORY, subsection)
if os.path.exists(output_directory):
to_return.extend(process_dir(output_directory, link))
except Exception as e:
logger.error('Failed to add directories from {0} for post-processing: {1}'.format(core.OUTPUT_DIRECTORY, e))
if not to_return:
logger.debug('No directories identified in {0}:{1} for post-processing'.format(section, subsection))
return list(set(to_return))

core/utils/download_info.py (new file)
@@ -0,0 +1,30 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import datetime
from six import text_type
from core import logger, main_db
database = main_db.DBConnection()
def update_download_info_status(input_name, status):
msg = 'Updating DB download status of {0} to {1}'
action = 'UPDATE downloads SET status=?, last_update=? WHERE input_name=?'
args = [status, datetime.date.today().toordinal(), text_type(input_name)]
logger.db(msg.format(input_name, status))
database.action(action, args)
def get_download_info(input_name, status):
msg = 'Getting download info for {0} from the DB'
action = 'SELECT * FROM downloads WHERE input_name=? AND status=?'
args = [text_type(input_name), status]
logger.db(msg.format(input_name))
return database.select(action, args)
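# Illustrative (status values follow the caller's convention):
# get_download_info('Some.Release', 0) returns the matching download rows, and
# update_download_info_status('Some.Release', 1) stamps them with today's ordinal date.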

core/utils/encoding.py (new file)
@@ -0,0 +1,129 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
from six import text_type
from six import PY2
import core
from core import logger
if not PY2:
from builtins import bytes
def char_replace(name_in):
# Special character hex range:
# CP850: 0x80-0xA5 (fortunately not used in ISO-8859-15)
# UTF-8: 1st hex code 0xC2-0xC3 followed by a 2nd hex code 0xA1-0xFF
# ISO-8859-15: 0xA6-0xFF
# The function will detect if Name contains a special character
# If there is special character, detects if it is a UTF-8, CP850 or ISO-8859-15 encoding
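# Illustrative (assumed input): char_replace(b'Caf\x82') detects CP850, since 0x82
# falls in the 0x80-0xA5 range, and returns (True, 'Café').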
encoded = False
encoding = None
if isinstance(name_in, text_type):
return encoded, name_in
if PY2:
name = name_in
for Idx in range(len(name)):
# print('Trying to intuit the encoding')
# /!\ detection is done 2char by 2char for UTF-8 special character
if (len(name) != 1) & (Idx < (len(name) - 1)):
# Detect UTF-8
if ((name[Idx] == '\xC2') | (name[Idx] == '\xC3')) & (
(name[Idx + 1] >= '\xA0') & (name[Idx + 1] <= '\xFF')):
encoding = 'utf-8'
break
# Detect CP850
elif (name[Idx] >= '\x80') & (name[Idx] <= '\xA5'):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= '\xA6') & (name[Idx] <= '\xFF'):
encoding = 'iso-8859-15'
break
else:
# Detect CP850
if (name[Idx] >= '\x80') & (name[Idx] <= '\xA5'):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= '\xA6') & (name[Idx] <= '\xFF'):
encoding = 'iso-8859-15'
break
else:
name = bytes(name_in)
for Idx in range(len(name)):
# print('Trying to intuit the encoding')
# /!\ detection is done 2char by 2char for UTF-8 special character
if (len(name) != 1) & (Idx < (len(name) - 1)):
# Detect UTF-8
if ((name[Idx] == 0xC2) | (name[Idx] == 0xC3)) & (
(name[Idx + 1] >= 0xA0) & (name[Idx + 1] <= 0xFF)):
encoding = 'utf-8'
break
# Detect CP850
elif (name[Idx] >= 0x80) & (name[Idx] <= 0xA5):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= 0xA6) & (name[Idx] <= 0xFF):
encoding = 'iso-8859-15'
break
else:
# Detect CP850
if (name[Idx] >= 0x80) & (name[Idx] <= 0xA5):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= 0xA6) & (name[Idx] <= 0xFF):
encoding = 'iso-8859-15'
break
if encoding:
encoded = True
name = name.decode(encoding)
elif not PY2:
name = name.decode()
return encoded, name
def convert_to_ascii(input_name, dir_name):
ascii_convert = int(core.CFG['ASCII']['convert'])
if ascii_convert == 0 or os.name == 'nt':  # just return if we don't want to convert, or on Windows where '\' would be replaced.
return input_name, dir_name
encoded, input_name = char_replace(input_name)
directory, base = os.path.split(dir_name)
if not base: # ended with '/'
directory, base = os.path.split(directory)
encoded, base2 = char_replace(base)
if encoded:
dir_name = os.path.join(directory, base2)
logger.info('Renaming directory to: {0}.'.format(base2), 'ENCODER')
os.rename(os.path.join(directory, base), dir_name)
if 'NZBOP_SCRIPTDIR' in os.environ:
print('[NZB] DIRECTORY={0}'.format(dir_name))
for dirname, dirnames, _ in os.walk(dir_name, topdown=False):
for subdirname in dirnames:
encoded, subdirname2 = char_replace(subdirname)
if encoded:
logger.info('Renaming directory to: {0}.'.format(subdirname2), 'ENCODER')
os.rename(os.path.join(dirname, subdirname), os.path.join(dirname, subdirname2))
for dirname, _, filenames in os.walk(dir_name):
for filename in filenames:
encoded, filename2 = char_replace(filename)
if encoded:
logger.info('Renaming file to: {0}.'.format(filename2), 'ENCODER')
os.rename(os.path.join(dirname, filename), os.path.join(dirname, filename2))
return input_name, dir_name

core/utils/files.py (new file)
@@ -0,0 +1,238 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import re
import shutil
import stat
import time
import mediafile as mediafiletool
import guessit
from six import text_type
import core
from core import extractor, logger
from core.utils.links import copy_link
from core.utils.naming import is_sample, sanitize_name
from core.utils.paths import get_dir_size, make_dir
def move_file(mediafile, path, link):
logger.debug('Found file {0} in root directory {1}.'.format(os.path.split(mediafile)[1], path))
new_path = None
file_ext = os.path.splitext(mediafile)[1]
try:
if file_ext in core.AUDIO_CONTAINER:
f = mediafiletool.MediaFile(mediafile)
# get artist and album info
artist = f.artist
album = f.album
# create new path
new_path = os.path.join(path, '{0} - {1}'.format(sanitize_name(artist), sanitize_name(album)))
elif file_ext in core.MEDIA_CONTAINER:
f = guessit.guessit(mediafile)
# get title
title = f.get('series') or f.get('title')
if not title:
title = os.path.splitext(os.path.basename(mediafile))[0]
new_path = os.path.join(path, sanitize_name(title))
except Exception as e:
logger.error('Exception parsing name for media file: {0}: {1}'.format(os.path.split(mediafile)[1], e))
if not new_path:
title = os.path.splitext(os.path.basename(mediafile))[0]
new_path = os.path.join(path, sanitize_name(title))
# Removed as encoding of directory is no longer required
#try:
# new_path = new_path.encode(core.SYS_ENCODING)
#except Exception:
# pass
# Just a fail-safe in case we already have a file with this clean name (was actually a bug in earlier code, but let's be safe).
if os.path.isfile(new_path):
new_path2 = os.path.join(os.path.join(os.path.split(new_path)[0], 'new'), os.path.split(new_path)[1])
new_path = new_path2
# create new path if it does not exist
if not os.path.exists(new_path):
make_dir(new_path)
newfile = os.path.join(new_path, sanitize_name(os.path.split(mediafile)[1]))
try:
newfile = newfile.encode(core.SYS_ENCODING)
except Exception:
pass
# link file to its new path
copy_link(mediafile, newfile, link)
def is_min_size(input_name, min_size):
file_name, file_ext = os.path.splitext(os.path.basename(input_name))
# audio files we need to check directory size not file size
input_size = os.path.getsize(input_name)
if file_ext in core.AUDIO_CONTAINER:
try:
input_size = get_dir_size(os.path.dirname(input_name))
except Exception:
logger.error('Failed to get file size for {0}'.format(input_name), 'MINSIZE')
return True
# Ignore files under a certain size
if input_size > min_size * 1048576:
return True
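# Illustrative: with min_size=200, only inputs larger than 200 MiB (200 * 1048576 bytes)
# pass; audio files are measured by their parent directory's size instead.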
def is_archive_file(filename):
"""Check if the filename is allowed for the Archive."""
for regext in core.COMPRESSED_CONTAINER:
if regext.search(filename):
return regext.split(filename)[0]
return False
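# Illustrative (assuming core.COMPRESSED_CONTAINER holds a compiled r'\.rar$'-style
# pattern): is_archive_file('show.s01.rar') would return 'show.s01'.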
def is_media_file(mediafile, media=True, audio=True, meta=True, archives=True, other=False, otherext=None):
if otherext is None:
otherext = []
file_name, file_ext = os.path.splitext(mediafile)
try:
# ignore MAC OS's 'resource fork' files
if file_name.startswith('._'):
return False
except Exception:
pass
return any([
(media and file_ext.lower() in core.MEDIA_CONTAINER),
(audio and file_ext.lower() in core.AUDIO_CONTAINER),
(meta and file_ext.lower() in core.META_CONTAINER),
(archives and is_archive_file(mediafile)),
(other and (file_ext.lower() in otherext or 'all' in otherext)),
])
def list_media_files(path, min_size=0, delete_ignored=0, media=True, audio=True, meta=True, archives=True, other=False, otherext=None):
if otherext is None:
otherext = []
files = []
if not os.path.isdir(path):
if os.path.isfile(path): # Single file downloads.
cur_file = os.path.split(path)[1]
if is_media_file(cur_file, media, audio, meta, archives, other, otherext):
# Optionally ignore sample files
if is_sample(path) or not is_min_size(path, min_size):
if delete_ignored == 1:
try:
os.unlink(path)
logger.debug('Ignored file {0} has been removed ...'.format(cur_file))
except Exception:
pass
else:
files.append(path)
return files
for cur_file in os.listdir(text_type(path)):
full_cur_file = os.path.join(path, cur_file)
# if it's a folder do it recursively
if os.path.isdir(full_cur_file) and not cur_file.startswith('.'):
files += list_media_files(full_cur_file, min_size, delete_ignored, media, audio, meta, archives, other, otherext)
elif is_media_file(cur_file, media, audio, meta, archives, other, otherext):
# Optionally ignore sample files
if is_sample(full_cur_file) or not is_min_size(full_cur_file, min_size):
if delete_ignored == 1:
try:
os.unlink(full_cur_file)
logger.debug('Ignored file {0} has been removed ...'.format(cur_file))
except Exception:
pass
continue
files.append(full_cur_file)
return sorted(files, key=len)
def extract_files(src, dst=None, keep_archive=None):
extracted_folder = []
extracted_archive = []
for inputFile in list_media_files(src, media=False, audio=False, meta=False, archives=True):
dir_path = os.path.dirname(inputFile)
full_file_name = os.path.basename(inputFile)
archive_name = os.path.splitext(full_file_name)[0]
archive_name = re.sub(r'part[0-9]+', '', archive_name)
if dir_path in extracted_folder and archive_name in extracted_archive:
continue # no need to extract this, but keep going to look for other archives and sub directories.
try:
if extractor.extract(inputFile, dst or dir_path):
extracted_folder.append(dir_path)
extracted_archive.append(archive_name)
except Exception:
logger.error('Extraction failed for: {0}'.format(full_file_name))
for folder in extracted_folder:
for inputFile in list_media_files(folder, media=False, audio=False, meta=False, archives=True):
full_file_name = os.path.basename(inputFile)
archive_name = os.path.splitext(full_file_name)[0]
archive_name = re.sub(r'part[0-9]+', '', archive_name)
if archive_name not in extracted_archive or keep_archive:
continue # don't remove if we haven't extracted this archive, or if we want to preserve them.
logger.info('Removing extracted archive {0} from folder {1} ...'.format(full_file_name, folder))
try:
if not os.access(inputFile, os.W_OK):
os.chmod(inputFile, stat.S_IWUSR)
os.remove(inputFile)
time.sleep(1)
except Exception as e:
logger.error('Unable to remove file {0} due to: {1}'.format(inputFile, e))
def backup_versioned_file(old_file, version):
num_tries = 0
new_file = '{old}.v{version}'.format(old=old_file, version=version)
while not os.path.isfile(new_file):
if not os.path.isfile(old_file):
logger.log(u'Not creating backup, {file} doesn\'t exist'.format(file=old_file), logger.DEBUG)
break
try:
logger.log(u'Trying to back up {old} to {new}'.format(old=old_file, new=new_file), logger.DEBUG)
shutil.copy(old_file, new_file)
logger.log(u'Backup done', logger.DEBUG)
break
except Exception as error:
logger.log(u'Error while trying to back up {old} to {new}: {msg}'.format(old=old_file, new=new_file, msg=error), logger.WARNING)
num_tries += 1
time.sleep(1)
logger.log(u'Trying again.', logger.DEBUG)
if num_tries >= 10:
logger.log(u'Unable to back up {old} to {new} please do it manually.'.format(old=old_file, new=new_file), logger.ERROR)
return False
return True

core/utils/identification.py (new file)
@@ -0,0 +1,189 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import re
import guessit
import requests
from six import text_type
from core import logger
from core.utils.naming import sanitize_name
def find_imdbid(dir_name, input_name, omdb_api_key):
imdbid = None
logger.info('Attempting imdbID lookup for {0}'.format(input_name))
# find imdbid in dirName
logger.info('Searching folder and file names for imdbID ...')
m = re.search(r'\b(tt\d{7,8})\b', dir_name + input_name)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}]'.format(imdbid))
return imdbid, dir_name
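# Illustrative (hypothetical name): 'Movie.Title.2019.tt1234567.1080p' matches the
# pattern above and short-circuits the lookup with imdbid 'tt1234567'.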
if os.path.isdir(dir_name):
for file in os.listdir(text_type(dir_name)):
m = re.search(r'\b(tt\d{7,8})\b', file)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}] via file name'.format(imdbid))
return imdbid, dir_name
if 'NZBPR__DNZB_MOREINFO' in os.environ:
dnzb_more_info = os.environ.get('NZBPR__DNZB_MOREINFO', '')
if dnzb_more_info != '':
regex = re.compile(r'^http://www.imdb.com/title/(tt[0-9]+)/$', re.IGNORECASE)
m = regex.match(dnzb_more_info)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}] from DNZB-MoreInfo'.format(imdbid))
return imdbid, dir_name
logger.info('Searching IMDB for imdbID ...')
try:
guess = guessit.guessit(input_name)
except Exception:
guess = None
if guess:
# Movie Title
title = None
if 'title' in guess:
title = guess['title']
# Movie Year
year = None
if 'year' in guess:
year = guess['year']
url = 'http://www.omdbapi.com'
if not omdb_api_key:
logger.info('Unable to determine imdbID: No api key provided for omdbapi.com.')
return imdbid, dir_name
logger.debug('Opening URL: {0}'.format(url))
try:
r = requests.get(url, params={'apikey': omdb_api_key, 'y': year, 't': title},
verify=False, timeout=(60, 300))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url))
return imdbid, dir_name
try:
results = r.json()
except Exception:
logger.error('No json data returned from omdbapi.com')
try:
imdbid = results['imdbID']
except Exception:
logger.error('No imdbID returned from omdbapi.com')
if imdbid:
logger.info('Found imdbID [{0}]'.format(imdbid))
new_dir_name = '{}.cp({})'.format(dir_name, imdbid)
os.rename(dir_name, new_dir_name)
return imdbid, new_dir_name
logger.warning('Unable to find an imdbID for {0}'.format(input_name))
return imdbid, dir_name
def category_search(input_directory, input_name, input_category, root, categories):
tordir = False
if input_directory is None:  # Nothing to process here.
return input_directory, input_name, input_category, root
pathlist = os.path.normpath(input_directory).split(os.sep)
if input_category and input_category in pathlist:
logger.debug('SEARCH: Found the Category: {0} in directory structure'.format(input_category))
elif input_category:
logger.debug('SEARCH: Could not find the category: {0} in the directory structure'.format(input_category))
else:
try:
input_category = list(set(pathlist) & set(categories))[-1] # assume last match is most relevant category.
logger.debug('SEARCH: Found Category: {0} in directory structure'.format(input_category))
except IndexError:
input_category = ''
logger.debug('SEARCH: Could not find a category in the directory structure')
if not os.path.isdir(input_directory) and os.path.isfile(input_directory): # If the input directory is a file
if not input_name:
input_name = os.path.split(os.path.normpath(input_directory))[1]
return input_directory, input_name, input_category, root
if input_category and os.path.isdir(os.path.join(input_directory, input_category)):
logger.info(
'SEARCH: Found category directory {0} in input directory {1}'.format(input_category, input_directory))
input_directory = os.path.join(input_directory, input_category)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
if input_name and os.path.isdir(os.path.join(input_directory, input_name)):
logger.info('SEARCH: Found torrent directory {0} in input directory {1}'.format(input_name, input_directory))
input_directory = os.path.join(input_directory, input_name)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
tordir = True
elif input_name and os.path.isdir(os.path.join(input_directory, sanitize_name(input_name))):
logger.info('SEARCH: Found torrent directory {0} in input directory {1}'.format(
sanitize_name(input_name), input_directory))
input_directory = os.path.join(input_directory, sanitize_name(input_name))
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
tordir = True
elif input_name and os.path.isfile(os.path.join(input_directory, input_name)):
logger.info('SEARCH: Found torrent file {0} in input directory {1}'.format(input_name, input_directory))
input_directory = os.path.join(input_directory, input_name)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
tordir = True
elif input_name and os.path.isfile(os.path.join(input_directory, sanitize_name(input_name))):
logger.info('SEARCH: Found torrent file {0} in input directory {1}'.format(
sanitize_name(input_name), input_directory))
input_directory = os.path.join(input_directory, sanitize_name(input_name))
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
tordir = True
elif input_name and os.path.isdir(input_directory):
for file in os.listdir(text_type(input_directory)):
if os.path.splitext(file)[0] in [input_name, sanitize_name(input_name)]:
logger.info('SEARCH: Found torrent file {0} in input directory {1}'.format(file, input_directory))
input_directory = os.path.join(input_directory, file)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
input_name = file
tordir = True
break
imdbid = [item for item in pathlist if '.cp(tt' in item] # This looks for the .cp(tt imdb id in the path.
if imdbid and '.cp(tt' not in input_name:
input_name = imdbid[0] # This ensures the imdb id is preserved and passed to CP
tordir = True
if input_category and not tordir:
try:
index = pathlist.index(input_category)
if index + 1 < len(pathlist):
tordir = True
logger.info('SEARCH: Found a unique directory {0} in the category directory'.format(pathlist[index + 1]))
if not input_name:
input_name = pathlist[index + 1]
except ValueError:
pass
if input_name and not tordir:
if input_name in pathlist or sanitize_name(input_name) in pathlist:
logger.info('SEARCH: Found torrent directory {0} in the directory structure'.format(input_name))
tordir = True
else:
root = 1
if not tordir:
root = 2
if root > 0:
logger.info('SEARCH: Could not find a unique directory for this download. Assume a common directory.')
logger.info('SEARCH: We will try to determine which files to process individually')
return input_directory, input_name, input_category, root

core/utils/links.py (new file)
@@ -0,0 +1,94 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import shutil
import linktastic
from core import logger
from core.utils.paths import make_dir
try:
from jaraco.windows.filesystem import islink, readlink
except ImportError:
if os.name == 'nt':
raise
else:
from os.path import islink
from os import readlink
def copy_link(src, target_link, use_link):
logger.info('MEDIAFILE: [{0}]'.format(os.path.basename(target_link)), 'COPYLINK')
logger.info('SOURCE FOLDER: [{0}]'.format(os.path.dirname(src)), 'COPYLINK')
logger.info('TARGET FOLDER: [{0}]'.format(os.path.dirname(target_link)), 'COPYLINK')
if src != target_link and os.path.exists(target_link):
logger.info('MEDIAFILE already exists in the TARGET folder, skipping ...', 'COPYLINK')
return True
elif src == target_link and os.path.isfile(target_link) and os.path.isfile(src):
logger.info('SOURCE AND TARGET files are the same, skipping ...', 'COPYLINK')
return True
elif src == os.path.dirname(target_link):
logger.info('SOURCE AND TARGET folders are the same, skipping ...', 'COPYLINK')
return True
make_dir(os.path.dirname(target_link))
try:
if use_link == 'dir':
logger.info('Directory linking SOURCE FOLDER -> TARGET FOLDER', 'COPYLINK')
linktastic.dirlink(src, target_link)
return True
if use_link == 'junction':
logger.info('Directory junction linking SOURCE FOLDER -> TARGET FOLDER', 'COPYLINK')
linktastic.dirlink(src, target_link)
return True
elif use_link == 'hard':
logger.info('Hard linking SOURCE MEDIAFILE -> TARGET FOLDER', 'COPYLINK')
linktastic.link(src, target_link)
return True
elif use_link == 'sym':
logger.info('Sym linking SOURCE MEDIAFILE -> TARGET FOLDER', 'COPYLINK')
linktastic.symlink(src, target_link)
return True
elif use_link == 'move-sym':
logger.info('Sym linking SOURCE MEDIAFILE -> TARGET FOLDER', 'COPYLINK')
shutil.move(src, target_link)
linktastic.symlink(target_link, src)
return True
elif use_link == 'move':
logger.info('Moving SOURCE MEDIAFILE -> TARGET FOLDER', 'COPYLINK')
shutil.move(src, target_link)
return True
except Exception as e:
logger.warning('Error: {0}, copying instead ... '.format(e), 'COPYLINK')
logger.info('Copying SOURCE MEDIAFILE -> TARGET FOLDER', 'COPYLINK')
shutil.copy(src, target_link)
return True
def replace_links(link, max_depth=10):
link_depth = 0
target = link
for attempt in range(0, max_depth):
if not islink(target):
break
target = readlink(target)
link_depth = attempt + 1  # count this hop; bare 'attempt' under-counted single-level links
if not link_depth:
logger.debug('{0} is not a link'.format(link))
elif link_depth > max_depth or (link_depth == max_depth and islink(target)):
logger.warning('Exceeded maximum depth {0} while following link {1}'.format(max_depth, link))
else:
logger.info('Changing sym-link: {0} to point directly to file: {1}'.format(link, target), 'COPYLINK')
os.unlink(link)
linktastic.symlink(target, link)
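# Illustrative (hypothetical paths): if /links/a -> /links/b -> /data/file.mkv,
# replace_links('/links/a') re-points /links/a directly at /data/file.mkv; chains
# deeper than max_depth are only warned about.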

Some files were not shown because too many files have changed in this diff.