Compare commits


No commits in common. "master" and "12.0.8" have entirely different histories.

1841 changed files with 68285 additions and 170115 deletions

View file

@@ -1,5 +1,5 @@
[bumpversion]
current_version = 12.1.13
current_version = 12.0.8
commit = True
tag = False
@@ -7,6 +7,11 @@ tag = False
search = version='{current_version}'
replace = version='{new_version}'
[bumpversion:file:README.md]
search = v{current_version}
replace = v{new_version}
[bumpversion:file:core/__init__.py]
search = __version__ = '{current_version}'
replace = __version__ = '{new_version}'
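The sections above drive a plain search-and-replace: for each listed file, bumpversion fills the {current_version}/{new_version} placeholders into the search/replace templates and rewrites the file. A minimal sketch of that substitution, where bump_file() is a hypothetical helper and the version numbers are only examples, not bumpversion's actual internals:

def bump_file(path, search, replace, current_version, new_version):
    # Fill the placeholders, then swap the literal strings in the file.
    old = search.format(current_version=current_version)
    new = replace.format(new_version=new_version)
    with open(path) as handle:
        text = handle.read()
    if old not in text:
        raise ValueError('{0!r} not found in {1}'.format(old, path))
    with open(path, 'w') as handle:
        handle.write(text.replace(old, new))

# e.g. mirroring the [bumpversion:file:core/__init__.py] section above:
# bump_file('core/__init__.py',
#           "__version__ = '{current_version}'",
#           "__version__ = '{new_version}'",
#           '12.0.8', '12.0.9')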

View file

@@ -1,76 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at fock_wulf@hotmail.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq

View file

@@ -1,14 +0,0 @@
# Contributing
When contributing to this repository, please first check the issues list, current pull requests, and FAQ pages.
While it is preferred that all interactions be made through GitHub, the author can be contacted directly at fock_wulf@hotmail.com.
Please note we have a code of conduct; please follow it in all your interactions with the project.
## Pull Request Process
1. Please base all pull requests on the current nightly branch.
2. Include a description to explain what is achieved with a pull request.
3. Link any relevant issues that are closed or impacted by the pull request.
4. Please update the FAQ to reflect any new parameters, changed behaviour, or suggested configurations relevant to the changes.

View file

@@ -1,23 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**Technical Specs**
1. Running on (Windows, Linux, NAS Model etc) '....'
2. Python version '....'
3. Download Client (NZBGet, SABnzbd, Transmission) '....'
4. Intended Media Management (SickChill, CouchPotato, Radarr, Sonarr) '....'
**Expected behavior**
A clear and concise description of what you expected to happen.
**Log**
Please provide an extract, or full debug log that indicates the issue.

View file

@@ -1,28 +0,0 @@
# Description
Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.
Fixes # (issue)
## Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce. Please also list any relevant details for your test configuration
**Test Configuration**:
# Checklist:
- [ ] I have based this change on the nightly branch
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes

.gitignore (vendored): 5 changed lines
View file

@@ -1,7 +1,8 @@
*.cfg
!.bumpversion.cfg
*.cfg.old
*.py[cod]
*.pyc
*.pyo
*.log
*.pid
*.db
@@ -9,7 +10,5 @@
/userscripts/
/logs/
/.idea/
/venv/
*.dist-info
*.egg-info
/.vscode

View file

@@ -1,8 +1,8 @@
nzbToMedia
==========
nzbToMedia v12.0.8
==================
Provides an [efficient](https://github.com/clinton-hall/nzbToMedia/wiki/Efficient-on-demand-post-processing) way to handle postprocessing for [CouchPotatoServer](https://couchpota.to/ "CouchPotatoServer") and [SickBeard](http://sickbeard.com/ "SickBeard") (and its [forks](https://github.com/clinton-hall/nzbToMedia/wiki/Failed-Download-Handling-%28FDH%29#sick-beard-and-its-forks))
when using one of the popular NZB download clients like [SABnzbd](http://sabnzbd.org/ "SABnzbd") and [NZBGet](https://nzbget.com/ "NZBGet") on low performance systems like a NAS.
when using one of the popular NZB download clients like [SABnzbd](http://sabnzbd.org/ "SABnzbd") and [NZBGet](http://nzbget.sourceforge.net/ "NZBGet") on low performance systems like a NAS.
This script is based on sabToSickBeard (written by Nic Wolfe and supplied with SickBeard), with the support for NZBGet being added by [thorli](https://github.com/thorli "thorli") and further contributions by [schumi2004](https://github.com/schumi2004 "schumi2004") and [hugbug](https://sourceforge.net/apps/phpbb/nzbget/memberlist.php?mode=viewprofile&u=67 "hugbug").
Torrent support added by [jkaberg](https://github.com/jkaberg "jkaberg") and [berkona](https://github.com/berkona "berkona")
Corrupt video checking, auto SickBeard fork determination and a whole lot of code improvement was done by [echel0n](https://github.com/echel0n "echel0n")
@@ -32,7 +32,7 @@ Installation instructions for this are available in the [wiki](https://github.co
Contribution
------------
We who have developed nzbToMedia believe in the openness of open-source, and as such we hope that any modifications will lead back to the [original repo](https://github.com/clinton-hall/nzbToMedia "orignal repo") via pull requests.
We who have developed nzbToMedia believe in the openness of open-source, and as such we hope that any modifications will lead back to the [orignal repo](https://github.com/clinton-hall/nzbToMedia "orignal repo") via pull requests.
Founder: [clinton-hall](https://github.com/clinton-hall "clinton-hall")

View file

@@ -1,34 +1,23 @@
#!/usr/bin/env python
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import eol
eol.check()
import cleanup
cleanup.clean(cleanup.FOLDER_STRUCTURE)
import datetime
import os
import sys
import eol
import cleanup
eol.check()
cleanup.clean(cleanup.FOLDER_STRUCTURE)
import core
from core import logger, main_db
from core.auto_process import comics, games, movies, music, tv, books
from core.auto_process import comics, games, movies, music, tv
from core.auto_process.common import ProcessResult
from core.plugins.plex import plex_update
from core.user_scripts import external_script
from core.utils import char_replace, convert_to_ascii, replace_links
try:
text_type = unicode
except NameError:
text_type = str
from core.utils import char_replace, convert_to_ascii, plex_update, replace_links
from six import text_type
def process_torrent(input_directory, input_name, input_category, input_hash, input_id, client_agent):
@@ -70,25 +59,30 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
input_category = 'UNCAT'
usercat = input_category
try:
input_name = input_name.encode(core.SYS_ENCODING)
except UnicodeError:
pass
try:
input_directory = input_directory.encode(core.SYS_ENCODING)
except UnicodeError:
pass
logger.debug('Determined Directory: {0} | Name: {1} | Category: {2}'.format
(input_directory, input_name, input_category))
# auto-detect section
section = core.CFG.findsection(input_category).isenabled()
if section is None: #Check for user_scripts for 'ALL' and 'UNCAT'
if usercat in core.CATEGORIES:
section = core.CFG.findsection('ALL').isenabled()
usercat = 'ALL'
if section is None:
section = core.CFG.findsection('ALL').isenabled()
if section is None:
logger.error('Category:[{0}] is not defined or is not enabled. '
'Please rename it or ensure it is enabled for the appropriate section '
'in your autoProcessMedia.cfg and try again.'.format
(input_category))
return [-1, '']
else:
section = core.CFG.findsection('UNCAT').isenabled()
usercat = 'UNCAT'
if section is None: # We haven't found any categories to process.
logger.error('Category:[{0}] is not defined or is not enabled. '
'Please rename it or ensure it is enabled for the appropriate section '
'in your autoProcessMedia.cfg and try again.'.format
(input_category))
return [-1, '']
usercat = 'ALL'
if len(section) > 1:
logger.error('Category:[{0}] is not unique, {1} are using it. '
@@ -111,7 +105,7 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
torrent_no_link = int(section.get('Torrent_NoLink', 0))
keep_archive = int(section.get('keep_archive', 0))
extract = int(section.get('extract', 0))
extensions = section.get('user_script_mediaExtensions', '')
extensions = section.get('user_script_mediaExtensions', '').lower().split(',')
unique_path = int(section.get('unique_path', 1))
if client_agent != 'manual':
@@ -130,6 +124,10 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
else:
output_destination = os.path.normpath(
core.os.path.join(core.OUTPUT_DIRECTORY, input_category))
try:
output_destination = output_destination.encode(core.SYS_ENCODING)
except UnicodeError:
pass
if output_destination in input_directory:
output_destination = input_directory
@@ -171,6 +169,10 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
core.os.path.join(output_destination, os.path.basename(file_path)), full_file_name)
logger.debug('Setting outputDestination to {0} to preserve folder structure'.format
(os.path.dirname(target_file)))
try:
target_file = target_file.encode(core.SYS_ENCODING)
except UnicodeError:
pass
if root == 1:
if not found_file:
logger.debug('Looking for {0} in: {1}'.format(input_name, inputFile))
@@ -213,7 +215,7 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
core.flatten(output_destination)
# Now check if video files exist in destination:
if section_name in ['SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr', 'CouchPotato', 'Radarr', 'Watcher3']:
if section_name in ['SickBeard', 'NzbDrone', 'Sonarr', 'CouchPotato', 'Radarr']:
num_videos = len(
core.list_media_files(output_destination, media=True, audio=False, meta=False, archives=False))
if num_videos > 0:
@@ -227,7 +229,7 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
# Only these sections can handle failed downloads
# so make sure everything else gets through without the check for failed
if section_name not in ['CouchPotato', 'Radarr', 'SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr', 'Watcher3']:
if section_name not in ['CouchPotato', 'Radarr', 'SickBeard', 'NzbDrone', 'Sonarr']:
status = 0
logger.info('Calling {0}:{1} to post-process:{2}'.format(section_name, usercat, input_name))
@@ -241,9 +243,9 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
)
if section_name == 'UserScript':
result = external_script(output_destination, input_name, input_category, section)
elif section_name in ['CouchPotato', 'Radarr', 'Watcher3']:
elif section_name in ['CouchPotato', 'Radarr']:
result = movies.process(section_name, output_destination, input_name, status, client_agent, input_hash, input_category)
elif section_name in ['SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr']:
elif section_name in ['SickBeard', 'NzbDrone', 'Sonarr']:
if input_hash:
input_hash = input_hash.upper()
result = tv.process(section_name, output_destination, input_name, status, client_agent, input_hash, input_category)
@@ -253,8 +255,6 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
result = comics.process(section_name, output_destination, input_name, status, client_agent, input_category)
elif section_name == 'Gamez':
result = games.process(section_name, output_destination, input_name, status, client_agent, input_category)
elif section_name == 'LazyLibrarian':
result = books.process(section_name, output_destination, input_name, status, client_agent, input_category)
plex_update(input_category)
@@ -275,13 +275,13 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
# remove torrent
if core.USE_LINK == 'move-sym' and not core.DELETE_ORIGINAL == 1:
logger.debug('Checking for sym-links to re-direct in: {0}'.format(input_directory))
for dirpath, _, files in os.walk(input_directory):
for dirpath, dirs, files in os.walk(input_directory):
for file in files:
logger.debug('Checking symlink: {0}'.format(os.path.join(dirpath, file)))
replace_links(os.path.join(dirpath, file))
core.remove_torrent(client_agent, input_hash, input_id, input_name)
if section_name != 'UserScript':
if not section_name == 'UserScript':
# for user script, we assume this is cleaned by the script or option USER_SCRIPT_CLEAN
# cleanup our processing folders of any misc unwanted files and empty directories
core.clean_dir(output_destination, section_name, input_category)
@@ -317,8 +317,6 @@ def main(args):
if input_directory and input_name and input_hash and input_id:
result = process_torrent(input_directory, input_name, input_category, input_hash, input_id, client_agent)
elif core.TORRENT_NO_MANUAL:
logger.warning('Invalid number of arguments received from client, and no_manual set')
else:
# Perform Manual Post-Processing
logger.warning('Invalid number of arguments received from client, Switching to manual run mode ...')
@@ -335,9 +333,9 @@
(os.path.basename(dir_name)))
core.DOWNLOAD_INFO = core.get_download_info(os.path.basename(dir_name), 0)
if core.DOWNLOAD_INFO:
client_agent = text_type(core.DOWNLOAD_INFO[0]['client_agent']) or 'manual'
input_hash = text_type(core.DOWNLOAD_INFO[0]['input_hash']) or ''
input_id = text_type(core.DOWNLOAD_INFO[0]['input_id']) or ''
client_agent = text_type(core.DOWNLOAD_INFO[0].get('client_agent', 'manual'))
input_hash = text_type(core.DOWNLOAD_INFO[0].get('input_hash', ''))
input_id = text_type(core.DOWNLOAD_INFO[0].get('input_id', ''))
logger.info('Found download info for {0}, '
'setting variables now ...'.format(os.path.basename(dir_name)))
else:
@@ -351,7 +349,15 @@
if client_agent.lower() not in core.TORRENT_CLIENTS:
continue
try:
dir_name = dir_name.encode(core.SYS_ENCODING)
except UnicodeError:
pass
input_name = os.path.basename(dir_name)
try:
input_name = input_name.encode(core.SYS_ENCODING)
except UnicodeError:
pass
results = process_torrent(dir_name, input_name, subsection, input_hash or None, input_id or None,
client_agent)

View file

@@ -1 +0,0 @@
theme: jekyll-theme-cayman

View file

@@ -12,7 +12,7 @@
git_user =
# GitHUB branch for repo
git_branch =
# Enable/Disable forceful cleaning of leftover files following postprocess
# Enable/Disable forceful cleaning of leftover files following postprocess
force_clean = 0
# Enable/Disable logging debug messages to nzbtomedia.log
log_debug = 0
@@ -28,8 +28,6 @@
ffmpeg_path =
# Enable/Disable media file checking using ffprobe.
check_media = 1
# Required media audio language for media to be deemed valid. Leave blank to disregard media audio language check.
require_lan =
# Enable/Disable a safety check to ensure we don't process all downloads in the default_downloadDirectories by mistake.
safe_mode = 1
# Turn this on to disable additional extraction attempts for failed downloads. Default = 0 will attempt to extract and verify if media is present.
@@ -38,9 +36,7 @@
[Posix]
### Process priority setting for External commands (Extractor and Transcoder) on Posix (Unix/Linux/OSX) systems.
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness = 4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness = nice,4' this will be passed as 'nice 4' (Safer).
niceness = nice,-n0
niceness = 0
# Set the ionice scheduling class. 0 for none, 1 for real time, 2 for best-effort, 3 for idle.
ionice_class = 0
# Set the ionice scheduling class data. This defines the class data, if the class accepts an argument. For real time and best-effort, 0-7 is valid data.
@@ -70,8 +66,6 @@
method = renamer
delete_failed = 0
wait_for = 2
# Set this to suppress error if no status change after rename called
no_status_check = 0
extract = 1
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
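The niceness comments in the [Posix] hunk above describe two accepted formats: a bare integer that is passed as 'nice -n4', and a comma-separated list like 'nice,4' that is passed through verbatim as 'nice 4'. A minimal sketch of that translation (build_nice_prefix() is an illustrative name, not a function from this codebase):

def build_nice_prefix(niceness):
    value = str(niceness)
    if ',' in value:
        # Comma-separated form: 'nice,4' -> ['nice', '4'], run as 'nice 4' (safer).
        return value.split(',')
    # Integer form: 4 -> ['nice', '-n4'], run as 'nice -n4' (default).
    return ['nice', '-n{0}'.format(value)]

assert build_nice_prefix(4) == ['nice', '-n4']
assert build_nice_prefix('nice,-n0') == ['nice', '-n0']  # the default shown above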
@@ -115,36 +109,6 @@
##### Set to define import behavior Move or Copy
importMode = Copy
[Watcher3]
#### autoProcessing for Movies
#### movie - category that gets called for post-processing with CPS
[[movie]]
enabled = 0
apikey =
host = localhost
port = 9090
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
ssl = 0
web_root =
# api key for www.omdbapi.com (used as alternative to imdb)
omdbapikey =
# Enable/Disable linking for Torrents
Torrent_NoLink = 0
keep_archive = 1
delete_failed = 0
wait_for = 0
extract = 1
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
# Enable/Disable deleting ignored files (samples and invalid media files)
delete_ignored = 0
##### Enable if Watcher3 is on a remote server for this category
remote_path = 0
##### Set to path where download client places completed downloads locally for this category
watch_dir =
##### Set the recursive directory permissions to the following (0 to disable)
chmodDirectory = 0
[SickBeard]
#### autoProcessing for TV Series
#### tv - category that gets called for post-processing with SB
@@ -166,52 +130,6 @@
process_method =
# force processing of already processed content when running a manual scan.
force = 0
# In addition to force, handle the download as a priority download.
# The processed files will always replace existing qualities, also if this is a lower quality.
is_priority = 0
# tell SickRage/Medusa to delete all source files after processing.
delete_on = 0
# tell Medusa to ignore the associated subtitle check when postponing release
ignore_subs = 0
extract = 1
nzbExtractionBy = Downloader
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
# Enable/Disable deleting ignored files (samples and invalid media files)
delete_ignored = 0
##### Enable if SickBeard is on a remote server for this category
remote_path = 0
##### Set to path where download client places completed downloads locally for this category
watch_dir =
##### Set the recursive directory permissions to the following (0 to disable)
chmodDirectory = 0
##### pyMedusa (fork=medusa-apiv2) uses async postprocessing. Wait a maximum of x minutes for a pp result
wait_for = 10
[SiCKRAGE]
#### autoProcessing for TV Series
#### tv - category that gets called for post-processing with SR
[[tv]]
enabled = 0
host = localhost
port = 8081
apikey =
# api version 1 uses api keys
# api version 2 uses SSO user/pass
api_version = 2
# SSO login requires API v2 to be set
sso_username =
sso_password =
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
web_root =
ssl = 0
delete_failed = 0
# Enable/Disable linking for Torrents
Torrent_NoLink = 0
keep_archive = 1
process_method =
# force processing of already processed content when running a manual scan.
force = 0
# tell SickRage/Medusa to delete all source files after processing.
delete_on = 0
# tell Medusa to ignore the associated subtitle check when postponing release
@@ -346,7 +264,7 @@
apikey =
host = localhost
port = 8085
######
######
library = Set to path where you want the processed games to be moved to.
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
ssl = 0
@@ -364,35 +282,10 @@
##### Set to path where download client places completed downloads locally for this category
watch_dir =
[LazyLibrarian]
#### autoProcessing for LazyLibrarian
#### books - category that gets called for post-processing with LazyLibrarian
[[books]]
enabled = 0
apikey =
host = localhost
port = 5299
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
ssl = 0
web_root =
# Enable/Disable linking for Torrents
Torrent_NoLink = 0
keep_archive = 1
extract = 1
# Set this to minimum required size to consider a media file valid (in MB)
minSize = 0
# Enable/Disable deleting ignored files (samples and invalid media files)
delete_ignored = 0
##### Enable if LazyLibrarian is on a remote server for this category
remote_path = 0
##### Set to path where download client places completed downloads locally for this category
watch_dir =
[Network]
# Enter Mount points as LocalPath,RemotePath and separate each pair with '|'
# e.g. MountPoints = /volume1/Public/,E:\|/volume2/share/,\\NAS\
mount_points =
mount_points =
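The mount_points comment above packs LocalPath,RemotePath pairs separated by '|'; the plex_sections option further down uses the same pair-list shape. A minimal parsing sketch (parse_pairs() is an illustrative name, not a helper from this codebase):

def parse_pairs(value):
    # Parse 'a,b|c,d' into [('a', 'b'), ('c', 'd')]; a blank value yields [].
    if not value.strip():
        return []
    return [tuple(pair.split(',', 1)) for pair in value.split('|')]

assert parse_pairs('/volume1/Public/,/mnt/remote1|/volume2/share/,/mnt/remote2') == [
    ('/volume1/Public/', '/mnt/remote1'),
    ('/volume2/share/', '/mnt/remote2'),
]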
[Nzb]
###### clientAgent - Supported clients: sabnzbd, nzbget
@@ -403,17 +296,15 @@
sabnzbd_apikey =
###### Enter the default path to your default download directory (non-category downloads). this directory is protected by safe_mode.
default_downloadDirectory =
# enable this option to prevent nzbToMedia from running in manual mode and scanning an entire directory.
no_manual = 0
[Torrent]
###### clientAgent - Supported clients: utorrent, transmission, deluge, rtorrent, vuze, qbittorrent, synods, other
###### clientAgent - Supported clients: utorrent, transmission, deluge, rtorrent, vuze, qbittorrent, other
clientAgent = other
###### useLink - Set to hard for physical links, sym for symbolic links, move to move, move-sym to move and link back, and no to not use links (copy)
useLink = hard
###### outputDirectory - Default output directory (categories will be appended as sub directory to outputDirectory)
outputDirectory = /abs/path/to/complete/
###### Enter the default path to your default download directory (non-category downloads). this directory is protected by safe_mode.
###### Enter the default path to your default download directory (non-category downloads). this directory is protected by safe_mode.
default_downloadDirectory =
###### Other categories/labels defined for your downloader. Does not include CouchPotato, SickBeard, HeadPhones, Mylar categories.
categories = music_videos,pictures,software,manual
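Each useLink mode documented in the hunk above implies a different filesystem operation. A rough sketch of the dispatch, assuming move-sym means move the file and leave a symlink at the old location (link_file() is illustrative, not the project's actual helper):

import os
import shutil

def link_file(src, dst, use_link):
    if use_link == 'hard':
        os.link(src, dst)        # physical (hard) link
    elif use_link == 'sym':
        os.symlink(src, dst)     # symbolic link
    elif use_link == 'move':
        shutil.move(src, dst)
    elif use_link == 'move-sym':
        shutil.move(src, dst)    # move, then link back to the original path
        os.symlink(dst, src)
    else:                        # 'no' -> plain copy
        shutil.copy2(src, dst)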
@@ -434,22 +325,15 @@
DelugeUSR = your username
DelugePWD = your password
###### qBittorrent (You must edit this if you're using TorrentToMedia.py with qBittorrent)
qBittorrentHost = localhost
qBittorrenHost = localhost
qBittorrentPort = 8080
qBittorrentUSR = your username
qBittorrentPWD = your password
###### Synology Download Station (You must edit this if you're using TorrentToMedia.py with Synology DS)
synoHost = localhost
synoPort = 5000
synoUSR = your username
synoPWD = your password
###### ADVANCED USE - ONLY EDIT IF YOU KNOW WHAT YOU'RE DOING ######
deleteOriginal = 0
chmodDirectory = 0
resume = 1
resumeOnFailure = 1
# enable this option to prevent TorrentToMedia from running in manual mode and scanning an entire directory.
no_manual = 0
[Extensions]
compressedExtensions = .zip,.rar,.7z,.gz,.bz,.tar,.arj,.1,.01,.001
@@ -463,15 +347,15 @@
plex_host = localhost
plex_port = 32400
plex_token =
plex_ssl = 0
plex_ssl = 0
# Enter Plex category to section mapping as Category,section and separate each pair with '|'
# e.g. plex_sections = movie,3|tv,4
plex_sections =
plex_sections =
[Transcoder]
# getsubs. enable to download subtitles.
getSubs = 0
# subLanguages. create a list of languages in the order you want them in your subtitles.
# subLanguages. create a list of languages in the order you want them in your subtitles.
subLanguages = eng,spa,fra
# transcode. enable to use transcoder
transcode = 0
@@ -486,7 +370,7 @@
# outputQualityPercent. used as -q:a value. 0 will disable this from being used.
outputQualityPercent = 0
# outputVideoPath. Set path you want transcoded videos moved to. Leave blank to disable.
outputVideoPath =
outputVideoPath =
# processOutput. 1 will send the outputVideoPath to SickBeard/CouchPotato. 0 will send original files.
processOutput = 0
# audioLanguage. set the 3 letter language code you want as your primary audio track.
@@ -505,18 +389,16 @@
externalSubDir =
# hwAccel. 1 will set ffmpeg to enable hardware acceleration (this requires a recent ffmpeg)
hwAccel = 0
# generalOptions. Enter your additional ffmpeg options (these insert before the '-i' input files) here with commas to separate each option/value (i.e replace spaces with commas).
# generalOptions. Enter your additional ffmpeg options here with commas to separate each option/value (i.e replace spaces with commas).
generalOptions =
# otherOptions. Enter your additional ffmpeg options (these insert after the '-i' input files and before the output file) here with commas to separate each option/value (i.e replace spaces with commas).
otherOptions =
# outputDefault. Loads default configs for the selected device. The remaining options below are ignored.
# If you want to use your own profile, leave this blank and set the remaining options below.
# outputDefault profiles allowed: iPad, iPad-1080p, iPad-720p, Apple-TV2, iPod, iPhone, PS3, xbox, Roku-1080p, Roku-720p, Roku-480p, mkv, mkv-bluray, mp4-scene-release
# outputDefault profiles allowed: iPad, iPad-1080p, iPad-720p, Apple-TV2, iPod, iPhone, PS3, xbox, Roku-1080p, Roku-720p, Roku-480p, mkv, mp4-scene-release
outputDefault =
#### Define custom settings below.
outputVideoExtension = .mp4
outputVideoCodec = libx264
VideoCodecAllow =
VideoCodecAllow =
outputVideoPreset = medium
outputVideoResolution = 1920:1080
outputVideoFramerate = 24
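Per the generalOptions/otherOptions comments above, extra ffmpeg flags are comma-separated (commas stand in for spaces) and differ only in placement: generalOptions go before the '-i' input, otherOptions between the input and the output file. A minimal sketch of that assembly (build_ffmpeg_command() is illustrative, not the transcoder's real function):

def build_ffmpeg_command(input_file, output_file, general_options='', other_options=''):
    cmd = ['ffmpeg']
    if general_options:
        cmd += general_options.split(',')  # inserted before the '-i' input
    cmd += ['-i', input_file]
    if other_options:
        cmd += other_options.split(',')    # inserted after the input, before the output
    cmd.append(output_file)
    return cmd

# e.g. build_ffmpeg_command('in.mkv', 'out.mp4', '-fflags,+genpts')
# -> ['ffmpeg', '-fflags', '+genpts', '-i', 'in.mkv', 'out.mp4']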
@@ -524,15 +406,15 @@
outputVideoCRF = 19
outputVideoLevel = 3.1
outputAudioCodec = ac3
AudioCodecAllow =
AudioCodecAllow =
outputAudioChannels = 6
outputAudioBitrate = 640k
outputAudioTrack2Codec = libfaac
AudioCodec2Allow =
outputAudioTrack2Channels = 2
AudioCodec2Allow =
outputAudioTrack2Channels = 2
outputAudioTrack2Bitrate = 128000
outputAudioOtherCodec = libmp3lame
AudioOtherCodecAllow =
AudioOtherCodecAllow =
outputAudioOtherChannels =
outputAudioOtherBitrate = 128000
outputSubtitleCodec =
@@ -589,4 +471,4 @@
# enter a list (comma separated) of Group Tags you want removed from filenames to help with subtitle matching.
# e.g remove_group = [rarbag],-NZBgeek
# be careful if your "group" is a common "real" word. Please report if you have any group replacements that would fall in this category.
remove_group =
remove_group =

View file

@@ -1,74 +0,0 @@
# Python package
# Create and test a Python package on multiple Python versions.
# Add steps that analyze code, save the dist with the build record, publish to a PyPI-compatible index, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/python
trigger:
- master
jobs:
- job: 'Test'
pool:
vmImage: 'Ubuntu-latest'
strategy:
matrix:
Python39:
python.version: '3.9'
Python310:
python.version: '3.10'
Python311:
python.version: '3.11'
Python312:
python.version: '3.12'
Python313:
python.version: '3.13'
maxParallel: 3
steps:
- script: |
sudo apt-get update
sudo apt-get install ffmpeg
displayName: 'Install ffmpeg'
- task: UsePythonVersion@0
inputs:
versionSpec: '$(python.version)'
architecture: 'x64'
- script: python -m pip install --upgrade pip
displayName: 'Install dependencies'
- script: |
pip install pytest
pytest tests --doctest-modules --junitxml=junit/test-results.xml
displayName: 'pytest'
- script: |
rm -rf .git
python cleanup.py
python TorrentToMedia.py
python nzbToMedia.py
displayName: 'Test source install cleanup'
- task: PublishTestResults@2
inputs:
testResultsFiles: '**/test-results.xml'
testRunTitle: 'Python $(python.version)'
condition: succeededOrFailed()
- job: 'Publish'
dependsOn: 'Test'
pool:
vmImage: 'Ubuntu-latest'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.x'
architecture: 'x64'
- script: |
python -m pip install setuptools
python setup.py sdist
displayName: 'Build sdist'

changelog.txt (new file): 769 added lines
View file

@@ -0,0 +1,769 @@
Change_LOG / History
V12.0.8
Refactor and Rename Modules
Add Medusa API
Fix return parsing from HeadPhones
Add Python end of life detection and reporting
Fix Py3 return from Popen (Transcoder and executable path detection)
Add variable sys_path to config (allows user to specify separate path for binary detection)
Various Py3 compatibility fixes
Log successful when returning to Radarr CDH
Add exception handling when failing to return to original directory (due to permissions)
Don't load Torrent Clients when calling NZB processing
V12.0.7
Refactor utils
Fix git subprocess
Fix cleanup script output
Add extra logging for fork detection
Additional code clean up
V12.0.6
Hotfix for Manual Torrent run results.
V12.0.5
Proper fix for source cleaner
V12.0.4
Hotfix missed commit for source cleaner
V12.0.3
Hotfix cleaning for source installs
V12.0.2
Fix missed ProcessResult
V12.0.1
Added Python 3 support
Updated all dependencies
Major code refactoring
Various bug fixes
Hotfix NZBGet not working without comment
V12.0.0
NOTE:
- This release contains major backwards-incompatible changes to the internal API
- Windows users will need to manually install pywin32
Add Python 3 support
Add cleanup script for post-update cleanup
Update all dependencies
Move vendored packages in `core` to `libs`
Move common libs to `libs/common`
Move custom libs to `libs/custom`
Move Python 2 libs to `libs/py2`
Move Windows libs to `libs/windows`
Fix PEP8
Add feature to make libs importable
Add feature to auto-update libs
Add path parent option to module path and default to using local path
Update invisible.cmd to return errorlevel
Update invisible.vbs to return exit code of 7zip
Update extractor.py for correct return code
Added debugging to extractor
Add option for windows extraction debugging
Remove surplus debug
Fix handling of None Password file
Fix invisible windows extraction
Fix execution of extraction
Start vbs directly from extractor
Delete invisible.cmd
Use args instead of Wscript.Arguments
Fix postprocessing of failed / bad downloads (#1091)
Fix release is None
Fix UnRAR failing
V11.8.1 12/29/2018
Fix cleanup for nzbToMedia installed as a git submodule
V11.8.0 12/28/2018
Add version information
Add bumpversion support
Fix automatic cleanup script
V11.7 12/25/2018
Merry Christmas and Happy Holidays!
Add cleanup script to clean up bytecode
Add automatic cleanup on update
NOTE: Cleanup will force-run every time during a transitional period to minimize issues with upcoming refactoring
V11.06 11/03/2018
updates to incorporate importMode for NzbDrone/Sonarr and Radarr.
Correct typo(s) for "Lidarr" category.
only pass id to CP if release id found.
fix issue with no release id and no imdbid.
Fixed NZBGet save of Lidarr config.
improve logging for imdb id lookup.
fix minor description error.
add better logging of movie name when added to CP.
attempt to clean up Lidarr api commands.
update to use Mylar api.
set Torrent move-sym option to force SickRage process_method.
add rmDir import for HeadPhones processing.
change sickrage and sickchill names and modify api process to work with multiple sick* forks.
add NZBGet WebUI set of delete failed for HP.
fix qbittorrent to delete permanently (remove files on delete).
V11.05 27/06/2018
Add qBittorrent support.
Add SickGear support.
Add SiCKRAGE api support.
Fix for single file download.
Disable media check for failed HeadPhones downloads.
Added Lidarr flow. Still awaiting confirmation of api interface commands and return.
V11.04 30/12/2017
do not embed .sub.
add proper check of sub streams #1150 and filter out commentary.
traverse audiostreams in reverse.
add catch for OMDB api errors.
convert all listdir functions to unicode.
perform extraction, corruption checks, and transcoding when no server.
fix list indices errors when no fork set.
fix CP server responding test. Add trailing /.
use basestring to match unicode path in transcoder.
attempt autofork even if no username set.
allow long paths in Cleandir.
add Radarr handling.
minor fix for transcoder.
fix non-iterable type.
fix logging error.
DownloadedMovieScan updated to DownloadedMoviesScan.
add check to exception rename to not over-write existing.
don't try and process when no api/user.
Added omdbapikey functionality
force sonarr processing to "move".
already extracted archive not skipped.
fix text for keep_archive.
try to avoid spaces in outputdir.
change subtitle logging level.
Increase shutil copy buffer length from 4KB to 512KB.
improve user script media extension handling
add par2 rename/repair (linux only).
V11.03 15/01/2017
Add -o to output path for 7zip.
Try album directory then parent directory for HeadPhones variants.
Prevent duplication of audio tracks in Transcoder.
Update uTorrent Client interface.
Updated to use force_next for SickRage to prevent postprocessing in queue.
V11.02 30/11/2016
Added default "MKV-SD"
Added VideoResolution in nzbGet.
Fix Headphones directory parsing.
Remove proc_type when failed.
Added option "no_extract_failed"
Updated beautifulsoup 4 module.
Check for existence of codec_type key when counting streams.
Added default fallback for sabnzbd port = 8080.
V11.01 30/10/2016
Updated external modules and changed config to dict.
Started making code python 3 compatible.
Fixed auto-fork detection for new Sick* branches.
Fixed invalid indexing scope for TorrentToMedia.
Add Medusa fork and new param "ignore_subs".
Added check for language tag size, convert 3 letter language codes.
Fixed guessit call to allow guessit to work off the full file path.
Add the ability to set octal permissions on the processed files prior to handing it off to Sickrage/Couchpotato.
Catch errors if not audio codec name.
Allow manual scans to continue.
Revert to 7zip if others missing.
Fixed int conversion base 8 from string or int.
Added more logging to server tests.
Added MKV-SD Profile.
Check for preferred codec even if not preferred language.
Don't convert VobSub to mov_text.
V10.15 29/05/2016
Don't copy archives when set to extract.
Specifically check for failed download handing regardless of fork.
sort Media file results by pathlength.
Synchronize changed SickRage directory param.
Don't remove release group information from base folder.
Don't add imdb id to file name when move-sym in use.
Fix string and integer concat error.
V10.14 13/03/2016
Add option move-sym to create symlink to renamed files.
Transmission comment fix.
Prevent int errors in chmod.
Fix urllib warnings.
Create unique directory in output in case of rename error in sick/couch.
Add -strict -2 to dts codec.
Added support to handle archives in SickRage.
Report Downloader failures to SickRage.
Continue on encoding detection failure.
Strip trailing and leading whitespaces from `mount_points`.
Also check sabnzbd history for nzoid.
Add generic run mode (manually enter parameters for execution).
V10.13 11/12/2015
Always add -strict -2 to aac codec.
Add "delete_on" for SickRage.
Add https handling for SABnzbd.
Added the ability to chmod Torrent directory before processing.
Add option to not resume failed torrent.
Add Option to not resume successful torrent.
Add process name to final SABnzbd message.
Fix SSL warnings for comic processing.
Add .ts to mediaExtensions.
Don't update plex on failed.
Add option to preserve archive files after extraction.
Force_Clean doesn't over-ride delete_failed.
Added support for SickRageTV and SickRage branches.
V10.12 21/09/2015
Updated Requests Module to Latest Version. Works with Python 2.7.10
Add .img files to transcoder extraction routines.
V10.11 28/05/2015
Use socket to verify if running on Linux. Prevents issues with stale pid.
Add timeouts and improve single instance handling.
Prevent Scale Up.
Improve regex for rename script.
Improve safe rename functionality.
Ignore .bts extensions.
Don't process output when no transcoding needed.
Ignore Thumbs.db on manual run.
Rename nzbtomedia to core, to prevent errors on non-case-sensitive file systems.
Mark as bad if no media files found.
Increase server responding timeout.
Don't use last modified entry for CP renamer when no imdb id found.
Add plex library update.
V10.10 29/01/2015
Fix error when extracting on windows. (added import of subprocess)
Fix subtitles download and embedding.
V10.9 19/01/2015
Prevent Errors when trying next release from CouchPotato (CouchPotato failed handling)
Prevent check for status change when using Manage scan (CouchPotato)
Better Tooltip for "host" in NZBGet settings.
Continue if failed to connect to Torrent Client.
Fixed resolution settings in Transcoder.
Make Windows Linking and extraction invisible.
V10.8 15/12/2014
Impacts All
Removed "stand alone" scripts DeleteSamples and ResetDateTimes. These are now in https://github.com/clinton-hall/GetScripts
Removed chp.exe and replaced with vb script.
Improved Sonarr(NZBDrone) CDH support.
Use folder Permissions to set permissions for sub directories and files following extract.
Added support for new SickRage Login.
Impacts NZBs
Get NZOID from SABnzbd for better release matching.
Impacts Torrents
Now gets Label from Deluge.
Changed SSL version for updated Deluge (0.3.11+)
Impacts Transcoding
Fixed reported bugs.
Fix Audio mapping.
Fix Subtitle mapping from external files.
Fixed scaling errors.
V10.7 06/10/2014
Impacts All
Add Transcoding of iso/images and VIDEO_TS structures.
Improved multiple session handling.
Improve NZBDrone handling (including Torrent Branch).
Multiple bug-fixes.
Impacts NZBs
Add custom "group" replacements to allow better subtitle searching.
Impacts Torrents
Add Vuze Torrent Client support.
V10.6 26/08/2014
Impacts All
Bug Fixes.
Impacts NZBs
Added FailureLink style feedback to dognzb for failed and corrupt downloads.
V10.5 05/08/2014
Impacts All
Bug Fixes for Transcoder.
Support for lib-av as well as ffmpeg.
Fixed SickBeard auto-fork detection.
V10.4 30/07/2014
Impacts All
Suppress printed messages from extractor.
Allow no sub languages to be specified.
Ignore hdmv_pgs_subtitle codecs in transcoder.
Fix remote directory use with HeadPhones.
Only use nice and ionice when available.
Impacts NZBs
Cleaner exit logging for SABnzbd.
Impacts Torrents
Improved manual run handling.
V10.3 15/07/2014
Impacts All
Fix auto-fork to identify default fork.
V10.2 15/07/2014
Impacts All
Bug Fixes.
If extracting files and extraction not successful, return Failure and Don't delete archives.
V10.1 11/07/2014
Impacts All
Improved Transcoder
Minor Bug Fixes
Now accepts Number of Audio Channels for Transcoder options.
Userscript can perform video corruption check first.
Improved extraction. Extract all subdirs and multiple "unique" archives in a directory.
Check if already running and wait for complete before continuing.
Impacts NZBs
Allow UserScript for NZBs
Impacts Torrents
Do Extraction Before Flatten
V10.0 03/07/2014
Impacts All
Changed to python2 (some systems now come with python = python3 as default).
Major changes to Transcoder. Only copy streams where possible.
Pre-defined Transcode options for some devices.
Added log_env option to capture environment variables.
Improved remote directory handling.
Various fixes.
V9.3 09/06/2014
Impacts Torrents
Allow Headphones to remove torrents and data after processing.
Delete torrent if uselink = move
Added forceClean for outputDir. Works if file permissions prevent CP/SB from moving files.
Ignore .x264 from archive "part" checks.
Changed handling of TPB/Pistachitos SB forks. Default is to link/extract here. Disabled by Torrent_NoLink = 1.
Changed handling for HeadPhones Now that HeadPhones allows process directory to be defined.
Restructured Flow and streamlined process
Impacts NZBs
Fix setting of Mylar config from NZBGet.
Created shell scripts for nzbTo{App}. All now call the common nzbToMedia.py
Impacts All
Changes to Couchpotato API for [nosql] added. Keeps aligned with current CouchPotato develop branch.
Add Auto Detection of SickBeard Fork. Thanks @echel0n
Added config class, re-coded migratecfg, misc bugfixes and code cleanup.
Added dynamic timeout based on directory size.
Added process_Method for SickBeard.
Changed configuration migrate process.
Major structure and process re-format.
Improved Manual Call Handling
Now prints github version into log when available.
Changed log location and format.
Added autoUpdate option via git.
All calls now use requests, not urllib.
All details now saved into Database. Can be used for more features later ;)
Improved status checking to ensure we only cleanup when successfully processed.
Huge Thanks @echel0n
V9.2 05/03/2014
Impacts All
Change default "wait_for" to 5 mins. CouchPotato can take more than 2 minutes to return on renamer.scan request.
Added SickBeard "wait_for" to be customizable to prevent unwanted timeouts.
Fixed ascii conversion of directory name.
Added list of common sample ids and a way to set deletion of All media files less than the sample file size limit.
Added urlquote to dirName for CouchPotato (allows special characters in directory name)
Impacts NZBs
Fix Error with manual run of nzbToMedia
Make sure SickBeard receives the individual download dir.
Added option to set SickBeard extraction as either Downloader or Destination (SickBeard).
Fixed Health Check handling for NZBGet.
Impacts Torrents
Added option to run userscript once only (on directory).
Added Option to not flatten specific categories.
Added rtorrent integration.
Fixes for HeadPhones use (no flatten), no move/sym, and fix move back to original.
V9.1 24/01/2014
Impacts All
Don't wait to verify status change in CouchPotato when no initial status (manual run)
Now use "wait_for" timing as socket timeout on the renamer.scan. It appears to now be delayed in confirming success.
V9.0 19/01/2014
Impacts NZBs
SABnzbd 0.7.17+ now uses 8 arguments, not 7. These scripts now support the extra argument.
Impacts Torrents
Always pause before processing.
Moved delete to end of routine, only when successful process occurs.
Don't flatten hp category (in case multi cd album)
Added UserScript to be called for un-categorized downloads and other defined categories.
Added Torrent Hash to Deluge to assist with movie ID.
Added passwords option to attempt extraction of passworded archives.
Impacts All
Added default socket timeout to prevent script hanging when the destination servers don't respond to http requests.
Made processing Category Centric as an option for people running multiple versions of SickBeard and CouchPotato etc.
Added TPB version of SickBeard processing. This now uses a fork pass-in instead of failed_fork.
Added new option to convert files, directories, and parameters to ASCII. To be used if you regularly download "foreign" titles and have problems with CP/SB.
Now only parse results from CouchPotato 50 at a time to prevent error with large wanted list.
V8.5 05/10/2013
Impacts Torrents
Added Transmission RPC client.
Now pauses and resumes or removes from transmission.
Added debugging of input arguments from torrent clients.
Impacts NZBs
Removed obsolete NZBget (pre V11) code.
Impacts All.
Fixed HeadPhones processing.
Fixed movie parsing in CPS api.
V8.4 14/09/2013
Impacts Torrents
Don't include 720p or 1080p as parts for extracting.
Extracts all sub-folders.
Added option to Move files.
Fix for single file torrents linked to subfolder of same name.
Impacts All
Added option for SickBeard delay (for forks that use 1 minute check).
Updated to new api call in CouchPotato (movie.searcher.try_next)
V8.3 11/07/2013
Impacts All
Allow use of experimental AAC codec in transcoder.
Remove username and password when api key is used.
Add .m4v as media
Added ResetDateTime.py
Manual Option for Mylar script.
Fixes for Gamez script.
Impacts NZBs
Added option to remove folder path when CouchPotato is on a different system to the downloader.
NZBGet v11.0 stable now current.
V8.2 26/05/2013
Impacts All
Add option to set the "wait_for" period. This is how long the script waits to see if the movie changes status in CouchPotato.
minSampleSize now moved to [extensions] section and available for nzbs and torrents.
New option in transcoder to use "niceness" on Linux.
Remove excess logging from transcoder.
Impacts NZBs
Added Flatten of input directory and test for media files (including sample deletion) in autoProcessTV
Impacts Torrents
Fixed Delete_Original option
Fix typo which caused crash if not sickbeard or couchpotato.
V8.1 04/05/2013
Impacts All
Improved exception logging for error conditions
Impacts Torrents
Fixed an import error when extracting
Impacts NZBs
Fixed passthrough of inputName from NZBGet to pass the .nzb extension (required for SickBeard's failed fork)
V8.0 28/04/2013
Impacts All
Added download_id pass through for CouchPotato release matching
Uses single directory scanning for CouchPotato renamer
Matches imdb_id, download_id, clientAgent with CPS database
Impacts NZB
Added direct configuration support via nzbget webUI (nzbget v11+)
All nzb scripts are now directly callable in nzbget v11
Settings made in nzbget webUI will be applied to the autoProcessMedia.cfg when the scripts are run from nzbget.
Fixed TLS support for NZBGet email notifications (for V10 support)
V7.1 28/03/2013
Impacts Torrents
Added test for chp.exe. If not found, calls 7zip directly
Added test for multi-part archives. Will only extract part1
Impacts NZB
Fixed failed download handling from nzbget (won't delete or move root!!!)
Fixed sendEmail for nzbget to use html with <br> line breaks
V7.0 21/03/2013
Impacts Torrents
Added option to delete torrent and original files after processing (utorrent)
Impacts NZB
Added nzbget windows script (to be compiled)
Changed nzbget folders to previous X.X, current-stable, testing X.X format
Fix nzbget change directory failure problem
Improved nzbget logging
Add logging to nzbget email notification
Synchronised v10 to latest nzbget testing scripts
Added failed download folder for failed downloads in nzbget
Added option to delete failed in nzbget
Created a single nzbToMedia.py script for all categories (will be the only nzb script compiled for windows)
Impacts All
Added rotating log file handler
Added ffmpeg transcoder
Added CouchPotato status check to provide confirmation of renamer complete
CouchPotato status check will timeout after 2 minutes in case something goes wrong
Improved logging.
Improved scene exception handling.
Major changes to code layout
Better efficiency
Added support for Mylar, Gamez, and HeadPhones
Moved many of the "support" files to the autoProcess directory so that they aren't visible (looks neater)
Added migration tool to update .cfg file on first run following update.
V6.0 03/03/2013
Impacts Torrents
Bundled 7zip binaries and created extraction functions.
Now pauses uTorrent seeding before calling renamer in SickBeard/CouchPotatoServer
uTorrent Resumes seeding after files (hardlinks) have been renamed
Impacts NZB
Added local file logging.
Impacts All
Added scene exception handling. Currently for "QoQ"
Improved code layout.
V5.1 22/02/2013
Improved category search to loop through directory structure.
Added support for deluge and potentially other Torrent clients.
uTorrent now must pass "utorrent" before "%D" "%N" "%L"
added test for date modified (less than 5 mins ago) if root directory and no torrent name.
".cp(ttxxxxxx)" tag preserved in directory name for CPS renaming.
All changes affect Torrent handling. Should not impact NZB handling.
V5.0 20/02/2013
Fixed Extraction and Hard-Linking support in TorrentToMedia
Added new config options for movie file extensions, metadata extensions, compressed file extensions.
Added braid to sync linktastic.
Windows Builds now run without console displaying.
All changes affect Torrent handling. Should not impact NZB handling.
V4.3 17/02/2013
Added Logger in TorrentToMedia.py
Added nzbget V10.0 script.
Delete sample files in nzbget postprocessing
Single Version for all files.
V4.2 12/02/2013
Fixes to TorrentToMedia
V4.1 02/02/2013
Added Torrent Support (µTorrent and Transmission).
Added manual run option for nzbToSickBeard.
Changed nzbGet script to use move not copy and remove.
Merged all .cfg scripts into one (autoProcessMedia.cfg).
Made all scripts executable (755) on GitHub.
Added category limits for email support in nzbget.
Fixed issue with replacements (of paths) in email messages in nzbget.
V4.0 21/12/2012
Changed name from nzbToCouchPotato to nzbToMedia; now supports multiple post-processing from two nzb download clients.
Added email support for nzbget.
Version printing now for each of the nzbTo* scripts.
Added "custom" post-process support in nzbget.
Added post-process script output logging in nzbget.
V3.2 11/12/2012
Added failed handling from NZBGet. Thanks to schumi2004.
Also added support for the "failed download" development branch of SickBeard from https://github.com/Tolstyak/Sick-Beard.git
V3.1 02/12/2012
Added conversion to ensure the status passed to the autoProcessTV and autoProcessMovie is always handled as an integer.
V3.0 30/11/2012
Changed name from sabToCouchPotato to nzbToCouchPotato as this now included NZBGet support.
Packaged the NZBGet postprocess files as well as modified version of nzbToSickBeard (from sabToSickBeard).
V2.2 05/10/2012
Re-wrote the failed download handling to just search for the imdb ttXXXX identifier (as received from the nzb name)
Now issues only two api calls: movie.list and searcher.try_next
Should be more robust with regards to changes to CPS and also utilises fewer resources (i.e. fewer api calls and less processing).
V2.1 04/10/2012
detected a change in the movie release info format. Fixed the script to work with new format.
V2.0 04/10/2012
Fixed an issue with the failed download handling in that the status id for "snatched" can be different on each installation. now performs a status.list via api to verify the status.
Also including a version print (currently 2.0... yeah original I know) so you know if you are current.
removed the multiple versions. The former _recue version will perform the standard renamer only if "postprocess only verified downloads" (default) is enabled in SABnzbd. Also, the "unix" version works fine in Windows, only the "dos" version gave issue in Linux. In other words, this one version should work for all systems.
For historical reasons, the former download stats apply to the old versions:
sabToCouchPotato-dos - downloaded 143 times
sabToCouchPotato-unix - downloaded 205 times
sabToCouchPotato_recue - downloaded 105 times
Also updated the Windows Build to include the same changes. I have removed the link to the linux build as this didn't work on all systems and it really shouldn't be necessary. Let me know if you need this updated.
V1.9 18/09/2012
compiled (build) versions of sabToSickBeard and sabToCouchPotato added for both Linux and Windows. links at top of post.
V1.9 16/09/2012
Added a compiled .exe version for windows. Should prevent the "python not recognised" issue and allow this to be used in conjunction with the windows build on systems that do not have python installed.
This is the full (_recue) version. If sabnzbd is set to post-process only verified jobs, this will not recue and will function as a standard renamer.
V1.9 27/08/2012
Following the latest CPS update on the master branch, this script is not really needed as CPS actually polls the SABnzbd api and does the same as this script (internally).
However, if you have any issues with CPS constantly downloading the same movies, or filling the log with polling SABnzbd for completed movies, or otherwise prefer to use this method, then you can still use this script and make the following changes in CPS:
Settings, renamer, run every (advanced) = set to 1440 (or some longer interval)
Settings, renamer, next On_failed = off
Settings, downloaders, SABnzbd, Delete failed = off.
V1.9 06/08/2012
Also added the integer handling of status in the sabToSickBeard.py script to prevent SickBeard trying to postprocess a failed TV download. Only impacts the _recue version
V1.8 05/08/2012
Modified the _recue version as SABnzbd 0.7.3 now appears to pass the "status" variable as a string not an integer!!! (or i had it wrong on first attempt :~)
This causes the old script to identify completed downloads as failed and recues the next download!
The fix here should work with any conceivable subsequent updates in that I now make the sys.argv[7] an integer before passing it. if the variable already is an integer, this shouldn't cause any issues.
status = int(sys.argv[7])
autoProcessMovie.process(sys.argv[1], sys.argv[2], status)
V1.7 02/08/2012
Added a new version sabToCouchPotato_recue
This works the same as the other versions, but includes support for recuing failed downloads.
This is new, and only tested once (with success ) at my end.
To get this to run you will need to uncheck the "post-process only verified jobs" option in SABnzbd. Also, to avoid issues with SickBeard postprocessing, I have included a modified postprocessing for SickBeard that just checks for failed status and then exits (the SickBeard Team are currently working on failed download handling and I will hopefully make this script work with that in the future)
This re-cue works as follows:
Performs an api call to CPS to get a list of all wanted movies (with all data including the releases and status etc)
It finds the nzbname (from SABnzbd) in the json list returned from the api call (movie.list) and identifies the movie id and release id.
It performs an api call to mark the release as "ignore" and then performs another api call to refresh the movie.
If another (next best) release that meets your criteria is already available it will send that to SABnzbd, otherwise it will wait until a new release becomes available.
I have left the old versions here for now for those who don't want to try this. Also, if you don't uncheck the "post-process only verified jobs" in SABnzbd this code will perform the same as the previous versions.
The next issue to tackle (if this works) is automating the deletion of failed download files in SABnzbd.... but I figured this was a start.
V1.6 22/07/2012
no functionality change, but providing scripts in both unix and dos format to prevent exit(127) errors.
if you are using windows, use the dos format. if you are using linux, use the unix format and unzip the files in linux.
V1.5 17/07/2012
Added back the web_root parameter to set the URL base.
V1.4 17/07/2012
Uploaded the latest version.
Changes:
Removed support for a movie.downloaded API call that was only used in a separate branch and is not expected to be merged.
Modified the passthrough to allow a manual call to this script (i.e. does not need to be called from SABnzbd).
Added a help file that explains the setup options in a bit more detail.
Modified the .cfg.sample file to use 60 as the default delay, and now specify that 60 should be your minimum to ensure the renamer.scan finds newly extracted movies.
V1.3 and earlier were not fully tracked, as the script itself (not files) was posted on the QNAP forums.
View file
@ -1,19 +1,12 @@
#!/usr/bin/env python
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from __future__ import print_function
import os
import subprocess
import sys
import shutil
sys.dont_write_bytecode = True
FOLDER_STRUCTURE = {
'libs': [
'common',
@ -24,8 +17,6 @@ FOLDER_STRUCTURE = {
'core': [
'auto_process',
'extractor',
'plugins',
'processor',
'utils',
],
}
@ -33,7 +24,6 @@ FOLDER_STRUCTURE = {
class WorkingDirectory(object):
"""Context manager for changing current working directory."""
def __init__(self, new, original=None):
self.working_directory = new
self.original_directory = os.getcwd() if original is None else original
@ -52,7 +42,7 @@ class WorkingDirectory(object):
original_directory=self.original_directory,
error=error,
working_directory=self.working_directory,
),
)
)
@ -116,7 +106,6 @@ def clean_bytecode():
result = git_clean(
remove_directories=True,
force=True,
ignore_rules=True,
exclude=[
'*.*', # exclude everything
'!*.py[co]', # except bytecode
View file
@ -1,11 +1,6 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from __future__ import print_function
import itertools
import locale
@ -16,9 +11,9 @@ import subprocess
import sys
import time
import eol
import libs.autoload
import libs.util
import eol
if not libs.autoload.completed:
sys.exit('Could not load vendored libraries.')
@ -51,22 +46,12 @@ from six.moves import reload_module
from core import logger, main_db, version_check, databases, transcoder
from core.configuration import config
from core.plugins.downloaders.configuration import (
configure_nzbs,
configure_torrents,
configure_torrent_class,
)
from core.plugins.downloaders.utils import (
pause_torrent,
remove_torrent,
resume_torrent,
)
from core.plugins.plex import configure_plex
from core.utils import (
RunningProcess,
category_search,
clean_dir,
copy_link,
create_torrent_class,
extract_files,
flatten,
get_dirs,
@ -74,55 +59,54 @@ from core.utils import (
list_media_files,
make_dir,
parse_args,
pause_torrent,
rchmod,
remove_dir,
remove_read_only,
remove_torrent,
restart,
resume_torrent,
sanitize_name,
update_download_info_status,
wake_up,
)
__version__ = '12.1.13'
__version__ = '12.0.8'
# Client Agents
NZB_CLIENTS = ['sabnzbd', 'nzbget', 'manual']
TORRENT_CLIENTS = ['transmission', 'deluge', 'utorrent', 'rtorrent', 'qbittorrent', 'other', 'manual']
# sabnzbd constants
SABNZB_NO_OF_ARGUMENTS = 8
SABNZB_0717_NO_OF_ARGUMENTS = 9
# sickbeard fork/branch constants
FORK_DEFAULT = 'default'
FORK_FAILED = 'failed'
FORK_FAILED_TORRENT = 'failed-torrent'
FORK_SICKRAGE = 'SickRage'
FORK_SICKCHILL = 'SickChill'
FORK_SICKCHILL_API = 'SickChill-api'
FORK_SICKBEARD_API = 'SickBeard-api'
FORK_MEDUSA = 'Medusa'
FORK_MEDUSA_API = 'Medusa-api'
FORK_MEDUSA_APIV2 = 'Medusa-apiv2'
FORK_SICKGEAR = 'SickGear'
FORK_SICKGEAR_API = 'SickGear-api'
FORK_STHENO = 'Stheno'
FORKS = {
FORK_DEFAULT: {'dir': None},
FORK_FAILED: {'dirName': None, 'failed': None},
FORK_FAILED_TORRENT: {'dir': None, 'failed': None, 'process_method': None},
FORK_SICKRAGE: {'proc_dir': None, 'failed': None, 'process_method': None, 'force': None, 'delete_on': None},
FORK_SICKCHILL: {'proc_dir': None, 'failed': None, 'process_method': None, 'force': None, 'delete_on': None, 'force_next': None},
FORK_SICKCHILL_API: {'path': None, 'proc_dir': None, 'failed': None, 'process_method': None, 'force': None, 'force_replace': None, 'return_data': None, 'type': None, 'delete': None, 'force_next': None, 'is_priority': None, 'cmd': 'postprocess'},
FORK_SICKBEARD_API: {'path': None, 'failed': None, 'process_method': None, 'force_replace': None, 'return_data': None, 'type': None, 'delete': None, 'force_next': None, 'cmd': 'postprocess'},
FORK_SICKBEARD_API: {'path': None, 'failed': None, 'process_method': None, 'force_replace': None, 'return_data': None, 'type': None, 'delete': None, 'force_next': None},
FORK_MEDUSA: {'proc_dir': None, 'failed': None, 'process_method': None, 'force': None, 'delete_on': None, 'ignore_subs': None},
FORK_MEDUSA_API: {'path': None, 'failed': None, 'process_method': None, 'force_replace': None, 'return_data': None, 'type': None, 'delete_files': None, 'is_priority': None, 'cmd': 'postprocess'},
FORK_MEDUSA_APIV2: {'proc_dir': None, 'resource': None, 'failed': None, 'process_method': None, 'force': None, 'type': None, 'delete_on': None, 'is_priority': None},
FORK_MEDUSA_API: {'path': None, 'failed': None, 'process_method': None, 'force_replace': None, 'return_data': None, 'type': None, 'delete_files': None, 'is_priority': None},
FORK_SICKGEAR: {'dir': None, 'failed': None, 'process_method': None, 'force': None},
FORK_SICKGEAR_API: {'path': None, 'process_method': None, 'force_replace': None, 'return_data': None, 'type': None, 'is_priority': None, 'failed': None, 'cmd': 'sg.postprocess'},
FORK_STHENO: {'proc_dir': None, 'failed': None, 'process_method': None, 'force': None, 'delete_on': None, 'ignore_subs': None},
FORK_STHENO: {"proc_dir": None, "failed": None, "process_method": None, "force": None, "delete_on": None, "ignore_subs": None}
}
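# ALL_FORKS merges every fork's parameter names into a single {param: None} dict;
# fork auto-detection probes the server with this superset and strips whatever
# parameters the running fork rejects.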
ALL_FORKS = {k: None for k in set(list(itertools.chain.from_iterable([FORKS[x].keys() for x in FORKS.keys()])))}
# SiCKRAGE OAuth2
SICKRAGE_OAUTH_CLIENT_ID = 'nzbtomedia'
SICKRAGE_OAUTH_TOKEN_URL = 'https://auth.sickrage.ca/realms/sickrage/protocol/openid-connect/token'
# NZBGet Exit Codes
NZBGET_POSTPROCESS_PAR_CHECK = 92
NZBGET_POSTPROCESS_SUCCESS = 93
@ -178,11 +162,6 @@ TRANSMISSION_PORT = None
TRANSMISSION_USER = None
TRANSMISSION_PASSWORD = None
SYNO_HOST = None
SYNO_PORT = None
SYNO_USER = None
SYNO_PASSWORD = None
DELUGE_HOST = None
DELUGE_PORT = None
DELUGE_USER = None
@ -207,9 +186,7 @@ META_CONTAINER = []
SECTIONS = []
CATEGORIES = []
FORK_SET = []
MOUNTED = None
GETSUBS = False
TRANSCODE = None
CONCAT = None
@ -221,7 +198,6 @@ VEXTENSION = None
OUTPUTVIDEOPATH = None
PROCESSOUTPUT = False
GENERALOPTS = []
OTHEROPTS = []
ALANGUAGE = None
AINCLUDE = False
SLANGUAGES = []
@ -261,7 +237,6 @@ SHOWEXTRACT = 0
PAR2CMD = None
FFPROBE = None
CHECK_MEDIA = None
REQUIRE_LAN = None
NICENESS = []
HWACCEL = False
@ -380,7 +355,6 @@ def configure_general():
global FFMPEG_PATH
global SYS_PATH
global CHECK_MEDIA
global REQUIRE_LAN
global SAFE_MODE
global NOEXTRACTFAILED
@ -394,7 +368,6 @@ def configure_general():
FFMPEG_PATH = CFG['General']['ffmpeg_path']
SYS_PATH = CFG['General']['sys_path']
CHECK_MEDIA = int(CFG['General']['check_media'])
REQUIRE_LAN = None if not CFG['General']['require_lan'] else CFG['General']['require_lan'].split(',')
SAFE_MODE = int(CFG['General']['safe_mode'])
NOEXTRACTFAILED = int(CFG['General']['no_extract_failed'])
@ -430,6 +403,26 @@ def configure_wake_on_lan():
wake_up()
def configure_sabnzbd():
global SABNZBD_HOST
global SABNZBD_PORT
global SABNZBD_APIKEY
SABNZBD_HOST = CFG['Nzb']['sabnzbd_host']
SABNZBD_PORT = int(CFG['Nzb']['sabnzbd_port'] or 8080) # defaults to accommodate NzbGet
SABNZBD_APIKEY = CFG['Nzb']['sabnzbd_apikey']
def configure_nzbs():
global NZB_CLIENT_AGENT
global NZB_DEFAULT_DIRECTORY
NZB_CLIENT_AGENT = CFG['Nzb']['clientAgent'] # sabnzbd
NZB_DEFAULT_DIRECTORY = CFG['Nzb']['default_downloadDirectory']
configure_sabnzbd()
def configure_groups():
global GROUPS
@ -442,6 +435,114 @@ def configure_groups():
GROUPS = None
def configure_utorrent():
global UTORRENT_WEB_UI
global UTORRENT_USER
global UTORRENT_PASSWORD
UTORRENT_WEB_UI = CFG['Torrent']['uTorrentWEBui'] # http://localhost:8090/gui/
UTORRENT_USER = CFG['Torrent']['uTorrentUSR'] # mysecretusr
UTORRENT_PASSWORD = CFG['Torrent']['uTorrentPWD'] # mysecretpwr
def configure_transmission():
global TRANSMISSION_HOST
global TRANSMISSION_PORT
global TRANSMISSION_USER
global TRANSMISSION_PASSWORD
TRANSMISSION_HOST = CFG['Torrent']['TransmissionHost'] # localhost
TRANSMISSION_PORT = int(CFG['Torrent']['TransmissionPort'])
TRANSMISSION_USER = CFG['Torrent']['TransmissionUSR'] # mysecretusr
TRANSMISSION_PASSWORD = CFG['Torrent']['TransmissionPWD'] # mysecretpwr
def configure_deluge():
global DELUGE_HOST
global DELUGE_PORT
global DELUGE_USER
global DELUGE_PASSWORD
DELUGE_HOST = CFG['Torrent']['DelugeHost'] # localhost
DELUGE_PORT = int(CFG['Torrent']['DelugePort']) # 8084
DELUGE_USER = CFG['Torrent']['DelugeUSR'] # mysecretusr
DELUGE_PASSWORD = CFG['Torrent']['DelugePWD'] # mysecretpwr
def configure_qbittorrent():
global QBITTORRENT_HOST
global QBITTORRENT_PORT
global QBITTORRENT_USER
global QBITTORRENT_PASSWORD
QBITTORRENT_HOST = CFG['Torrent']['qBittorrenHost'] # localhost
QBITTORRENT_PORT = int(CFG['Torrent']['qBittorrentPort']) # 8080
QBITTORRENT_USER = CFG['Torrent']['qBittorrentUSR'] # mysecretusr
QBITTORRENT_PASSWORD = CFG['Torrent']['qBittorrentPWD'] # mysecretpwr
def configure_flattening():
global NOFLATTEN
NOFLATTEN = (CFG['Torrent']['noFlatten'])
if isinstance(NOFLATTEN, str):
NOFLATTEN = NOFLATTEN.split(',')
def configure_torrent_categories():
global CATEGORIES
CATEGORIES = (CFG['Torrent']['categories']) # music,music_videos,pictures,software
if isinstance(CATEGORIES, str):
CATEGORIES = CATEGORIES.split(',')
def configure_torrent_resuming():
global TORRENT_RESUME
global TORRENT_RESUME_ON_FAILURE
TORRENT_RESUME_ON_FAILURE = int(CFG['Torrent']['resumeOnFailure'])
TORRENT_RESUME = int(CFG['Torrent']['resume'])
def configure_torrent_permissions():
global TORRENT_CHMOD_DIRECTORY
TORRENT_CHMOD_DIRECTORY = int(str(CFG['Torrent']['chmodDirectory']), 8)
def configure_torrent_deltetion():
global DELETE_ORIGINAL
DELETE_ORIGINAL = int(CFG['Torrent']['deleteOriginal'])
def configure_torrent_linking():
global USE_LINK
USE_LINK = CFG['Torrent']['useLink'] # no | hard | sym
def configure_torrents():
global TORRENT_CLIENT_AGENT
global OUTPUT_DIRECTORY
global TORRENT_DEFAULT_DIRECTORY
TORRENT_CLIENT_AGENT = CFG['Torrent']['clientAgent'] # utorrent | deluge | transmission | rtorrent | vuze | qbittorrent |other
OUTPUT_DIRECTORY = CFG['Torrent']['outputDirectory'] # /abs/path/to/complete/
TORRENT_DEFAULT_DIRECTORY = CFG['Torrent']['default_downloadDirectory']
configure_torrent_linking()
configure_flattening()
configure_torrent_deltetion()
configure_torrent_categories()
configure_torrent_permissions()
configure_torrent_resuming()
configure_utorrent()
configure_transmission()
configure_deluge()
configure_qbittorrent()
def configure_remote_paths():
global REMOTE_PATHS
@ -464,16 +565,35 @@ def configure_remote_paths():
]
def configure_plex():
global PLEX_SSL
global PLEX_HOST
global PLEX_PORT
global PLEX_TOKEN
global PLEX_SECTION
PLEX_SSL = int(CFG['Plex']['plex_ssl'])
PLEX_HOST = CFG['Plex']['plex_host']
PLEX_PORT = CFG['Plex']['plex_port']
PLEX_TOKEN = CFG['Plex']['plex_token']
PLEX_SECTION = CFG['Plex']['plex_sections'] or []
if PLEX_SECTION:
if isinstance(PLEX_SECTION, list):
PLEX_SECTION = ','.join(PLEX_SECTION)  # fix in case this was imported as a list.
PLEX_SECTION = [
tuple(item.split(','))
for item in PLEX_SECTION.split('|')
]
def configure_niceness():
global NICENESS
with open(os.devnull, 'w') as devnull:
try:
subprocess.Popen(['nice'], stdout=devnull, stderr=devnull).communicate()
if len(CFG['Posix']['niceness'].split(',')) > 1:  # Allow passing of an absolute command, not just a value.
NICENESS.extend(CFG['Posix']['niceness'].split(','))
else:
NICENESS.extend(['nice', '-n{0}'.format(int(CFG['Posix']['niceness']))])
NICENESS.extend(['nice', '-n{0}'.format(int(CFG['Posix']['niceness']))])
except Exception:
pass
try:
@ -522,7 +642,6 @@ def configure_containers():
def configure_transcoder():
global MOUNTED
global GETSUBS
global TRANSCODE
global DUPLICATE
@ -530,7 +649,6 @@ def configure_transcoder():
global IGNOREEXTENSIONS
global OUTPUTFASTSTART
global GENERALOPTS
global OTHEROPTS
global OUTPUTQUALITYPERCENT
global OUTPUTVIDEOPATH
global PROCESSOUTPUT
@ -568,7 +686,6 @@ def configure_transcoder():
global ALLOWSUBS
global DEFAULTS
MOUNTED = None
GETSUBS = int(CFG['Transcoder']['getSubs'])
TRANSCODE = int(CFG['Transcoder']['transcode'])
DUPLICATE = int(CFG['Transcoder']['duplicate'])
@ -586,11 +703,6 @@ def configure_transcoder():
GENERALOPTS.append('-fflags')
if '+genpts' not in GENERALOPTS:
GENERALOPTS.append('+genpts')
OTHEROPTS = (CFG['Transcoder']['otherOptions'])
if isinstance(OTHEROPTS, str):
OTHEROPTS = OTHEROPTS.split(',')
if OTHEROPTS == ['']:
OTHEROPTS = []
try:
OUTPUTQUALITYPERCENT = int(CFG['Transcoder']['outputQualityPercent'])
except Exception:
@ -684,7 +796,7 @@ def configure_transcoder():
codec_alias = {
'libx264': ['libx264', 'h264', 'h.264', 'AVC', 'MPEG-4'],
'libmp3lame': ['libmp3lame', 'mp3'],
'libfaac': ['libfaac', 'aac', 'faac'],
'libfaac': ['libfaac', 'aac', 'faac']
}
transcode_defaults = {
'iPad': {
@ -693,7 +805,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': None, 'ACHANNELS': 2,
'ACODEC2': 'ac3', 'ACODEC2_ALLOW': ['ac3'], 'ABITRATE2': None, 'ACHANNELS2': 6,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'iPad-1080p': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -701,7 +813,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': None, 'ACHANNELS': 2,
'ACODEC2': 'ac3', 'ACODEC2_ALLOW': ['ac3'], 'ABITRATE2': None, 'ACHANNELS2': 6,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'iPad-720p': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -709,7 +821,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': None, 'ACHANNELS': 2,
'ACODEC2': 'ac3', 'ACODEC2_ALLOW': ['ac3'], 'ABITRATE2': None, 'ACHANNELS2': 6,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'Apple-TV': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -717,7 +829,7 @@ def configure_transcoder():
'ACODEC': 'ac3', 'ACODEC_ALLOW': ['ac3'], 'ABITRATE': None, 'ACHANNELS': 6,
'ACODEC2': 'aac', 'ACODEC2_ALLOW': ['libfaac'], 'ABITRATE2': None, 'ACHANNELS2': 2,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'iPod': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -725,7 +837,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': 128000, 'ACHANNELS': 2,
'ACODEC2': None, 'ACODEC2_ALLOW': [], 'ABITRATE2': None, 'ACHANNELS2': None,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'iPhone': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -733,7 +845,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': 128000, 'ACHANNELS': 2,
'ACODEC2': None, 'ACODEC2_ALLOW': [], 'ABITRATE2': None, 'ACHANNELS2': None,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'PS3': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -741,7 +853,7 @@ def configure_transcoder():
'ACODEC': 'ac3', 'ACODEC_ALLOW': ['ac3'], 'ABITRATE': None, 'ACHANNELS': 6,
'ACODEC2': 'aac', 'ACODEC2_ALLOW': ['libfaac'], 'ABITRATE2': None, 'ACHANNELS2': 2,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'xbox': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -749,7 +861,7 @@ def configure_transcoder():
'ACODEC': 'ac3', 'ACODEC_ALLOW': ['ac3'], 'ABITRATE': None, 'ACHANNELS': 6,
'ACODEC2': None, 'ACODEC2_ALLOW': [], 'ABITRATE2': None, 'ACHANNELS2': None,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'Roku-480p': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -757,7 +869,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': 128000, 'ACHANNELS': 2,
'ACODEC2': 'ac3', 'ACODEC2_ALLOW': ['ac3'], 'ABITRATE2': None, 'ACHANNELS2': 6,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'Roku-720p': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -765,7 +877,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': 128000, 'ACHANNELS': 2,
'ACODEC2': 'ac3', 'ACODEC2_ALLOW': ['ac3'], 'ABITRATE2': None, 'ACHANNELS2': 6,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'Roku-1080p': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -773,7 +885,7 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': 160000, 'ACHANNELS': 2,
'ACODEC2': 'ac3', 'ACODEC2_ALLOW': ['ac3'], 'ABITRATE2': None, 'ACHANNELS2': 6,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'mkv': {
'VEXTENSION': '.mkv', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
@ -783,21 +895,13 @@ def configure_transcoder():
'ACODEC3': 'ac3', 'ACODEC3_ALLOW': ['libfaac', 'dts', 'ac3', 'mp2', 'mp3'], 'ABITRATE3': None, 'ACHANNELS3': 8,
'SCODEC': 'mov_text'
},
'mkv-bluray': {
'VEXTENSION': '.mkv', 'VCODEC': 'libx265', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': None, 'VLEVEL': None,
'VRESOLUTION': None, 'VCODEC_ALLOW': ['libx264', 'h264', 'h.264', 'hevc', 'h265', 'libx265', 'h.265', 'AVC', 'avc', 'mpeg4', 'msmpeg4', 'MPEG-4', 'mpeg2video'],
'ACODEC': 'dts', 'ACODEC_ALLOW': ['libfaac', 'dts', 'ac3', 'mp2', 'mp3'], 'ABITRATE': None, 'ACHANNELS': 8,
'ACODEC2': None, 'ACODEC2_ALLOW': [], 'ABITRATE2': None, 'ACHANNELS2': None,
'ACODEC3': 'ac3', 'ACODEC3_ALLOW': ['libfaac', 'dts', 'ac3', 'mp2', 'mp3'], 'ABITRATE3': None, 'ACHANNELS3': 8,
'SCODEC': 'mov_text',
},
'mp4-scene-release': {
'VEXTENSION': '.mp4', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': None, 'VCRF': 19, 'VLEVEL': '3.1',
'VRESOLUTION': None, 'VCODEC_ALLOW': ['libx264', 'h264', 'h.264', 'AVC', 'avc', 'mpeg4', 'msmpeg4', 'MPEG-4', 'mpeg2video'],
'ACODEC': 'dts', 'ACODEC_ALLOW': ['libfaac', 'dts', 'ac3', 'mp2', 'mp3'], 'ABITRATE': None, 'ACHANNELS': 8,
'ACODEC2': None, 'ACODEC2_ALLOW': [], 'ABITRATE2': None, 'ACHANNELS2': None,
'ACODEC3': 'ac3', 'ACODEC3_ALLOW': ['libfaac', 'dts', 'ac3', 'mp2', 'mp3'], 'ABITRATE3': None, 'ACHANNELS3': 8,
'SCODEC': 'mov_text',
'SCODEC': 'mov_text'
},
'MKV-SD': {
'VEXTENSION': '.mkv', 'VCODEC': 'libx264', 'VPRESET': None, 'VFRAMERATE': None, 'VBITRATE': '1200k', 'VCRF': None, 'VLEVEL': None,
@ -805,8 +909,8 @@ def configure_transcoder():
'ACODEC': 'aac', 'ACODEC_ALLOW': ['libfaac'], 'ABITRATE': 128000, 'ACHANNELS': 2,
'ACODEC2': 'ac3', 'ACODEC2_ALLOW': ['ac3'], 'ABITRATE2': None, 'ACHANNELS2': 6,
'ACODEC3': None, 'ACODEC3_ALLOW': [], 'ABITRATE3': None, 'ACHANNELS3': None,
'SCODEC': 'mov_text',
},
'SCODEC': 'mov_text'
}
}
if DEFAULTS and DEFAULTS in transcode_defaults:
VEXTENSION = transcode_defaults[DEFAULTS]['VEXTENSION']
@ -869,6 +973,13 @@ def configure_passwords_file():
PASSWORDS_FILE = CFG['passwords']['PassWordFile']
def configure_torrent_class():
global TORRENT_CLASS
# create torrent class
TORRENT_CLASS = create_torrent_class(TORRENT_CLIENT_AGENT)
def configure_sections(section):
global SECTIONS
global CATEGORIES
@ -909,7 +1020,7 @@ def configure_utility_locations():
else:
if SYS_PATH:
os.environ['PATH'] += ':' + SYS_PATH
os.environ['PATH'] += ':'+SYS_PATH
try:
SEVENZIP = subprocess.Popen(['which', '7z'], stdout=subprocess.PIPE).communicate()[0].strip().decode()
except Exception:
@ -991,22 +1102,13 @@ def check_python():
# Log warning if within grace period
days_left = eol.lifetime()
if days_left > 0:
logger.info(
'Python v{major}.{minor} will reach end of life in {x} days.'.format(
major=sys.version_info[0],
minor=sys.version_info[1],
x=days_left,
),
)
else:
logger.info(
'Python v{major}.{minor} reached end of life {x} days ago.'.format(
major=sys.version_info[0],
minor=sys.version_info[1],
x=-days_left,
),
logger.info(
'Python v{major}.{minor} will reach end of life in {x} days.'.format(
major=sys.version_info[0],
minor=sys.version_info[1],
x=days_left,
)
)
if days_left <= grace_period:
logger.warning('Please upgrade to a more recent Python version.')
@ -1036,10 +1138,10 @@ def initialize(section=None):
configure_general()
configure_updates()
configure_wake_on_lan()
configure_nzbs(CFG)
configure_torrents(CFG)
configure_nzbs()
configure_torrents()
configure_remote_paths()
configure_plex(CFG)
configure_plex()
configure_niceness()
configure_containers()
configure_transcoder()
@ -1047,7 +1149,6 @@ def initialize(section=None):
configure_utility_locations()
configure_sections(section)
configure_torrent_class()
configure_groups()
__INITIALIZED__ = True
View file
@ -1,83 +0,0 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
import core
from core import logger
from core.auto_process.common import ProcessResult
from core.utils import (
convert_to_ascii,
remote_dir,
server_responding,
)
requests.packages.urllib3.disable_warnings()
def process(section, dir_name, input_name=None, status=0, client_agent='manual', input_category=None):
status = int(status)
cfg = dict(core.CFG[section][input_category])
host = cfg['host']
port = cfg['port']
apikey = cfg['apikey']
ssl = int(cfg.get('ssl', 0))
web_root = cfg.get('web_root', '')
protocol = 'https://' if ssl else 'http://'
remote_path = int(cfg.get('remote_path', 0))
url = '{0}{1}:{2}{3}/api'.format(protocol, host, port, web_root)
if not server_responding(url):
logger.error('Server did not respond. Exiting', section)
return ProcessResult(
message='{0}: Failed to post-process - {0} did not respond.'.format(section),
status_code=1,
)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
params = {
'apikey': apikey,
'cmd': 'forceProcess',
'dir': remote_dir(dir_name) if remote_path else dir_name,
}
logger.debug('Opening URL: {0} with params: {1}'.format(url, params), section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 300))
except requests.ConnectionError:
logger.error('Unable to open URL')
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {1}'.format(section, section),
status_code=1,
)
logger.postprocess('{0}'.format(r.text), section)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
status_code=1,
)
elif r.text == 'OK':
logger.postprocess('SUCCESS: ForceProcess for {0} has been started in LazyLibrarian'.format(dir_name), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
else:
logger.error('FAILED: ForceProcess of {0} has Failed in LazyLibrarian'.format(dir_name), section)
return ProcessResult(
message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(section),
status_code=1,
)
View file
@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import requests
@ -67,7 +60,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
logger.error('Unable to open URL', section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
status_code=1
)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
@ -76,7 +69,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
status_code=1,
)
result = r.text
result = r.content
if not type(result) == list:
result = result.split('\n')
for line in result:
View file
@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
from core import logger
@ -24,7 +17,7 @@ class ProcessResult(object):
def __str__(self):
return 'Processing {0}: {1}'.format(
'succeeded' if bool(self) else 'failed',
self.message,
self.message
)
def __repr__(self):
@ -45,7 +38,7 @@ def command_complete(url, params, headers, section):
return None
else:
try:
return r.json()['status']
return r.json()['state']
except (ValueError, KeyError):
# ValueError catches simplejson's JSONDecodeError and json's ValueError
logger.error('{0} did not return expected json data.'.format(section), section)
View file
@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import shutil
@ -53,7 +46,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
'api_key': apikey,
'mode': 'UPDATEREQUESTEDSTATUS',
'db_id': gamez_id,
'status': download_status,
'status': download_status
}
logger.debug('Opening URL: {0}'.format(url), section)
View file
@ -1,155 +0,0 @@
import time
from core import logger
from core.auto_process.common import ProcessResult
from core.auto_process.managers.sickbeard import SickBeard
import requests
class PyMedusa(SickBeard):
"""PyMedusa class."""
def __init__(self, sb_init):
super(PyMedusa, self).__init__(sb_init)
def _create_url(self):
return '{0}{1}:{2}{3}/home/postprocess/processEpisode'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)
class PyMedusaApiV1(SickBeard):
"""PyMedusa apiv1 class."""
def __init__(self, sb_init):
super(PyMedusaApiV1, self).__init__(sb_init)
def _create_url(self):
return '{0}{1}:{2}{3}/api/{4}/'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root, self.sb_init.apikey)
def api_call(self):
self._process_fork_prarams()
url = self._create_url()
logger.debug('Opening URL: {0} with params: {1}'.format(url, self.sb_init.fork_params), self.sb_init.section)
try:
response = self.session.get(url, auth=(self.sb_init.username, self.sb_init.password), params=self.sb_init.fork_params, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), self.sb_init.section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(self.sb_init.section),
status_code=1,
)
if response.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(response.status_code), self.sb_init.section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(self.sb_init.section, response.status_code),
status_code=1,
)
if response.json()['result'] == 'success':
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(self.sb_init.section, self.input_name),
status_code=0,
)
return ProcessResult(
message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(self.sb_init.section),
status_code=1, # We did not receive Success confirmation.
)
class PyMedusaApiV2(SickBeard):
"""PyMedusa apiv2 class."""
def __init__(self, sb_init):
super(PyMedusaApiV2, self).__init__(sb_init)
# Check for an apikey, as this is required when using fork = medusa-apiv2
if not sb_init.apikey:
raise Exception('For the section SickBeard `fork = medusa-apiv2` you also need to configure an `apikey`')
def _create_url(self):
return '{0}{1}:{2}{3}/api/v2/postprocess'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)
def _get_identifier_status(self, url):
# Poll Medusa repeatedly for the status of the queue item.
try:
response = self.session.get(url, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to get postprocess identifier status', self.sb_init.section)
return False
try:
jdata = response.json()
except ValueError:
return False
return jdata
def api_call(self):
self._process_fork_prarams()
url = self._create_url()
logger.debug('Opening URL: {0}'.format(url), self.sb_init.section)
payload = self.sb_init.fork_params
payload['resource'] = self.sb_init.fork_params['nzbName']
del payload['nzbName']
# Update the session with the x-api-key
self.session.headers.update({
'x-api-key': self.sb_init.apikey,
'Content-type': 'application/json'
})
# Send postprocess request
try:
response = self.session.post(url, json=payload, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to send postprocess request', self.sb_init.section)
return ProcessResult(
message='{0}: Unable to send postprocess request to PyMedusa'.format(self.sb_init.section),
status_code=1,
)
# Get UUID
if response:
try:
jdata = response.json()
except ValueError:
logger.debug('No data returned from provider')
return False
if not jdata.get('status') or not jdata['status'] == 'success':
return False
queueitem_identifier = jdata['queueItem']['identifier']
wait_for = int(self.sb_init.config.get('wait_for', 2))
n = 0
response = {}
url = '{0}/{1}'.format(url, queueitem_identifier)
while n < 12:  # wait up to wait_for minutes (12 polls of 5 * wait_for seconds) for the command to complete
time.sleep(5 * wait_for)
response = self._get_identifier_status(url)
if response and response.get('success'):
break
if 'error' in response:
break
n += 1
# Log Medusa's post-processing output here.
if response.get('output'):
for line in response['output']:
logger.postprocess('{0}'.format(line), self.sb_init.section)
# For now this will most likely always be True. But in the future we could return an exit state
# for when the PP in medusa didn't yield an expected result.
if response.get('success'):
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(self.sb_init.section, self.input_name),
status_code=0,
)
return ProcessResult(
message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(self.sb_init.section),
status_code=1, # We did not receive Success confirmation.
)
View file
@ -1,500 +0,0 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import copy
import core
from core import logger
from core.auto_process.common import (
ProcessResult,
)
from core.utils import remote_dir
from oauthlib.oauth2 import LegacyApplicationClient
import requests
from requests_oauthlib import OAuth2Session
import six
from six import iteritems
class InitSickBeard(object):
"""Sickbeard init class.
Used to determine which SickBeard fork object to initialize.
"""
def __init__(self, cfg, section, input_category):
# As a bonus let's also put the config on self.
self.config = cfg
self.section = section
self.input_category = input_category
self.host = cfg['host']
self.port = cfg['port']
self.ssl = int(cfg.get('ssl', 0))
self.web_root = cfg.get('web_root', '')
self.protocol = 'https://' if self.ssl else 'http://'
self.username = cfg.get('username', '')
self.password = cfg.get('password', '')
self.apikey = cfg.get('apikey', '')
self.api_version = int(cfg.get('api_version', 2))
self.sso_username = cfg.get('sso_username', '')
self.sso_password = cfg.get('sso_password', '')
self.fork = ''
self.fork_params = None
self.fork_obj = None
replace = {
'medusa': 'Medusa',
'medusa-api': 'Medusa-api',
'sickbeard-api': 'SickBeard-api',
'sickgear': 'SickGear',
'sickchill': 'SickChill',
'stheno': 'Stheno',
}
_val = cfg.get('fork', 'auto')
f1 = replace.get(_val, _val)
try:
self.fork = f1, core.FORKS[f1]
except KeyError:
self.fork = 'auto'
self.protocol = 'https://' if self.ssl else 'http://'
def auto_fork(self):
# auto-detect correct section
# config settings
if core.FORK_SET: # keep using determined fork for multiple (manual) post-processing
logger.info('{section}:{category} fork already set to {fork}'.format
(section=self.section, category=self.input_category, fork=core.FORK_SET[0]))
return core.FORK_SET[0], core.FORK_SET[1]
cfg = dict(core.CFG[self.section][self.input_category])
replace = {
'medusa': 'Medusa',
'medusa-api': 'Medusa-api',
'medusa-apiv1': 'Medusa-api',
'medusa-apiv2': 'Medusa-apiv2',
'sickbeard-api': 'SickBeard-api',
'sickgear': 'SickGear',
'sickchill': 'SickChill',
'stheno': 'Stheno',
}
_val = cfg.get('fork', 'auto')
f1 = replace.get(_val.lower(), _val)
try:
self.fork = f1, core.FORKS[f1]
except KeyError:
self.fork = 'auto'
protocol = 'https://' if self.ssl else 'http://'
if self.section == 'NzbDrone':
logger.info('Attempting to verify {category} fork'.format
(category=self.input_category))
url = '{protocol}{host}:{port}{root}/api/rootfolder'.format(
protocol=protocol, host=self.host, port=self.port, root=self.web_root,
)
headers = {'X-Api-Key': self.apikey}
try:
r = requests.get(url, headers=headers, stream=True, verify=False)
except requests.ConnectionError:
logger.warning('Could not connect to {0}:{1} to verify fork!'.format(self.section, self.input_category))
if not r.ok:
logger.warning('Connection to {section}:{category} failed! '
'Check your configuration'.format
(section=self.section, category=self.input_category))
self.fork = ['default', {}]
elif self.section == 'SiCKRAGE':
logger.info('Attempting to verify {category} fork'.format
(category=self.input_category))
if self.api_version >= 2:
url = '{protocol}{host}:{port}{root}/api/v{api_version}/ping'.format(
protocol=protocol, host=self.host, port=self.port, root=self.web_root, api_version=self.api_version
)
api_params = {}
else:
url = '{protocol}{host}:{port}{root}/api/v{api_version}/{apikey}/'.format(
protocol=protocol, host=self.host, port=self.port, root=self.web_root, api_version=self.api_version, apikey=self.apikey,
)
api_params = {'cmd': 'postprocess', 'help': '1'}
try:
if self.api_version >= 2 and self.sso_username and self.sso_password:
oauth = OAuth2Session(client=LegacyApplicationClient(client_id=core.SICKRAGE_OAUTH_CLIENT_ID))
oauth_token = oauth.fetch_token(client_id=core.SICKRAGE_OAUTH_CLIENT_ID,
token_url=core.SICKRAGE_OAUTH_TOKEN_URL,
username=self.sso_username,
password=self.sso_password)
r = requests.get(url, headers={'Authorization': 'Bearer ' + oauth_token['access_token']}, stream=True, verify=False)
else:
r = requests.get(url, params=api_params, stream=True, verify=False)
if not r.ok:
logger.warning('Connection to {section}:{category} failed! '
'Check your configuration'.format(
section=self.section, category=self.input_category
))
except requests.ConnectionError:
logger.warning('Could not connect to {0}:{1} to verify API version!'.format(self.section, self.input_category))
params = {
'path': None,
'failed': None,
'process_method': None,
'force_replace': None,
'return_data': None,
'type': None,
'delete': None,
'force_next': None,
'is_priority': None
}
self.fork = ['default', params]
elif self.fork == 'auto':
self.detect_fork()
logger.info('{section}:{category} fork set to {fork}'.format
(section=self.section, category=self.input_category, fork=self.fork[0]))
core.FORK_SET = self.fork
self.fork, self.fork_params = self.fork[0], self.fork[1]
# This will create the fork object, and attach to self.fork_obj.
self._init_fork()
return self.fork, self.fork_params
@staticmethod
def _api_check(r, params, rem_params):
try:
json_data = r.json()
except ValueError:
logger.error('Failed to get JSON data from response')
logger.debug('Response received')
raise
try:
json_data = json_data['data']
except KeyError:
logger.error('Failed to get data from JSON')
logger.debug('Response received: {}'.format(json_data))
raise
else:
if six.PY3:
str_type = (str)
else:
str_type = (str, unicode)
if isinstance(json_data, str_type):
return rem_params, False
json_data = json_data.get('data', json_data)
try:
optional_parameters = json_data['optionalParameters'].keys()
# Find excess parameters
excess_parameters = set(params).difference(optional_parameters)
excess_parameters.remove('cmd') # Don't remove cmd from api params
logger.debug('Removing excess parameters: {}'.format(sorted(excess_parameters)))
rem_params.extend(excess_parameters)
return rem_params, True
except Exception:
logger.error('Failed to identify optionalParameters')
return rem_params, False
def detect_fork(self):
"""Try to detect a specific fork."""
detected = False
params = core.ALL_FORKS
rem_params = []
logger.info('Attempting to auto-detect {category} fork'.format(category=self.input_category))
# define the order to test. Default must be first since the default fork doesn't reject parameters.
# then in order of most unique parameters.
if self.apikey:
url = '{protocol}{host}:{port}{root}/api/{apikey}/'.format(
protocol=self.protocol, host=self.host, port=self.port, root=self.web_root, apikey=self.apikey,
)
api_params = {'cmd': 'sg.postprocess', 'help': '1'}
else:
url = '{protocol}{host}:{port}{root}/home/postprocess/'.format(
protocol=self.protocol, host=self.host, port=self.port, root=self.web_root,
)
api_params = {}
# attempting to auto-detect fork
try:
s = requests.Session()
if not self.apikey and self.username and self.password:
login = '{protocol}{host}:{port}{root}/login'.format(
protocol=self.protocol, host=self.host, port=self.port, root=self.web_root)
login_params = {'username': self.username, 'password': self.password}
r = s.get(login, verify=False, timeout=(30, 60))
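# Tornado-style XSRF protection (assumed): if the login page set an '_xsrf'
# cookie, it must be echoed back in the login POST or the request is rejected.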
if r.status_code in [401, 403] and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
s.post(login, data=login_params, stream=True, verify=False)
r = s.get(url, auth=(self.username, self.password), params=api_params, verify=False)
except requests.ConnectionError:
logger.info('Could not connect to {section}:{category} to perform auto-fork detection!'.format
(section=self.section, category=self.input_category))
r = []
if r and r.ok:
if self.apikey:
rem_params, found = self._api_check(r, params, rem_params)
if found:
params['cmd'] = 'sg.postprocess'
else: # try different api set for non-SickGear forks.
api_params = {'cmd': 'help', 'subject': 'postprocess'}
try:
if not self.apikey and self.username and self.password:
r = s.get(url, auth=(self.username, self.password), params=api_params, verify=False)
else:
r = s.get(url, params=api_params, verify=False)
except requests.ConnectionError:
logger.info('Could not connect to {section}:{category} to perform auto-fork detection!'.format
(section=self.section, category=self.input_category))
rem_params, found = self._api_check(r, params, rem_params)
params['cmd'] = 'postprocess'
else:
# Find excess parameters
rem_params.extend(
param
for param in params
if 'name="{param}"'.format(param=param) not in r.text
)
# Remove excess params
for param in rem_params:
params.pop(param)
for fork in sorted(iteritems(core.FORKS), reverse=False):
if params == fork[1]:
detected = True
break
if detected:
self.fork = fork
logger.info('{section}:{category} fork auto-detection successful ...'.format
(section=self.section, category=self.input_category))
elif rem_params:
logger.info('{section}:{category} fork auto-detection found custom params {params}'.format
(section=self.section, category=self.input_category, params=params))
self.fork = ['custom', params]
else:
logger.info('{section}:{category} fork auto-detection failed'.format
(section=self.section, category=self.input_category))
self.fork = list(core.FORKS.items())[list(core.FORKS.keys()).index(core.FORK_DEFAULT)]
def _init_fork(self):
# These need to be imported here, to prevent a circular import.
from .pymedusa import PyMedusa, PyMedusaApiV1, PyMedusaApiV2
mapped_forks = {
'Medusa': PyMedusa,
'Medusa-api': PyMedusaApiV1,
'Medusa-apiv2': PyMedusaApiV2
}
logger.debug('Create object for fork {fork}'.format(fork=self.fork))
if self.fork and mapped_forks.get(self.fork):
# Create the fork object and pass self (SickBeardInit) to it for all the data, like Config.
self.fork_obj = mapped_forks[self.fork](self)
else:
logger.debug('{section}:{category} Could not create a fork object for {fork}. Probably class not added yet.'.format(
section=self.section, category=self.input_category, fork=self.fork)
)
class SickBeard(object):
"""Sickbeard base class."""
def __init__(self, sb_init):
"""SB constructor."""
self.sb_init = sb_init
self.session = requests.Session()
self.failed = None
self.status = None
self.input_name = None
self.dir_name = None
self.delete_failed = int(self.sb_init.config.get('delete_failed', 0))
self.nzb_extraction_by = self.sb_init.config.get('nzbExtractionBy', 'Downloader')
self.process_method = self.sb_init.config.get('process_method')
self.remote_path = int(self.sb_init.config.get('remote_path', 0))
self.wait_for = int(self.sb_init.config.get('wait_for', 2))
self.force = int(self.sb_init.config.get('force', 0))
self.delete_on = int(self.sb_init.config.get('delete_on', 0))
self.ignore_subs = int(self.sb_init.config.get('ignore_subs', 0))
self.is_priority = int(self.sb_init.config.get('is_priority', 0))
# get importmode, default to 'Move' for consistency with legacy
self.import_mode = self.sb_init.config.get('importMode', 'Move')
# Keep track of result state
self.success = False
def initialize(self, dir_name, input_name=None, failed=False, client_agent='manual'):
"""We need to call this explicitely because we need some variables.
We can't pass these directly through the constructor.
"""
self.dir_name = dir_name
self.input_name = input_name
self.failed = failed
self.status = int(self.failed)
if self.status > 0 and core.NOEXTRACTFAILED:
self.extract = 0
else:
self.extract = int(self.sb_init.config.get('extract', 0))
if client_agent == core.TORRENT_CLIENT_AGENT and core.USE_LINK == 'move-sym':
self.process_method = 'symlink'
def _create_url(self):
if self.sb_init.apikey:
return '{0}{1}:{2}{3}/api/{4}/'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root, self.sb_init.apikey)
return '{0}{1}:{2}{3}/home/postprocess/processEpisode'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)
def _process_fork_prarams(self):
# configure SB params to pass
fork_params = self.sb_init.fork_params
fork_params['quiet'] = 1
fork_params['proc_type'] = 'manual'
if self.input_name is not None:
fork_params['nzbName'] = self.input_name
for param in copy.copy(fork_params):
if param == 'failed':
if self.failed > 1:
self.failed = 1
fork_params[param] = self.failed
if 'proc_type' in fork_params:
del fork_params['proc_type']
if 'type' in fork_params:
del fork_params['type']
if param == 'return_data':
fork_params[param] = 0
if 'quiet' in fork_params:
del fork_params['quiet']
if param == 'type':
if 'type' in fork_params: # only set if we haven't already deleted for 'failed' above.
fork_params[param] = 'manual'
if 'proc_type' in fork_params:
del fork_params['proc_type']
if param in ['dir_name', 'dir', 'proc_dir', 'process_directory', 'path']:
fork_params[param] = self.dir_name
if self.remote_path:
fork_params[param] = remote_dir(self.dir_name)
# SickChill allows multiple path types. Only return 'path'.
if param == 'proc_dir' and 'path' in fork_params:
del fork_params['proc_dir']
if param == 'process_method':
if self.process_method:
fork_params[param] = self.process_method
else:
del fork_params[param]
if param in ['force', 'force_replace']:
if self.force:
fork_params[param] = self.force
else:
del fork_params[param]
if param in ['delete_on', 'delete']:
if self.delete_on:
fork_params[param] = self.delete_on
else:
del fork_params[param]
if param == 'ignore_subs':
if self.ignore_subs:
fork_params[param] = self.ignore_subs
else:
del fork_params[param]
if param == 'is_priority':
if self.is_priority:
fork_params[param] = self.is_priority
else:
del fork_params[param]
if param == 'force_next':
fork_params[param] = 1
# delete any unused params so we don't pass them to SB by mistake
[fork_params.pop(k) for k, v in list(fork_params.items()) if v is None]
def api_call(self):
"""Perform a base sickbeard api call."""
self._process_fork_prarams()
url = self._create_url()
logger.debug('Opening URL: {0} with params: {1}'.format(url, self.sb_init.fork_params), self.sb_init.section)
try:
if not self.sb_init.apikey and self.sb_init.username and self.sb_init.password:
# If not using the api, we need to login using user/pass first.
login = '{0}{1}:{2}{3}/login'.format(self.sb_init.protocol, self.sb_init.host, self.sb_init.port, self.sb_init.web_root)
login_params = {'username': self.sb_init.username, 'password': self.sb_init.password}
r = self.session.get(login, verify=False, timeout=(30, 60))
if r.status_code in [401, 403] and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
self.session.post(login, data=login_params, stream=True, verify=False, timeout=(30, 60))
response = self.session.get(url, auth=(self.sb_init.username, self.sb_init.password), params=self.sb_init.fork_params, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), self.sb_init.section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(self.sb_init.section),
status_code=1,
)
if response.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(response.status_code), self.sb_init.section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(self.sb_init.section, response.status_code),
status_code=1,
)
return self.process_response(response)
def process_response(self, response):
"""Iterate over the lines returned, and log.
:param response: Streamed Requests response object.
This method will need to be overridden in the forks for alternative response handling.
"""
for line in response.iter_lines():
if line:
line = line.decode('utf-8')
logger.postprocess('{0}'.format(line), self.sb_init.section)
# if 'Moving file from' in line:
# input_name = os.path.split(line)[1]
# if 'added to the queue' in line:
# queued = True
# For the refactoring I'm only considering vanilla SickBeard for the base class.
if 'Processing succeeded' in line or 'Successfully processed' in line:
self.success = True
if self.success:
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(self.sb_init.section, self.input_name),
status_code=0,
)
return ProcessResult(
message='{0}: Failed to post-process - Returned log from {0} was not as expected.'.format(self.sb_init.section),
status_code=1, # We did not receive Success confirmation.
)
View file
@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import json
import os
import time
@ -15,23 +8,9 @@ import requests
import core
from core import logger, transcoder
from core.auto_process.common import (
ProcessResult,
command_complete,
completed_download_handling,
)
from core.plugins.downloaders.nzb.utils import report_nzb
from core.plugins.subtitles import import_subs, rename_subs
from core.auto_process.common import command_complete, completed_download_handling, ProcessResult
from core.scene_exceptions import process_all_exceptions
from core.utils import (
convert_to_ascii,
find_download,
find_imdbid,
list_media_files,
remote_dir,
remove_dir,
server_responding,
)
from core.utils import convert_to_ascii, find_download, find_imdbid, import_subs, list_media_files, remote_dir, remove_dir, report_nzb, server_responding
requests.packages.urllib3.disable_warnings()
@ -59,22 +38,19 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
remote_path = int(cfg.get('remote_path', 0))
protocol = 'https://' if ssl else 'http://'
omdbapikey = cfg.get('omdbapikey', '')
no_status_check = int(cfg.get('no_status_check', 0))
status = int(status)
if status > 0 and core.NOEXTRACTFAILED:
extract = 0
else:
extract = int(cfg.get('extract', 0))
imdbid, dir_name = find_imdbid(dir_name, input_name, omdbapikey)
imdbid = find_imdbid(dir_name, input_name, omdbapikey)
if section == 'CouchPotato':
base_url = '{0}{1}:{2}{3}/api/{4}/'.format(protocol, host, port, web_root, apikey)
if section == 'Radarr':
base_url = '{0}{1}:{2}{3}/api/v3/command'.format(protocol, host, port, web_root)
url2 = '{0}{1}:{2}{3}/api/v3/config/downloadClient'.format(protocol, host, port, web_root)
headers = {'X-Api-Key': apikey, 'Content-Type': 'application/json'}
if section == 'Watcher3':
base_url = '{0}{1}:{2}{3}/postprocessing'.format(protocol, host, port, web_root)
base_url = '{0}{1}:{2}{3}/api/command'.format(protocol, host, port, web_root)
url2 = '{0}{1}:{2}{3}/api/config/downloadClient'.format(protocol, host, port, web_root)
headers = {'X-Api-Key': apikey}
if not apikey:
logger.info('No CouchPotato or Radarr apikey entered. Performing transcoder functions only')
release = None
@ -124,32 +100,24 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
input_name, dir_name = convert_to_ascii(input_name, dir_name)
good_files = 0
valid_files = 0
num_files = 0
# Check video files for corruption
for video in list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
num_files += 1
if transcoder.is_video_good(video, status):
import_subs(video)
good_files += 1
if not core.REQUIRE_LAN or transcoder.is_video_good(video, status, require_lan=core.REQUIRE_LAN):
valid_files += 1
import_subs(video)
rename_subs(dir_name)
if num_files and valid_files == num_files:
if num_files and good_files == num_files:
if status:
logger.info('Status shown as failed from Downloader, but {0} valid video files found. Setting as success.'.format(good_files), section)
status = 0
elif num_files and valid_files < num_files:
elif num_files and good_files < num_files:
logger.info('Status shown as success from Downloader, but corrupt video files found. Setting as failed.', section)
status = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if good_files == num_files:
logger.debug('Video marked as failed due to missing required language: {0}'.format(core.REQUIRE_LAN), section)
else:
logger.debug('Video marked as failed due to missing playable audio or video', section)
if good_files < num_files and failure_link: # only report corrupt files
if failure_link:
failure_link += '&corrupt=true'
status = 1
elif client_agent == 'manual':
logger.warning('No media files found in directory {0} to manually process.'.format(dir_name), section)
return ProcessResult(
@ -189,7 +157,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
os.rename(video, video2)
if not apikey: # If only using Transcoder functions, exit here.
logger.info('No CouchPotato or Radarr or Watcher3 apikey entered. Processing completed.')
logger.info('No CouchPotato or Radarr apikey entered. Processing completed.')
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
@ -221,20 +189,9 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(base_url, payload), section)
logger.postprocess('Starting DownloadedMoviesScan scan for {0}'.format(input_name), section)
if section == 'Watcher3':
if input_name and os.path.isfile(os.path.join(dir_name, input_name)):
params['media_folder'] = os.path.join(params['media_folder'], input_name)
payload = {'apikey': apikey, 'path': params['media_folder'], 'guid': download_id, 'mode': 'complete'}
if not download_id:
payload.pop('guid')
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(base_url, payload), section)
logger.postprocess('Starting postprocessing scan for {0}'.format(input_name), section)
try:
if section == 'CouchPotato':
r = requests.get(url, params=params, verify=False, timeout=(30, 1800))
elif section == 'Watcher3':
r = requests.post(base_url, data=payload, verify=False, timeout=(30, 1800))
else:
r = requests.post(base_url, data=json.dumps(payload), headers=headers, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
@ -259,27 +216,14 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
status_code=0,
)
elif section == 'Radarr':
logger.postprocess('Radarr response: {0}'.format(result['state']))
try:
if isinstance(result, list):
scan_id = int(result[0]['id'])
else:
scan_id = int(result['id'])
res = json.loads(r.content)
scan_id = int(res['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
except Exception as e:
logger.warning('No scan id was returned due to: {0}'.format(e), section)
scan_id = None
elif section == 'Watcher3' and result['status'] == 'finished':
logger.postprocess('Watcher3 updated status to {0}'.format(result['tasks']['update_movie_status']))
if result['tasks']['update_movie_status'] == 'Finished':
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=status,
)
else:
return ProcessResult(
message='{0}: Failed to post-process - changed status to {1}'.format(section, result['tasks']['update_movie_status']),
status_code=1,
)
else:
logger.error('FAILED: {0} scan was unable to finish for folder {1}. exiting!'.format(method, dir_name),
section)
@ -298,21 +242,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
return ProcessResult(
message='{0}: Sending failed download back to {0}'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
) # Return failed flag, but log the event as successful.
elif section == 'Watcher3':
logger.postprocess('Sending failed download to {0} for CDH processing'.format(section), section)
path = remote_dir(dir_name) if remote_path else dir_name
if input_name and os.path.isfile(os.path.join(dir_name, input_name)):
path = os.path.join(path, input_name)
payload = {'apikey': apikey, 'path': path, 'guid': download_id, 'mode': 'failed'}
r = requests.post(base_url, data=payload, verify=False, timeout=(30, 1800))
result = r.json()
logger.postprocess('Watcher3 response: {0}'.format(result))
if result['status'] == 'finished':
return ProcessResult(
message='{0}: Sending failed download back to {0}'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
) # Return failed flag, but log the event as successful.
) # Return failed flag, but log the event as successful.
if delete_failed and os.path.isdir(dir_name) and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
@ -397,12 +327,6 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
if not release:
download_id = None # we don't want to filter new releases based on this.
if no_status_check:
return ProcessResult(
status_code=0,
message='{0}: Successfully processed but no change in status confirmed'.format(section),
)
# we will now check to see if CPS has finished renaming before returning to TorrentToMedia and unpausing.
timeout = time.time() + 60 * wait_for
while time.time() < timeout: # only wait 2 (default) minutes, then return.
@ -415,9 +339,9 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
if release:
try:
release_id = list(release.keys())[0]
title = release[release_id]['title']
release_status_new = release[release_id]['status']
if release_status_old is None: # we didn't have a release before, but now we do.
title = release[release_id]['title']
logger.postprocess('SUCCESS: Movie {0} has now been added to CouchPotato with release status of [{1}]'.format(
title, str(release_status_new).upper()), section)
return ProcessResult(
@ -426,8 +350,8 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
)
if release_status_new != release_status_old:
logger.postprocess('SUCCESS: Release {0} has now been marked with a status of [{1}]'.format(
release_id, str(release_status_new).upper()), section)
logger.postprocess('SUCCESS: Release for {0} has now been marked with a status of [{1}]'.format(
title, str(release_status_new).upper()), section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
@ -435,22 +359,22 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
except Exception:
pass
elif scan_id:
url = '{0}/{1}'.format(base_url, scan_id)
command_status = command_complete(url, params, headers, section)
if command_status:
logger.debug('The Scan command return status: {0}'.format(command_status), section)
if command_status in ['completed']:
logger.debug('The Scan command has completed successfully. Renaming was successful.', section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif command_status in ['failed']:
logger.debug('The Scan command has failed. Renaming was not successful.', section)
# return ProcessResult(
# message='{0}: Failed to post-process {1}'.format(section, input_name),
# status_code=1,
# )
url = '{0}/{1}'.format(base_url, scan_id)
command_status = command_complete(url, params, headers, section)
if command_status:
logger.debug('The Scan command return status: {0}'.format(command_status), section)
if command_status in ['completed']:
logger.debug('The Scan command has completed successfully. Renaming was successful.', section)
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
status_code=0,
)
elif command_status in ['failed']:
logger.debug('The Scan command has failed. Renaming was not successful.', section)
# return ProcessResult(
# message='{0}: Failed to post-process {1}'.format(section, input_name),
# status_code=1,
# )
if not os.path.isdir(dir_name):
logger.postprocess('SUCCESS: Input Directory [{0}] has been processed and removed'.format(
@ -482,7 +406,6 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
'{0} does not appear to have changed status after {1} minutes, Please check your logs.'.format(input_name, wait_for),
section,
)
return ProcessResult(
status_code=1,
message='{0}: Failed to post-process - No change in status'.format(section),
@ -566,27 +489,21 @@ def get_release(base_url, imdb_id=None, download_id=None, release_id=None):
# Narrow results by removing old releases by comparing their last_edit field
if len(results) > 1:
rem_id = set()
for id1, x1 in results.items():
for x2 in results.values():
for id2, x2 in results.items():
try:
if x2['last_edit'] > x1['last_edit']:
rem_id.add(id1)
results.pop(id1)
except Exception:
continue
for id in rem_id:
results.pop(id)
# Search downloads on clients for a match to try and narrow our results down to 1
if len(results) > 1:
rem_id = set()
for cur_id, x in results.items():
try:
if not find_download(str(x['download_info']['downloader']).lower(), x['download_info']['id']):
rem_id.add(cur_id)
results.pop(cur_id)
except Exception:
continue
for id in rem_id:
results.pop(id)
return results
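One pattern in the narrowing passes above deserves a note: keys are collected into rem_id and popped only after the loop, because removing entries from a dict while iterating it raises a RuntimeError on Python 3 (the removed-side code popped in-place). A minimal sketch of the safe pattern, with illustrative data:
def keep_latest(results):
    # Collect stale ids first; mutate the dict only after iteration.
    stale = set()
    for id1, x1 in results.items():
        for x2 in results.values():
            if x2.get('last_edit', 0) > x1.get('last_edit', 0):
                stale.add(id1)
    for key in stale:
        results.pop(key)
    return results

print(keep_latest({'a': {'last_edit': 1}, 'b': {'last_edit': 2}}))  # keeps only 'b'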

View file

@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import json
import os
import time
@ -80,7 +73,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
params = {
'apikey': apikey,
'cmd': 'forceProcess',
'dir': remote_dir(dir_name) if remote_path else dir_name,
'dir': remote_dir(dir_name) if remote_path else dir_name
}
res = force_process(params, url, apikey, input_name, dir_name, section, wait_for)
@ -90,7 +83,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
params = {
'apikey': apikey,
'cmd': 'forceProcess',
'dir': os.path.split(remote_dir(dir_name))[0] if remote_path else os.path.split(dir_name)[0],
'dir': os.path.split(remote_dir(dir_name))[0] if remote_path else os.path.split(dir_name)[0]
}
res = force_process(params, url, apikey, input_name, dir_name, section, wait_for)
@ -125,7 +118,7 @@ def process(section, dir_name, input_name=None, status=0, client_agent='manual',
)
try:
res = r.json()
res = json.loads(r.content)
scan_id = int(res['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
except Exception as e:
@ -194,7 +187,7 @@ def get_status(url, apikey, dir_name):
params = {
'apikey': apikey,
'cmd': 'getHistory',
'cmd': 'getHistory'
}
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params))
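For context, the HeadPhones-style endpoints used above are plain GET requests carrying apikey and cmd query parameters. A hedged sketch with a placeholder URL and key:
import requests

def get_history(url, apikey):
    # url is assumed to be something like 'http://localhost:8181/api'
    params = {'apikey': apikey, 'cmd': 'getHistory'}
    r = requests.get(url, params=params, verify=False, timeout=(30, 120))
    return r.json() if r.ok else []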

View file

@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import copy
import errno
import json
@ -14,34 +7,19 @@ import os
import time
import requests
from oauthlib.oauth2 import LegacyApplicationClient
from requests_oauthlib import OAuth2Session
import core
from core import logger, transcoder
from core.auto_process.common import (
ProcessResult,
command_complete,
completed_download_handling,
)
from core.auto_process.managers.sickbeard import InitSickBeard
from core.plugins.downloaders.nzb.utils import report_nzb
from core.plugins.subtitles import import_subs, rename_subs
from core.auto_process.common import command_complete, completed_download_handling, ProcessResult
from core.forks import auto_fork
from core.scene_exceptions import process_all_exceptions
from core.utils import (
convert_to_ascii,
flatten,
list_media_files,
remote_dir,
remove_dir,
server_responding,
)
from core.utils import convert_to_ascii, flatten, import_subs, list_media_files, remote_dir, remove_dir, report_nzb, server_responding
requests.packages.urllib3.disable_warnings()
def process(section, dir_name, input_name=None, failed=False, client_agent='manual', download_id=None, input_category=None, failure_link=None):
cfg = dict(core.CFG[section][input_category])
host = cfg['host']
@ -52,21 +30,12 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
username = cfg.get('username', '')
password = cfg.get('password', '')
apikey = cfg.get('apikey', '')
api_version = int(cfg.get('api_version', 2))
sso_username = cfg.get('sso_username', '')
sso_password = cfg.get('sso_password', '')
# Refactor into an OO structure.
    # For now let's do both the OO and the serialized code, until everything has been migrated.
init_sickbeard = InitSickBeard(cfg, section, input_category)
if server_responding('{0}{1}:{2}{3}'.format(protocol, host, port, web_root)):
# auto-detect correct fork
        # During the refactor we still return fork, fork_params, but these are also stored in the object.
# Should be changed after refactor.
fork, fork_params = init_sickbeard.auto_fork()
elif not username and not apikey and not sso_username:
logger.info('No SickBeard / SiCKRAGE username or Sonarr apikey entered. Performing transcoder functions only')
fork, fork_params = auto_fork(section, input_category)
elif not username and not apikey:
logger.info('No SickBeard username or Sonarr apikey entered. Performing transcoder functions only')
fork, fork_params = 'None', {}
else:
logger.error('Server did not respond. Exiting', section)
@ -106,13 +75,12 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
# Attempt to create the directory if it doesn't exist and ignore any
# error stating that it already exists. This fixes a bug where SickRage
# won't process the directory because it doesn't exist.
if dir_name:
try:
os.makedirs(dir_name) # Attempt to create the directory
except OSError as e:
# Re-raise the error if it wasn't about the directory not existing
if e.errno != errno.EEXIST:
raise
try:
os.makedirs(dir_name) # Attempt to create the directory
except OSError as e:
# Re-raise the error if it wasn't about the directory not existing
if e.errno != errno.EEXIST:
raise
if 'process_method' not in fork_params or (client_agent in ['nzbget', 'sabnzbd'] and nzb_extraction_by != 'Destination'):
if input_name:
@ -131,32 +99,24 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
# Check video files for corruption
good_files = 0
valid_files = 0
num_files = 0
for video in list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
num_files += 1
if transcoder.is_video_good(video, status):
good_files += 1
if not core.REQUIRE_LAN or transcoder.is_video_good(video, status, require_lan=core.REQUIRE_LAN):
valid_files += 1
import_subs(video)
rename_subs(dir_name)
import_subs(video)
if num_files > 0:
        if valid_files == num_files and status != 0:
        if good_files == num_files and status != 0:
logger.info('Found Valid Videos. Setting status Success')
status = 0
failed = 0
if valid_files < num_files and status == 0:
if good_files < num_files and status == 0:
logger.info('Found corrupt videos. Setting status Failed')
status = 1
failed = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if good_files == num_files:
logger.debug('Video marked as failed due to missing required language: {0}'.format(core.REQUIRE_LAN), section)
else:
logger.debug('Video marked as failed due to missing playable audio or video', section)
if good_files < num_files and failure_link: # only report corrupt files
if failure_link:
failure_link += '&corrupt=true'
elif client_agent == 'manual':
logger.warning('No media files found in directory {0} to manually process.'.format(dir_name), section)
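The corruption check above is simple counting: each media file increments num_files, each playable one increments good_files (valid_files when a required language is enforced), and status flips when the counts disagree. A standalone sketch with a stub in place of transcoder.is_video_good:
def is_video_good(path, status):
    # stub standing in for transcoder.is_video_good
    return path.endswith('.mkv')

num_files = good_files = 0
for video in ['show.s01e01.mkv', 'broken.avi']:  # placeholder file list
    num_files += 1
    if is_video_good(video, status=0):
        good_files += 1
status = 0 if num_files and good_files == num_files else 1  # 1 here: one bad file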
@ -199,74 +159,64 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
status_code=1,
)
# Part of the refactor
if init_sickbeard.fork_obj:
init_sickbeard.fork_obj.initialize(dir_name, input_name, failed, client_agent='manual')
# configure SB params to pass
    # We don't want to remove params for the forks that have been refactored,
    # as we don't want to duplicate this part of the code.
if not init_sickbeard.fork_obj:
fork_params['quiet'] = 1
fork_params['proc_type'] = 'manual'
if input_name is not None:
fork_params['nzbName'] = input_name
fork_params['quiet'] = 1
fork_params['proc_type'] = 'manual'
if input_name is not None:
fork_params['nzbName'] = input_name
for param in copy.copy(fork_params):
if param == 'failed':
if failed > 1:
failed = 1
fork_params[param] = failed
if 'proc_type' in fork_params:
del fork_params['proc_type']
if 'type' in fork_params:
del fork_params['type']
for param in copy.copy(fork_params):
if param == 'failed':
fork_params[param] = failed
if 'proc_type' in fork_params:
del fork_params['proc_type']
if 'type' in fork_params:
del fork_params['type']
if param == 'return_data':
fork_params[param] = 0
if 'quiet' in fork_params:
del fork_params['quiet']
if param == 'return_data':
fork_params[param] = 0
if 'quiet' in fork_params:
del fork_params['quiet']
if param == 'type':
if 'type' in fork_params: # only set if we haven't already deleted for 'failed' above.
fork_params[param] = 'manual'
if 'proc_type' in fork_params:
del fork_params['proc_type']
if param == 'type':
fork_params[param] = 'manual'
if 'proc_type' in fork_params:
del fork_params['proc_type']
if param in ['dir_name', 'dir', 'proc_dir', 'process_directory', 'path']:
fork_params[param] = dir_name
if remote_path:
fork_params[param] = remote_dir(dir_name)
if param in ['dir_name', 'dir', 'proc_dir', 'process_directory', 'path']:
fork_params[param] = dir_name
if remote_path:
fork_params[param] = remote_dir(dir_name)
if param == 'process_method':
if process_method:
fork_params[param] = process_method
else:
del fork_params[param]
if param == 'process_method':
if process_method:
fork_params[param] = process_method
else:
del fork_params[param]
if param in ['force', 'force_replace']:
if force:
fork_params[param] = force
else:
del fork_params[param]
if param in ['force', 'force_replace']:
if force:
fork_params[param] = force
else:
del fork_params[param]
if param in ['delete_on', 'delete']:
if delete_on:
fork_params[param] = delete_on
else:
del fork_params[param]
if param in ['delete_on', 'delete']:
if delete_on:
fork_params[param] = delete_on
else:
del fork_params[param]
if param == 'ignore_subs':
if ignore_subs:
fork_params[param] = ignore_subs
else:
del fork_params[param]
if param == 'ignore_subs':
if ignore_subs:
fork_params[param] = ignore_subs
else:
del fork_params[param]
if param == 'force_next':
fork_params[param] = 1
if param == 'force_next':
fork_params[param] = 1
# delete any unused params so we don't pass them to SB by mistake
[fork_params.pop(k) for k, v in list(fork_params.items()) if v is None]
# delete any unused params so we don't pass them to SB by mistake
[fork_params.pop(k) for k, v in list(fork_params.items()) if v is None]
if status == 0:
if section == 'NzbDrone' and not apikey:
@ -301,25 +251,15 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
url = None
if section == 'SickBeard':
if apikey:
url = '{0}{1}:{2}{3}/api/{4}/'.format(protocol, host, port, web_root, apikey)
            if 'cmd' not in fork_params:
if 'SickGear' in fork:
fork_params['cmd'] = 'sg.postprocess'
else:
fork_params['cmd'] = 'postprocess'
url = '{0}{1}:{2}{3}/api/{4}/?cmd=postprocess'.format(protocol, host, port, web_root, apikey)
elif fork == 'Stheno':
url = '{0}{1}:{2}{3}/home/postprocess/process_episode'.format(protocol, host, port, web_root)
url = "{0}{1}:{2}{3}/home/postprocess/process_episode".format(protocol, host, port, web_root)
else:
url = '{0}{1}:{2}{3}/home/postprocess/processEpisode'.format(protocol, host, port, web_root)
elif section == 'SiCKRAGE':
if api_version >= 2:
url = '{0}{1}:{2}{3}/api/v{4}/postprocess'.format(protocol, host, port, web_root, api_version)
else:
url = '{0}{1}:{2}{3}/api/v{4}/{5}/'.format(protocol, host, port, web_root, api_version, apikey)
elif section == 'NzbDrone':
url = '{0}{1}:{2}{3}/api/v3/command'.format(protocol, host, port, web_root)
url2 = '{0}{1}:{2}{3}/api/v3/config/downloadClient'.format(protocol, host, port, web_root)
headers = {'X-Api-Key': apikey, "Content-Type": "application/json"}
url = '{0}{1}:{2}{3}/api/command'.format(protocol, host, port, web_root)
url2 = '{0}{1}:{2}{3}/api/config/downloadClient'.format(protocol, host, port, web_root)
headers = {'X-Api-Key': apikey}
# params = {'sortKey': 'series.title', 'page': 1, 'pageSize': 1, 'sortDir': 'asc'}
if remote_path:
logger.debug('remote_path: {0}'.format(remote_dir(dir_name)), section)
@ -333,45 +273,16 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
try:
if section == 'SickBeard':
if init_sickbeard.fork_obj:
return init_sickbeard.fork_obj.api_call()
else:
s = requests.Session()
logger.debug('Opening URL: {0} with params: {1}'.format(url, fork_params), section)
if not apikey and username and password:
login = '{0}{1}:{2}{3}/login'.format(protocol, host, port, web_root)
login_params = {'username': username, 'password': password}
r = s.get(login, verify=False, timeout=(30, 60))
if r.status_code in [401, 403] and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
s.post(login, data=login_params, stream=True, verify=False, timeout=(30, 60))
r = s.get(url, auth=(username, password), params=fork_params, stream=True, verify=False, timeout=(30, 1800))
elif section == 'SiCKRAGE':
logger.debug('Opening URL: {0} with params: {1}'.format(url, fork_params), section)
s = requests.Session()
if api_version >= 2 and sso_username and sso_password:
oauth = OAuth2Session(client=LegacyApplicationClient(client_id=core.SICKRAGE_OAUTH_CLIENT_ID))
oauth_token = oauth.fetch_token(client_id=core.SICKRAGE_OAUTH_CLIENT_ID,
token_url=core.SICKRAGE_OAUTH_TOKEN_URL,
username=sso_username,
password=sso_password)
s.headers.update({'Authorization': 'Bearer ' + oauth_token['access_token']})
params = {
'path': fork_params['path'],
'failed': str(bool(fork_params['failed'])).lower(),
'processMethod': 'move',
'forceReplace': str(bool(fork_params['force_replace'])).lower(),
'returnData': str(bool(fork_params['return_data'])).lower(),
'delete': str(bool(fork_params['delete'])).lower(),
'forceNext': str(bool(fork_params['force_next'])).lower(),
'nzbName': fork_params['nzbName']
}
else:
params = fork_params
r = s.get(url, params=params, stream=True, verify=False, timeout=(30, 1800))
if not apikey and username and password:
login = '{0}{1}:{2}{3}/login'.format(protocol, host, port, web_root)
login_params = {'username': username, 'password': password}
r = s.get(login, verify=False, timeout=(30, 60))
if r.status_code == 401 and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
s.post(login, data=login_params, stream=True, verify=False, timeout=(30, 60))
r = s.get(url, auth=(username, password), params=fork_params, stream=True, verify=False, timeout=(30, 1800))
elif section == 'NzbDrone':
logger.debug('Opening URL: {0} with data: {1}'.format(url, data), section)
r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
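The shape of that NzbDrone request, shown standalone with a placeholder host and key; the path is /api/command on the older side of this diff and /api/v3/command on the newer side:
import json

import requests

url = 'http://localhost:8989/api/v3/command'
headers = {'X-Api-Key': 'your-api-key', 'Content-Type': 'application/json'}
data = json.dumps({'name': 'DownloadedEpisodesScan', 'path': '/downloads/show'})
r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
print(r.json().get('id'))  # command id, used for the later polling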
@ -410,15 +321,9 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
if queued:
time.sleep(60)
elif section == 'SiCKRAGE':
if api_version >= 2:
success = True
else:
if r.json()['result'] == 'success':
success = True
elif section == 'NzbDrone':
try:
res = r.json()
res = json.loads(r.content)
scan_id = int(res['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
started = True
@ -467,8 +372,7 @@ def process(section, dir_name, input_name=None, failed=False, client_agent='manu
# status_code=1,
# )
if completed_download_handling(url2, headers, section=section):
logger.debug('The Scan command did not return status completed, but complete Download Handling is enabled. Passing back to {0}.'.format(section),
section)
logger.debug('The Scan command did not return status completed, but complete Download Handling is enabled. Passing back to {0}.'.format(section), section)
return ProcessResult(
message='{0}: Complete DownLoad Handling is enabled. Passing back to {0}'.format(section),
status_code=status,

View file

@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import copy
import os
import shutil
@ -20,17 +13,17 @@ from core import logger
class Section(configobj.Section, object):
def isenabled(self):
def isenabled(section):
        # Check if the subsection is enabled; returns true/false for a specified subsection, otherwise returns true/false per subsection in a dict
if not self.sections:
if not section.sections:
try:
value = list(ConfigObj.find_key(self, 'enabled'))[0]
value = list(ConfigObj.find_key(section, 'enabled'))[0]
except Exception:
value = 0
if int(value) == 1:
return self
return section
else:
to_return = copy.deepcopy(self)
to_return = copy.deepcopy(section)
for section_name, subsections in to_return.items():
for subsection in subsections:
try:
@ -47,8 +40,8 @@ class Section(configobj.Section, object):
return to_return
def findsection(self, key):
to_return = copy.deepcopy(self)
def findsection(section, key):
to_return = copy.deepcopy(section)
for subsection in to_return:
try:
value = list(ConfigObj.find_key(to_return[subsection], key))[0]
@ -127,7 +120,7 @@ class ConfigObj(configobj.ConfigObj, Section):
shutil.copyfile(core.CONFIG_SPEC_FILE, core.CONFIG_FILE)
CFG_OLD = config(core.CONFIG_FILE)
except Exception as error:
logger.error('Error {msg} when copying to .cfg'.format(msg=error))
logger.debug('Error {msg} when copying to .cfg'.format(msg=error))
try:
# check for autoProcessMedia.cfg.spec and create if it does not exist
@ -135,7 +128,7 @@ class ConfigObj(configobj.ConfigObj, Section):
shutil.copyfile(core.CONFIG_FILE, core.CONFIG_SPEC_FILE)
CFG_NEW = config(core.CONFIG_SPEC_FILE)
except Exception as error:
logger.error('Error {msg} when copying to .spec'.format(msg=error))
logger.debug('Error {msg} when copying to .spec'.format(msg=error))
# check for autoProcessMedia.cfg and autoProcessMedia.cfg.spec and if they don't exist return and fail
if CFG_NEW is None or CFG_OLD is None:
@ -143,24 +136,14 @@ class ConfigObj(configobj.ConfigObj, Section):
subsections = {}
# gather all new-style and old-style sub-sections
for newsection in CFG_NEW:
for newsection, newitems in CFG_NEW.items():
if CFG_NEW[newsection].sections:
subsections.update({newsection: CFG_NEW[newsection].sections})
for section in CFG_OLD:
for section, items in CFG_OLD.items():
if CFG_OLD[section].sections:
subsections.update({section: CFG_OLD[section].sections})
for option, value in CFG_OLD[section].items():
if option in ['category',
'cpsCategory',
'sbCategory',
'srCategory',
'hpCategory',
'mlCategory',
'gzCategory',
'raCategory',
'ndCategory',
'W3Category']:
if option in ['category', 'cpsCategory', 'sbCategory', 'hpCategory', 'mlCategory', 'gzCategory', 'raCategory', 'ndCategory']:
if not isinstance(value, list):
value = [value]
@ -178,7 +161,7 @@ class ConfigObj(configobj.ConfigObj, Section):
if section in ['CouchPotato', 'HeadPhones', 'Gamez', 'Mylar']:
if option in ['username', 'password']:
values.pop(option)
if section in ['Mylar']:
if section in ['SickBeard', 'Mylar']:
if option == 'wait_for': # remove old format
values.pop(option)
if section in ['SickBeard', 'NzbDrone']:
@ -201,9 +184,6 @@ class ConfigObj(configobj.ConfigObj, Section):
if option == 'forceClean':
CFG_NEW['General']['force_clean'] = value
values.pop(option)
if option == 'qBittorrenHost': # We had a typo that is now fixed.
CFG_NEW['Torrent']['qBittorrentHost'] = value
values.pop(option)
if section in ['Transcoder']:
if option in ['niceness']:
CFG_NEW['Posix'][option] = value
@ -214,7 +194,6 @@ class ConfigObj(configobj.ConfigObj, Section):
elif not value:
value = 0
values[option] = value
# remove any options that we no longer need so they don't migrate into our new config
if not list(ConfigObj.find_key(CFG_NEW, option)):
try:
@ -259,20 +238,6 @@ class ConfigObj(configobj.ConfigObj, Section):
elif section in CFG_OLD.keys():
process_section(section, subsection)
        # migrate SiCKRAGE settings from SickBeard section to new dedicated SiCKRAGE section
if CFG_OLD['SickBeard']['tv']['enabled'] and CFG_OLD['SickBeard']['tv']['fork'] == 'sickrage-api':
for option, value in iteritems(CFG_OLD['SickBeard']['tv']):
if option in CFG_NEW['SiCKRAGE']['tv']:
CFG_NEW['SiCKRAGE']['tv'][option] = value
# set API version to 1 if API key detected and no SSO username is set
if CFG_NEW['SiCKRAGE']['tv']['apikey'] and not CFG_NEW['SiCKRAGE']['tv']['sso_username']:
CFG_NEW['SiCKRAGE']['tv']['api_version'] = 1
# disable SickBeard section
CFG_NEW['SickBeard']['tv']['enabled'] = 0
CFG_NEW['SickBeard']['tv']['fork'] = 'auto'
# create a backup of our old config
CFG_OLD.filename = '{config}.old'.format(config=core.CONFIG_FILE)
CFG_OLD.write()
@ -299,16 +264,6 @@ class ConfigObj(configobj.ConfigObj, Section):
logger.warning('{x} category is set for CouchPotato and Radarr. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_RACATEGORY']))
if 'NZBPO_RACATEGORY' in os.environ and 'NZBPO_W3CATEGORY' in os.environ:
if os.environ['NZBPO_RACATEGORY'] == os.environ['NZBPO_W3CATEGORY']:
logger.warning('{x} category is set for Watcher3 and Radarr. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_RACATEGORY']))
if 'NZBPO_W3CATEGORY' in os.environ and 'NZBPO_CPSCATEGORY' in os.environ:
if os.environ['NZBPO_W3CATEGORY'] == os.environ['NZBPO_CPSCATEGORY']:
logger.warning('{x} category is set for CouchPotato and Watcher3. '
'Please check your config in NZBGet'.format
(x=os.environ['NZBPO_W3CATEGORY']))
if 'NZBPO_LICATEGORY' in os.environ and 'NZBPO_HPCATEGORY' in os.environ:
if os.environ['NZBPO_LICATEGORY'] == os.environ['NZBPO_HPCATEGORY']:
logger.warning('{x} category is set for HeadPhones and Lidarr. '
@ -322,8 +277,8 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_new[section][option] = value
section = 'General'
env_keys = ['AUTO_UPDATE', 'CHECK_MEDIA', 'REQUIRE_LAN', 'SAFE_MODE', 'NO_EXTRACT_FAILED']
cfg_keys = ['auto_update', 'check_media', 'require_lan', 'safe_mode', 'no_extract_failed']
env_keys = ['AUTO_UPDATE', 'CHECK_MEDIA', 'SAFE_MODE', 'NO_EXTRACT_FAILED']
cfg_keys = ['auto_update', 'check_media', 'safe_mode', 'no_extract_failed']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
if key in os.environ:
@ -359,36 +314,13 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['Radarr'].sections:
cfg_new['Radarr'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['Watcher3'].sections:
cfg_new['Watcher3'][env_cat_key]['enabled'] = 0
section = 'Watcher3'
env_cat_key = 'NZBPO_W3CATEGORY'
env_keys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'METHOD', 'DELETE_FAILED', 'REMOTE_PATH',
'WAIT_FOR', 'WATCH_DIR', 'OMDBAPIKEY']
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'method', 'delete_failed', 'remote_path',
'wait_for', 'watch_dir', 'omdbapikey']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_W3{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['Radarr'].sections:
cfg_new['Radarr'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['CouchPotato'].sections:
cfg_new['CouchPotato'][env_cat_key]['enabled'] = 0
section = 'SickBeard'
env_cat_key = 'NZBPO_SBCATEGORY'
env_keys = ['ENABLED', 'HOST', 'PORT', 'APIKEY', 'USERNAME', 'PASSWORD', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED', 'TORRENT_NOLINK',
'NZBEXTRACTIONBY', 'REMOTE_PATH', 'PROCESS_METHOD']
cfg_keys = ['enabled', 'host', 'port', 'apikey', 'username', 'password', 'ssl', 'web_root', 'watch_dir', 'fork', 'delete_failed', 'Torrent_NoLink',
'nzbExtractionBy', 'remote_path', 'process_method']
env_keys = ['ENABLED', 'HOST', 'PORT', 'APIKEY', 'USERNAME', 'PASSWORD', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK',
'DELETE_FAILED', 'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'REMOTE_PATH', 'PROCESS_METHOD']
cfg_keys = ['enabled', 'host', 'port', 'apikey', 'username', 'password', 'ssl', 'web_root', 'watch_dir', 'fork',
'delete_failed', 'Torrent_NoLink', 'nzbExtractionBy', 'remote_path', 'process_method']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_SB{index}'.format(index=env_keys[index])
@ -399,29 +331,6 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['SiCKRAGE'].sections:
cfg_new['SiCKRAGE'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['NzbDrone'].sections:
cfg_new['NzbDrone'][env_cat_key]['enabled'] = 0
section = 'SiCKRAGE'
env_cat_key = 'NZBPO_SRCATEGORY'
env_keys = ['ENABLED', 'HOST', 'PORT', 'APIKEY', 'API_VERSION', 'SSO_USERNAME', 'SSO_PASSWORD', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK',
'DELETE_FAILED', 'TORRENT_NOLINK', 'NZBEXTRACTIONBY', 'REMOTE_PATH', 'PROCESS_METHOD']
cfg_keys = ['enabled', 'host', 'port', 'apikey', 'api_version', 'sso_username', 'sso_password', 'ssl', 'web_root', 'watch_dir', 'fork',
'delete_failed', 'Torrent_NoLink', 'nzbExtractionBy', 'remote_path', 'process_method']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_SR{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['SickBeard'].sections:
cfg_new['SickBeard'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['NzbDrone'].sections:
cfg_new['NzbDrone'][env_cat_key]['enabled'] = 0
@ -474,21 +383,6 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
section = 'LazyLibrarian'
env_cat_key = 'NZBPO_LLCATEGORY'
env_keys = ['ENABLED', 'APIKEY', 'HOST', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'REMOTE_PATH']
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'watch_dir', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_LL{index}'.format(index=env_keys[index])
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
if os.environ[env_cat_key] not in cfg_new[section].sections:
cfg_new[section][os.environ[env_cat_key]] = {}
cfg_new[section][os.environ[env_cat_key]][option] = value
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
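Every category block in this method repeats the same NZBPO_* mapping loop; a generic sketch of the pattern (this helper is illustrative, not part of the codebase):
import os

def map_env_to_cfg(cfg_new, section, prefix, env_cat_key, env_keys, cfg_keys):
    # e.g. section='SickBeard', prefix='SB', env_cat_key='NZBPO_SBCATEGORY'
    if env_cat_key not in os.environ:
        return
    category = os.environ[env_cat_key]
    for env_key, option in zip(env_keys, cfg_keys):
        key = 'NZBPO_{0}{1}'.format(prefix, env_key)
        if key in os.environ:
            if category not in cfg_new[section]:
                cfg_new[section][category] = {}
            cfg_new[section][category][option] = os.environ[key]
            cfg_new[section][category]['enabled'] = 1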
section = 'NzbDrone'
env_cat_key = 'NZBPO_NDCATEGORY'
env_keys = ['ENABLED', 'HOST', 'APIKEY', 'PORT', 'SSL', 'WEB_ROOT', 'WATCH_DIR', 'FORK', 'DELETE_FAILED',
@ -508,8 +402,6 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['SickBeard'].sections:
cfg_new['SickBeard'][env_cat_key]['enabled'] = 0
if os.environ[env_cat_key] in cfg_new['SiCKRAGE'].sections:
cfg_new['SiCKRAGE'][env_cat_key]['enabled'] = 0
section = 'Radarr'
env_cat_key = 'NZBPO_RACATEGORY'
@ -530,8 +422,6 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
if os.environ[env_cat_key] in cfg_new['CouchPotato'].sections:
cfg_new['CouchPotato'][env_cat_key]['enabled'] = 0
            if os.environ[env_cat_key] in cfg_new['Watcher3'].sections:
cfg_new['Watcher3'][env_cat_key]['enabled'] = 0
section = 'Lidarr'
env_cat_key = 'NZBPO_LICATEGORY'

View file

@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from core import logger, main_db
from core.utils import backup_versioned_file
@ -40,7 +33,7 @@ class InitialSchema(main_db.SchemaUpgrade):
queries = [
'CREATE TABLE db_version (db_version INTEGER);',
'CREATE TABLE downloads (input_directory TEXT, input_name TEXT, input_hash TEXT, input_id TEXT, client_agent TEXT, status INTEGER, last_update NUMERIC, CONSTRAINT pk_downloadID PRIMARY KEY (input_directory, input_name));',
'INSERT INTO db_version (db_version) VALUES (2);',
'INSERT INTO db_version (db_version) VALUES (2);'
]
for query in queries:
self.connection.action(query)
@ -66,7 +59,7 @@ class InitialSchema(main_db.SchemaUpgrade):
'INSERT INTO downloads2 SELECT * FROM downloads;',
'DROP TABLE IF EXISTS downloads;',
'ALTER TABLE downloads2 RENAME TO downloads;',
'INSERT INTO db_version (db_version) VALUES (2);',
'INSERT INTO db_version (db_version) VALUES (2);'
]
for query in queries:
self.connection.action(query)
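The same versioned-schema idea, runnable standalone against an in-memory database (queries copied from the migration above):
import sqlite3

conn = sqlite3.connect(':memory:')
for query in [
    'CREATE TABLE db_version (db_version INTEGER);',
    'CREATE TABLE downloads (input_directory TEXT, input_name TEXT, input_hash TEXT, input_id TEXT, client_agent TEXT, status INTEGER, last_update NUMERIC, CONSTRAINT pk_downloadID PRIMARY KEY (input_directory, input_name));',
    'INSERT INTO db_version (db_version) VALUES (2);',
]:
    conn.execute(query)
print(conn.execute('SELECT db_version FROM db_version').fetchone()[0])  # 2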

View file

@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import platform
import shutil
@ -28,11 +21,11 @@ def extract(file_path, output_destination):
wscriptlocation = os.path.join(os.environ['WINDIR'], 'system32', 'wscript.exe')
invislocation = os.path.join(core.APP_ROOT, 'core', 'extractor', 'bin', 'invisible.vbs')
cmd_7zip = [wscriptlocation, invislocation, str(core.SHOWEXTRACT), core.SEVENZIP, 'x', '-y']
ext_7zip = ['.rar', '.zip', '.tar.gz', 'tgz', '.tar.bz2', '.tbz', '.tar.lzma', '.tlz', '.7z', '.xz', '.gz']
ext_7zip = ['.rar', '.zip', '.tar.gz', 'tgz', '.tar.bz2', '.tbz', '.tar.lzma', '.tlz', '.7z', '.xz']
extract_commands = dict.fromkeys(ext_7zip, cmd_7zip)
# Using unix
else:
required_cmds = ['unrar', 'unzip', 'tar', 'unxz', 'unlzma', '7zr', 'bunzip2', 'gunzip']
required_cmds = ['unrar', 'unzip', 'tar', 'unxz', 'unlzma', '7zr', 'bunzip2']
        # ## Possible future support:
# gunzip: gz (cmd will delete original archive)
# ## the following do not extract to dest dir
@ -49,7 +42,6 @@ def extract(file_path, output_destination):
'.tar.lzma': ['tar', '--lzma', '-xf'], '.tlz': ['tar', '--lzma', '-xf'],
'.tar.xz': ['tar', '--xz', '-xf'], '.txz': ['tar', '--xz', '-xf'],
'.7z': ['7zr', 'x'],
'.gz': ['gunzip'],
}
# Test command exists and if not, remove
if not os.getenv('TR_TORRENT_DIR'):
@ -83,8 +75,6 @@ def extract(file_path, output_destination):
# Check if this is a tar
if os.path.splitext(ext[0])[1] == '.tar':
cmd = extract_commands['.tar{ext}'.format(ext=ext[1])]
else: # Try gunzip
cmd = extract_commands[ext[1]]
elif ext[1] in ('.1', '.01', '.001') and os.path.splitext(ext[0])[1] in ('.rar', '.zip', '.7z'):
cmd = extract_commands[os.path.splitext(ext[0])[1]]
elif ext[1] in ('.cb7', '.cba', '.cbr', '.cbt', '.cbz'): # don't extract these comic book archives.
@ -131,15 +121,14 @@ def extract(file_path, output_destination):
else:
cmd = core.NICENESS + cmd
        cmd2 = list(cmd)  # copy so the password flag below doesn't mutate cmd
        if 'gunzip' not in cmd:  # gunzip doesn't support passwords
cmd2.append('-p-') # don't prompt for password.
cmd2.append('-p-') # don't prompt for password.
p = Popen(cmd2, stdout=devnull, stderr=devnull, startupinfo=info) # should extract files fine.
res = p.wait()
if res == 0: # Both Linux and Windows return 0 for successful.
core.logger.info('EXTRACTOR: Extraction was successful for {file} to {destination}'.format
(file=file_path, destination=output_destination))
success = 1
        elif len(passwords) > 0 and 'gunzip' not in cmd:
elif len(passwords) > 0:
core.logger.info('EXTRACTOR: Attempting to extract with passwords')
for password in passwords:
if password == '': # if edited in windows or otherwise if blank lines.
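A minimal sketch of the extension-to-command dispatch extract() builds above; the commands are illustrative and assume the binaries exist on PATH:
import os

extract_commands = {
    '.zip': ['unzip'],
    '.rar': ['unrar', 'x', '-o+', '-y'],
    '.7z': ['7zr', 'x'],
}

def build_command(file_path):
    ext = os.path.splitext(file_path)[1].lower()
    cmd = extract_commands.get(ext)
    if cmd is None:
        raise ValueError('No extractor configured for {0}'.format(ext))
    return cmd + [file_path]

print(build_command('/downloads/movie.rar'))  # ['unrar', 'x', '-o+', '-y', '/downloads/movie.rar']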

143
core/forks.py Normal file
View file

@ -0,0 +1,143 @@
# coding=utf-8
import requests
from six import iteritems
import core
from core import logger
def auto_fork(section, input_category):
# auto-detect correct section
# config settings
cfg = dict(core.CFG[section][input_category])
host = cfg.get('host')
port = cfg.get('port')
username = cfg.get('username')
password = cfg.get('password')
apikey = cfg.get('apikey')
ssl = int(cfg.get('ssl', 0))
web_root = cfg.get('web_root', '')
replace = {
'medusa': 'Medusa',
'medusa-api': 'Medusa-api',
'sickbeard-api': 'SickBeard-api',
'sickgear': 'SickGear',
'sickchill': 'SickChill',
'sickrage': 'SickRage',
'stheno': 'Stheno',
}
_val = cfg.get('fork', 'auto')
f1 = replace.get(_val, _val)
try:
fork = f1, core.FORKS[f1]
except KeyError:
fork = 'auto'
protocol = 'https://' if ssl else 'http://'
detected = False
if section == 'NzbDrone':
logger.info('Attempting to verify {category} fork'.format
(category=input_category))
url = '{protocol}{host}:{port}{root}/api/rootfolder'.format(
protocol=protocol, host=host, port=port, root=web_root)
headers = {'X-Api-Key': apikey}
try:
r = requests.get(url, headers=headers, stream=True, verify=False)
except requests.ConnectionError:
logger.warning('Could not connect to {0}:{1} to verify fork!'.format(section, input_category))
if not r.ok:
logger.warning('Connection to {section}:{category} failed! '
'Check your configuration'.format
(section=section, category=input_category))
fork = ['default', {}]
elif fork == 'auto':
params = core.ALL_FORKS
rem_params = []
logger.info('Attempting to auto-detect {category} fork'.format(category=input_category))
# define the order to test. Default must be first since the default fork doesn't reject parameters.
# then in order of most unique parameters.
if apikey:
url = '{protocol}{host}:{port}{root}/api/{apikey}/?cmd=help&subject=postprocess'.format(
protocol=protocol, host=host, port=port, root=web_root, apikey=apikey)
else:
url = '{protocol}{host}:{port}{root}/home/postprocess/'.format(
protocol=protocol, host=host, port=port, root=web_root)
# attempting to auto-detect fork
try:
s = requests.Session()
if not apikey and username and password:
login = '{protocol}{host}:{port}{root}/login'.format(
protocol=protocol, host=host, port=port, root=web_root)
login_params = {'username': username, 'password': password}
r = s.get(login, verify=False, timeout=(30, 60))
if r.status_code == 401 and r.cookies.get('_xsrf'):
login_params['_xsrf'] = r.cookies.get('_xsrf')
s.post(login, data=login_params, stream=True, verify=False)
r = s.get(url, auth=(username, password), verify=False)
except requests.ConnectionError:
logger.info('Could not connect to {section}:{category} to perform auto-fork detection!'.format
(section=section, category=input_category))
r = []
if r and r.ok:
if apikey:
try:
json_data = r.json()
except ValueError:
logger.error('Failed to get JSON data from response')
logger.debug('Response received')
raise
try:
json_data = json_data['data']
except KeyError:
logger.error('Failed to get data from JSON')
logger.debug('Response received: {}'.format(json_data))
raise
else:
json_data = json_data.get('data', json_data)
optional_parameters = json_data['optionalParameters'].keys()
# Find excess parameters
excess_parameters = set(params).difference(optional_parameters)
logger.debug('Removing excess parameters: {}'.format(sorted(excess_parameters)))
rem_params.extend(excess_parameters)
else:
# Find excess parameters
rem_params.extend(
param
for param in params
if 'name="{param}"'.format(param=param) not in r.text
)
# Remove excess params
for param in rem_params:
params.pop(param)
for fork in sorted(iteritems(core.FORKS), reverse=False):
if params == fork[1]:
detected = True
break
if detected:
logger.info('{section}:{category} fork auto-detection successful ...'.format
(section=section, category=input_category))
elif rem_params:
logger.info('{section}:{category} fork auto-detection found custom params {params}'.format
(section=section, category=input_category, params=params))
fork = ['custom', params]
else:
logger.info('{section}:{category} fork auto-detection failed'.format
(section=section, category=input_category))
fork = core.FORKS.items()[core.FORKS.keys().index(core.FORK_DEFAULT)]
logger.info('{section}:{category} fork set to {fork}'.format
(section=section, category=input_category, fork=fork[0]))
return fork[0], fork[1]
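Hypothetical usage of auto_fork(); the section and category names are placeholders and assume core.CFG has already been loaded:
fork_name, fork_params = auto_fork('SickBeard', 'tv')
print('Detected fork: {0} with params: {1}'.format(fork_name, fork_params))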

View file

@ -1,17 +1,12 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
class GitHub(object):
"""Simple api wrapper for the Github API v3."""
"""
Simple api wrapper for the Github API v3.
"""
def __init__(self, github_repo_user, github_repo, branch='master'):
@ -20,14 +15,16 @@ class GitHub(object):
self.branch = branch
def _access_api(self, path, params=None):
"""Access API at given an API path and optional parameters."""
"""
Access the API at the path given and with the optional params given.
"""
url = 'https://api.github.com/{path}'.format(path='/'.join(path))
data = requests.get(url, params=params, verify=False)
return data.json() if data.ok else []
def commits(self):
"""
Get the 100 most recent commits from the specified user/repo/branch, starting from HEAD.
Uses the API to get a list of the 100 most recent commits from the specified user/repo/branch, starting from HEAD.
user: The github username of the person whose repo you're querying
repo: The repo name to query
@ -42,7 +39,7 @@ class GitHub(object):
def compare(self, base, head, per_page=1):
"""
Get compares between base and head.
Uses the API to get a list of compares between base and head.
user: The github username of the person whose repo you're querying
repo: The repo name to query

View file

@ -1,19 +1,11 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import logging
import os
import sys
import threading
import core
import functools
# number of log files to keep
NUM_LOGS = 3
@ -93,9 +85,9 @@ class NTMRotatingLogHandler(object):
console.setFormatter(DispatchingFormatter(
{'nzbtomedia': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S'),
'postprocess': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S'),
'db': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S'),
'db': logging.Formatter('[%(asctime)s] [%(levelname)s]::%(message)s', '%H:%M:%S')
},
logging.Formatter('%(message)s')))
logging.Formatter('%(message)s'), ))
# add the handler to the root logger
logging.getLogger('nzbtomedia').addHandler(console)
@ -119,7 +111,10 @@ class NTMRotatingLogHandler(object):
self.close_log(old_handler)
def _config_handler(self):
"""Configure a file handler to log at file_name and return it."""
"""
Configure a file handler to log at file_name and return it.
"""
file_handler = logging.FileHandler(self.log_file_path, encoding='utf-8')
file_handler.setLevel(DB)
@ -127,29 +122,29 @@ class NTMRotatingLogHandler(object):
file_handler.setFormatter(DispatchingFormatter(
{'nzbtomedia': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S'),
'postprocess': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S'),
'db': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S'),
'db': logging.Formatter('%(asctime)s %(levelname)-8s::%(message)s', '%Y-%m-%d %H:%M:%S')
},
logging.Formatter('%(message)s')))
logging.Formatter('%(message)s'), ))
return file_handler
def _log_file_name(self, i):
"""
Return a numbered log file name depending on i.
If i==0 it just uses logName, if not it appends it to the extension
e.g. (blah.log.3 for i == 3)
Returns a numbered log file name depending on i. If i==0 it just uses logName, if not it appends
it to the extension (blah.log.3 for i == 3)
        i: Log number to use
"""
return self.log_file_path + ('.{0}'.format(i) if i else '')
def _num_logs(self):
"""
Scan the log folder and figure out how many log files there are already on disk.
Scans the log folder and figures out how many log files there are already on disk
Returns: The number of the last used file (eg. mylog.log.3 would return 3). If there are no logs it returns -1
"""
cur_log = 0
while os.path.isfile(self._log_file_name(cur_log)):
cur_log += 1
@ -207,8 +202,9 @@ class NTMRotatingLogHandler(object):
ntm_logger = logging.getLogger('nzbtomedia')
pp_logger = logging.getLogger('postprocess')
db_logger = logging.getLogger('db')
pp_logger.postprocess = functools.partial(pp_logger.log, POSTPROCESS)
db_logger.db = functools.partial(db_logger.log, DB)
setattr(pp_logger, 'postprocess', lambda *args: pp_logger.log(POSTPROCESS, *args))
setattr(db_logger, 'db', lambda *args: db_logger.log(DB, *args))
try:
if log_level == DEBUG:
if core.LOG_DEBUG == 1:
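The functools.partial form on the newer side binds the custom level once instead of re-wrapping in a lambda. A self-contained sketch; the level number is illustrative:
import functools
import logging

POSTPROCESS = 21  # custom level between INFO (20) and WARNING (30)
logging.addLevelName(POSTPROCESS, 'POSTPROCESS')
logging.basicConfig(level=POSTPROCESS)

pp_logger = logging.getLogger('postprocess')
pp_logger.postprocess = functools.partial(pp_logger.log, POSTPROCESS)
pp_logger.postprocess('renamed %s files', 3)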

View file

@ -1,55 +1,19 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from __future__ import print_function
import os.path
import re
import sqlite3
import sys
import time
from six import text_type, PY2
from six import text_type
import core
from core import logger
from core import permissions
if PY2:
class Row(sqlite3.Row, object):
"""
Row factory that uses Byte Strings for keys.
The sqlite3.Row in Python 2 does not support unicode keys.
This overrides __getitem__ to attempt to encode the key to bytes first.
"""
def __getitem__(self, item):
"""
Get an item from the row by index or key.
:param item: Index or Key of item to return.
:return: An item from the sqlite3.Row.
"""
try:
# sqlite3.Row column names should be Bytes in Python 2
item = item.encode()
except AttributeError:
pass # assume item is a numeric index
return super(Row, self).__getitem__(item)
else:
from sqlite3 import Row
def db_filename(filename='nzbtomedia.db', suffix=None):
"""
Return the correct location of the database file.
@param filename: The sqlite database filename to use. If not specified,
will be made to be nzbtomedia.db
@param suffix: The suffix to append to the filename. A '.' will be added
@ -63,29 +27,13 @@ def db_filename(filename='nzbtomedia.db', suffix=None):
class DBConnection(object):
def __init__(self, filename='nzbtomedia.db', suffix=None, row_type=None):
self.filename = filename
path = db_filename(filename)
try:
self.connection = sqlite3.connect(path, 20)
except sqlite3.OperationalError as error:
if os.path.exists(path):
logger.error('Please check permissions on database: {0}'.format(path))
else:
logger.error('Database file does not exist')
logger.error('Please check permissions on directory: {0}'.format(path))
path = os.path.dirname(path)
mode = permissions.mode(path)
owner, group = permissions.ownership(path)
logger.error(
"=== PERMISSIONS ===========================\n"
" Path : {0}\n"
" Mode : {1}\n"
" Owner: {2}\n"
" Group: {3}\n"
"===========================================".format(path, mode, owner, group),
)
self.connection = sqlite3.connect(db_filename(filename), 20)
if row_type == 'dict':
self.connection.row_factory = self._dict_factory
else:
self.connection.row_factory = Row
self.connection.row_factory = sqlite3.Row
def check_db_version(self):
result = None
@ -235,9 +183,9 @@ class DBConnection(object):
'WHERE {conditions}'.format(
table=table_name,
params=', '.join(gen_params(value_dict)),
conditions=' AND '.join(gen_params(key_dict)),
conditions=' AND '.join(gen_params(key_dict))
),
items,
items
)
if self.connection.total_changes == changes_before:
@ -246,9 +194,9 @@ class DBConnection(object):
'VALUES ({values})'.format(
table=table_name,
columns=', '.join(map(text_type, value_dict.keys())),
values=', '.join(['?'] * len(value_dict.values())),
values=', '.join(['?'] * len(value_dict.values()))
),
list(value_dict.values()),
list(value_dict.values())
)
def table_info(self, table_name):
@ -259,6 +207,13 @@ class DBConnection(object):
for column in cursor
}
# http://stackoverflow.com/questions/3300464/how-can-i-get-dict-from-sqlite-query
def _dict_factory(self, cursor, row):
return {
col[0]: row[idx]
for idx, col in enumerate(cursor.description)
}
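A standalone demonstration of the dict row factory above:
import sqlite3

def dict_factory(cursor, row):
    return {col[0]: row[idx] for idx, col in enumerate(cursor.description)}

conn = sqlite3.connect(':memory:')
conn.row_factory = dict_factory
conn.execute('CREATE TABLE downloads (input_name TEXT, status INTEGER)')
conn.execute("INSERT INTO downloads VALUES ('example.nzb', 0)")
print(conn.execute('SELECT * FROM downloads').fetchone())  # {'input_name': 'example.nzb', 'status': 0}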
def sanity_check_database(connection, sanity_check):
sanity_check(connection).check()
@ -278,11 +233,7 @@ class DBSanityCheck(object):
def upgrade_database(connection, schema):
logger.log(u'Checking database structure...', logger.MESSAGE)
try:
_process_upgrade(connection, schema)
except Exception as error:
logger.error(error)
sys.exit(1)
_process_upgrade(connection, schema)
def pretty_name(class_name):

View file

@ -1,88 +0,0 @@
import os
import sys
import logging
log = logging.getLogger(__name__)
log.addHandler(logging.NullHandler())
WINDOWS = sys.platform == 'win32'
POSIX = not WINDOWS
try:
import pwd
import grp
except ImportError:
if POSIX:
raise
try:
from win32security import GetNamedSecurityInfo
from win32security import LookupAccountSid
from win32security import GROUP_SECURITY_INFORMATION
from win32security import OWNER_SECURITY_INFORMATION
from win32security import SE_FILE_OBJECT
except ImportError:
if WINDOWS:
raise
def mode(path):
"""Get permissions."""
stat_result = os.stat(path) # Get information from path
permissions_mask = 0o777 # Set mask for permissions info
# Get only the permissions part of st_mode as an integer
int_mode = stat_result.st_mode & permissions_mask
oct_mode = oct(int_mode) # Convert to octal representation
return oct_mode[2:] # Return mode but strip octal prefix
def nt_ownership(path):
"""Get the owner and group for a file or directory."""
def fully_qualified_name(sid):
"""Return a fully qualified account name."""
# Look up the account information for the given SID
# https://learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-lookupaccountsida
name, domain, _acct_type = LookupAccountSid(None, sid)
# Return account information formatted as DOMAIN\ACCOUNT_NAME
return '{}\\{}'.format(domain, name)
# Get the Windows security descriptor for the path
# https://learn.microsoft.com/en-us/windows/win32/api/aclapi/nf-aclapi-getnamedsecurityinfoa
security_descriptor = GetNamedSecurityInfo(
path, # Name of the item to query
SE_FILE_OBJECT, # Type of item to query (file or directory)
# Add OWNER and GROUP security information to result
OWNER_SECURITY_INFORMATION | GROUP_SECURITY_INFORMATION,
)
# Get the Security Identifier for the owner and group from the security descriptor
# https://learn.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-getsecuritydescriptorowner
# https://learn.microsoft.com/en-us/windows/win32/api/securitybaseapi/nf-securitybaseapi-getsecuritydescriptorgroup
owner_sid = security_descriptor.GetSecurityDescriptorOwner()
group_sid = security_descriptor.GetSecurityDescriptorGroup()
# Get the fully qualified account name (e.g. DOMAIN\ACCOUNT_NAME)
owner = fully_qualified_name(owner_sid)
group = fully_qualified_name(group_sid)
return owner, group
def posix_ownership(path):
"""Get the owner and group for a file or directory."""
# Get path information
stat_result = os.stat(path)
# Get account name from path stat result
owner = pwd.getpwuid(stat_result.st_uid).pw_name
group = grp.getgrgid(stat_result.st_gid).gr_name
return owner, group
# Select the ownership function appropriate for the platform
if WINDOWS:
ownership = nt_ownership
else:
ownership = posix_ownership
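Hypothetical usage of the helpers above, run against the current directory:
if __name__ == '__main__':
    print('mode : {0}'.format(mode('.')))
    owner, group = ownership('.')
    print('owner: {0}  group: {1}'.format(owner, group))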

View file

@ -1,5 +0,0 @@
from core.plugins.downloaders.nzb.configuration import configure_nzbs
from core.plugins.downloaders.torrent.configuration import (
configure_torrents,
configure_torrent_class,
)

View file

@ -1,23 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import core
def configure_nzbs(config):
nzb_config = config['Nzb']
core.NZB_CLIENT_AGENT = nzb_config['clientAgent'] # sabnzbd
core.NZB_DEFAULT_DIRECTORY = nzb_config['default_downloadDirectory']
core.NZB_NO_MANUAL = int(nzb_config['no_manual'], 0)
configure_sabnzbd(nzb_config)
def configure_sabnzbd(config):
core.SABNZBD_HOST = config['sabnzbd_host']
core.SABNZBD_PORT = int(config['sabnzbd_port'] or 8080) # defaults to accommodate NzbGet
core.SABNZBD_APIKEY = config['sabnzbd_apikey']

View file

@ -1,97 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import core
from core.plugins.downloaders.torrent.utils import create_torrent_class
def configure_torrents(config):
torrent_config = config['Torrent']
core.TORRENT_CLIENT_AGENT = torrent_config['clientAgent'] # utorrent | deluge | transmission | rtorrent | vuze | qbittorrent | synods | other
core.OUTPUT_DIRECTORY = torrent_config['outputDirectory'] # /abs/path/to/complete/
core.TORRENT_DEFAULT_DIRECTORY = torrent_config['default_downloadDirectory']
core.TORRENT_NO_MANUAL = int(torrent_config['no_manual'], 0)
configure_torrent_linking(torrent_config)
configure_flattening(torrent_config)
configure_torrent_deletion(torrent_config)
configure_torrent_categories(torrent_config)
configure_torrent_permissions(torrent_config)
configure_torrent_resuming(torrent_config)
configure_utorrent(torrent_config)
configure_transmission(torrent_config)
configure_deluge(torrent_config)
configure_qbittorrent(torrent_config)
configure_syno(torrent_config)
def configure_torrent_linking(config):
core.USE_LINK = config['useLink'] # no | hard | sym
def configure_flattening(config):
core.NOFLATTEN = (config['noFlatten'])
if isinstance(core.NOFLATTEN, str):
core.NOFLATTEN = core.NOFLATTEN.split(',')
def configure_torrent_categories(config):
core.CATEGORIES = (config['categories']) # music,music_videos,pictures,software
if isinstance(core.CATEGORIES, str):
core.CATEGORIES = core.CATEGORIES.split(',')
def configure_torrent_resuming(config):
core.TORRENT_RESUME_ON_FAILURE = int(config['resumeOnFailure'])
core.TORRENT_RESUME = int(config['resume'])
def configure_torrent_permissions(config):
core.TORRENT_CHMOD_DIRECTORY = int(str(config['chmodDirectory']), 8)
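Note that int(value, 8) parses the configured permission string as octal; a quick illustration:
print(int('775', 8))       # 509, i.e. the same value as 0o775
print(oct(int('775', 8)))  # '0o775'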
def configure_torrent_deletion(config):
core.DELETE_ORIGINAL = int(config['deleteOriginal'])
def configure_utorrent(config):
core.UTORRENT_WEB_UI = config['uTorrentWEBui'] # http://localhost:8090/gui/
core.UTORRENT_USER = config['uTorrentUSR'] # mysecretusr
core.UTORRENT_PASSWORD = config['uTorrentPWD'] # mysecretpwr
def configure_transmission(config):
core.TRANSMISSION_HOST = config['TransmissionHost'] # localhost
core.TRANSMISSION_PORT = int(config['TransmissionPort'])
core.TRANSMISSION_USER = config['TransmissionUSR'] # mysecretusr
core.TRANSMISSION_PASSWORD = config['TransmissionPWD'] # mysecretpwr
def configure_syno(config):
core.SYNO_HOST = config['synoHost'] # localhost
core.SYNO_PORT = int(config['synoPort'])
core.SYNO_USER = config['synoUSR'] # mysecretusr
core.SYNO_PASSWORD = config['synoPWD'] # mysecretpwr
def configure_deluge(config):
core.DELUGE_HOST = config['DelugeHost'] # localhost
core.DELUGE_PORT = int(config['DelugePort']) # 8084
core.DELUGE_USER = config['DelugeUSR'] # mysecretusr
core.DELUGE_PASSWORD = config['DelugePWD'] # mysecretpwr
def configure_qbittorrent(config):
core.QBITTORRENT_HOST = config['qBittorrentHost'] # localhost
core.QBITTORRENT_PORT = int(config['qBittorrentPort']) # 8080
core.QBITTORRENT_USER = config['qBittorrentUSR'] # mysecretusr
core.QBITTORRENT_PASSWORD = config['qBittorrentPWD'] # mysecretpwr
def configure_torrent_class():
# create torrent class
core.TORRENT_CLASS = create_torrent_class(core.TORRENT_CLIENT_AGENT)

View file

@ -1,28 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from deluge_client.client import DelugeRPCClient
import core
from core import logger
def configure_client():
agent = 'deluge'
host = core.DELUGE_HOST
port = core.DELUGE_PORT
user = core.DELUGE_USER
password = core.DELUGE_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
client = DelugeRPCClient(host, port, user, password)
try:
client.connect()
except Exception:
logger.error('Failed to connect to Deluge')
else:
return client

View file

@ -1,31 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from qbittorrent import Client as qBittorrentClient
import core
from core import logger
def configure_client():
agent = 'qbittorrent'
host = core.QBITTORRENT_HOST
port = core.QBITTORRENT_PORT
user = core.QBITTORRENT_USER
password = core.QBITTORRENT_PASSWORD
logger.debug(
'Connecting to {0}: http://{1}:{2}'.format(agent, host, port),
)
client = qBittorrentClient('http://{0}:{1}/'.format(host, port))
try:
client.login(user, password)
except Exception:
logger.error('Failed to connect to qBittorrent')
else:
return client

View file

@ -1,27 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from syno.downloadstation import DownloadStation
import core
from core import logger
def configure_client():
agent = 'synology'
host = core.SYNO_HOST
port = core.SYNO_PORT
user = core.SYNO_USER
password = core.SYNO_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
try:
client = DownloadStation(host, port, user, password)
except Exception:
logger.error('Failed to connect to synology')
else:
return client

View file

@ -1,27 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from transmissionrpc.client import Client as TransmissionClient
import core
from core import logger
def configure_client():
agent = 'transmission'
host = core.TRANSMISSION_HOST
port = core.TRANSMISSION_PORT
user = core.TRANSMISSION_USER
password = core.TRANSMISSION_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
try:
client = TransmissionClient(host, port, user, password)
except Exception:
logger.error('Failed to connect to Transmission')
else:
return client

View file

@ -1,26 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from utorrent.client import UTorrentClient
import core
from core import logger
def configure_client():
agent = 'utorrent'
web_ui = core.UTORRENT_WEB_UI
user = core.UTORRENT_USER
password = core.UTORRENT_PASSWORD
logger.debug('Connecting to {0}: {1}'.format(agent, web_ui))
try:
client = UTorrentClient(web_ui, user, password)
except Exception:
logger.error('Failed to connect to uTorrent')
else:
return client

View file

@ -1,5 +0,0 @@
from core.plugins.downloaders.torrent.utils import (
pause_torrent,
remove_torrent,
resume_torrent,
)

View file

@ -1,107 +0,0 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from babelfish import Language
import subliminal
import core
from core import logger
import os
import re
for provider in subliminal.provider_manager.internal_extensions:
if provider not in [str(x) for x in subliminal.provider_manager.list_entry_points()]:
subliminal.provider_manager.register(str(provider))
def import_subs(filename):
if not core.GETSUBS:
return
try:
subliminal.region.configure('dogpile.cache.dbm', arguments={'filename': 'cachefile.dbm'})
except Exception:
pass
languages = set()
for item in core.SLANGUAGES:
try:
languages.add(Language(item))
except Exception:
pass
if not languages:
return
logger.info('Attempting to download subtitles for {0}'.format(filename), 'SUBTITLES')
try:
video = subliminal.scan_video(filename)
subtitles = subliminal.download_best_subtitles({video}, languages)
subliminal.save_subtitles(video, subtitles[video])
for subtitle in subtitles[video]:
subtitle_path = subliminal.subtitle.get_subtitle_path(video.name, subtitle.language)
os.chmod(subtitle_path, 0o644)
except Exception as e:
logger.error('Failed to download subtitles for {0} due to: {1}'.format(filename, e), 'SUBTITLES')
def rename_subs(path):
filepaths = []
sub_ext = ['.srt', '.sub', '.idx']
vidfiles = core.list_media_files(path, media=True, audio=False, meta=False, archives=False)
if not vidfiles or len(vidfiles) > 1: # If there is more than 1 video file, or no video files, we can't rename subs.
return
name = os.path.splitext(os.path.split(vidfiles[0])[1])[0]
for directory, _, filenames in os.walk(path):
for filename in filenames:
filepaths.extend([os.path.join(directory, filename)])
subfiles = [item for item in filepaths if os.path.splitext(item)[1] in sub_ext]
subfiles.sort() #This should sort subtitle names by language (alpha) and Number (where multiple)
renamed = []
for sub in subfiles:
subname, ext = os.path.splitext(os.path.basename(sub))
if name in subname: # The sub file name already includes the video name.
continue
words = re.findall('[a-zA-Z]+',str(subname)) # find whole words in string
# parse the words for language descriptors.
lan = None
for word in words:
try:
if len(word) == 2:
lan = Language.fromalpha2(word.lower())
elif len(word) == 3:
lan = Language(word.lower())
elif len(word) > 3:
lan = Language.fromname(word.lower())
if lan:
break
except: #if we didn't find a language, try next word.
continue
# rename the sub file as name.lan.ext
if not lan:
# could call ffprobe to parse the sub information and get language if lan unknown here.
new_sub_name = name
else:
new_sub_name = '{name}.{lan}'.format(name=name, lan=str(lan))
new_sub = os.path.join(directory, new_sub_name) # full path and name less ext
if '{new_sub}{ext}'.format(new_sub=new_sub, ext=ext) in renamed: # If duplicate names, add unique number before ext.
for i in range(1,len(renamed)+1):
if '{new_sub}.{i}{ext}'.format(new_sub=new_sub, i=i, ext=ext) in renamed:
continue
new_sub = '{new_sub}.{i}'.format(new_sub=new_sub, i=i)
break
new_sub = '{new_sub}{ext}'.format(new_sub=new_sub, ext=ext) # add extension now
if os.path.isfile(new_sub): # Don't copy over existing - final check.
logger.debug('Unable to rename sub file {old} as destination {new} already exists'.format(old=sub, new=new_sub))
continue
logger.debug('Renaming sub file from {old} to {new}'.format
(old=sub, new=new_sub))
renamed.append(new_sub)
try:
os.rename(sub, new_sub)
except Exception as error:
logger.error('Unable to rename sub file due to: {error}'.format(error=error))
return
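
For reference, a minimal sketch of the naming scheme the function above produces (file names and the language word are hypothetical; assumes babelfish as imported above):

from babelfish import Language
import os

video = 'Some.Movie.2019.mkv'
sub = 'Some Movie English.srt'  # language word found by the re.findall() scan
name = os.path.splitext(video)[0]
lan = Language.fromname('English')  # -> Language('eng')
new_sub = '{name}.{lan}{ext}'.format(name=name, lan=lan, ext=os.path.splitext(sub)[1])
# new_sub == 'Some.Movie.2019.eng.srt'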


@ -1,72 +0,0 @@
import os
import core
from core import logger
from core.auto_process.common import ProcessResult
from core.processor import nzb
from core.utils import (
get_dirs,
get_download_info,
)
try:
text_type = unicode
except NameError:
text_type = str
def process():
# Perform Manual Post-Processing
logger.warning(
'Invalid number of arguments received from client, switching to manual run mode ...')
# Post-Processing Result
result = ProcessResult(
message='',
status_code=0,
)
for section, subsections in core.SECTIONS.items():
for subsection in subsections:
if not core.CFG[section][subsection].isenabled():
continue
for dir_name in get_dirs(section, subsection, link='move'):
logger.info(
'Starting manual run for {0}:{1} - Folder: {2}'.format(
section, subsection, dir_name))
logger.info(
'Checking database for download info for {0} ...'.format(
os.path.basename(dir_name)))
core.DOWNLOAD_INFO = get_download_info(
os.path.basename(dir_name), 0)
if core.DOWNLOAD_INFO:
logger.info('Found download info for {0}, '
'setting variables now ...'.format
(os.path.basename(dir_name)))
client_agent = text_type(
core.DOWNLOAD_INFO[0]['client_agent']) or 'manual'
download_id = text_type(
core.DOWNLOAD_INFO[0]['input_id']) or ''
else:
logger.info('Unable to locate download info for {0}, '
'continuing to try and process this release ...'.format
(os.path.basename(dir_name)))
client_agent = 'manual'
download_id = ''
if client_agent and client_agent.lower() not in core.NZB_CLIENTS:
continue
input_name = os.path.basename(dir_name)
results = nzb.process(dir_name, input_name, 0,
client_agent=client_agent,
download_id=download_id or None,
input_category=subsection)
if results.status_code != 0:
logger.error(
'A problem was reported when trying to perform a manual run for {0}:{1}.'.format
(section, subsection))
result = results
return result


@ -1,154 +0,0 @@
import datetime
import core
from core import logger, main_db
from core.auto_process import comics, games, movies, music, tv, books
from core.auto_process.common import ProcessResult
from core.plugins.downloaders.nzb.utils import get_nzoid
from core.plugins.plex import plex_update
from core.user_scripts import external_script
from core.utils import (
char_replace,
clean_dir,
convert_to_ascii,
extract_files,
update_download_info_status,
)
try:
text_type = unicode
except NameError:
text_type = str
def process(input_directory, input_name=None, status=0, client_agent='manual', download_id=None, input_category=None, failure_link=None):
if core.SAFE_MODE and input_directory == core.NZB_DEFAULT_DIRECTORY:
logger.error(
'The input directory:[{0}] is the Default Download Directory. Please configure category directories to prevent processing of other media.'.format(
input_directory))
return ProcessResult(
message='',
status_code=-1,
)
if not download_id and client_agent == 'sabnzbd':
download_id = get_nzoid(input_name)
if client_agent != 'manual' and not core.DOWNLOAD_INFO:
logger.debug('Adding NZB download info for directory {0} to database'.format(input_directory))
my_db = main_db.DBConnection()
input_directory1 = input_directory
input_name1 = input_name
try:
encoded, input_directory1 = char_replace(input_directory)
encoded, input_name1 = char_replace(input_name)
except Exception:
pass
control_value_dict = {'input_directory': text_type(input_directory1)}
new_value_dict = {
'input_name': text_type(input_name1),
'input_hash': text_type(download_id),
'input_id': text_type(download_id),
'client_agent': text_type(client_agent),
'status': 0,
'last_update': datetime.date.today().toordinal(),
}
my_db.upsert('downloads', new_value_dict, control_value_dict)
# auto-detect section
if input_category is None:
input_category = 'UNCAT'
usercat = input_category
section = core.CFG.findsection(input_category).isenabled()
if section is None:
section = core.CFG.findsection('ALL').isenabled()
if section is None:
logger.error(
'Category:[{0}] is not defined or is not enabled. Please rename it or ensure it is enabled for the appropriate section in your autoProcessMedia.cfg and try again.'.format(
input_category))
return ProcessResult(
message='',
status_code=-1,
)
else:
usercat = 'ALL'
if len(section) > 1:
logger.error(
'Category:[{0}] is not unique, {1} are using it. Please rename it or disable all other sections using the same category name in your autoProcessMedia.cfg and try again.'.format(
input_category, section.keys()))
return ProcessResult(
message='',
status_code=-1,
)
if section:
section_name = section.keys()[0]
logger.info('Auto-detected SECTION:{0}'.format(section_name))
else:
logger.error('Unable to locate a section with subsection:{0} enabled in your autoProcessMedia.cfg, exiting!'.format(
input_category))
return ProcessResult(
status_code=-1,
message='',
)
cfg = dict(core.CFG[section_name][usercat])
extract = int(cfg.get('extract', 0))
try:
if int(cfg.get('remote_path')) and not core.REMOTE_PATHS:
logger.error('Remote Path is enabled for {0}:{1} but no Network mount points are defined. Please check your autoProcessMedia.cfg, exiting!'.format(
section_name, input_category))
return ProcessResult(
status_code=-1,
message='',
)
except Exception:
logger.error('Remote Path {0} is not valid for {1}:{2} Please set this to either 0 to disable or 1 to enable!'.format(
cfg.get('remote_path'), section_name, input_category))
input_name, input_directory = convert_to_ascii(input_name, input_directory)
if extract == 1 and not (status > 0 and core.NOEXTRACTFAILED):
logger.debug('Checking for archives to extract in directory: {0}'.format(input_directory))
extract_files(input_directory)
logger.info('Calling {0}:{1} to post-process:{2}'.format(section_name, input_category, input_name))
if section_name in ['CouchPotato', 'Radarr', 'Watcher3']:
result = movies.process(section_name, input_directory, input_name, status, client_agent, download_id, input_category, failure_link)
elif section_name in ['SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr']:
result = tv.process(section_name, input_directory, input_name, status, client_agent, download_id, input_category, failure_link)
elif section_name in ['HeadPhones', 'Lidarr']:
result = music.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'Mylar':
result = comics.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'Gamez':
result = games.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'LazyLibrarian':
result = books.process(section_name, input_directory, input_name, status, client_agent, input_category)
elif section_name == 'UserScript':
result = external_script(input_directory, input_name, input_category, section[usercat])
else:
result = ProcessResult(
message='',
status_code=-1,
)
plex_update(input_category)
if result.status_code == 0:
if client_agent != 'manual':
# update download status in our DB
update_download_info_status(input_name, 1)
if section_name not in ['UserScript', 'NzbDrone', 'Sonarr', 'Radarr', 'Lidarr']:
# cleanup our processing folders of any misc unwanted files and empty directories
clean_dir(input_directory, section_name, input_category)
return result


@ -1,108 +0,0 @@
import os
import sys
import core
from core import logger
from core.processor import nzb
def parse_download_id():
"""Parse nzbget download_id from environment."""
download_id_keys = [
'NZBPR_COUCHPOTATO',
'NZBPR_DRONE',
'NZBPR_SONARR',
'NZBPR_RADARR',
'NZBPR_LIDARR',
]
for download_id_key in download_id_keys:
try:
return os.environ[download_id_key]
except KeyError:
pass
else:
return ''
def parse_failure_link():
"""Parse nzbget failure_link from environment."""
return os.environ.get('NZBPR__DNZB_FAILURE')
def _parse_total_status():
status_summary = os.environ['NZBPP_TOTALSTATUS']
if status_summary != 'SUCCESS':
status = os.environ['NZBPP_STATUS']
logger.info('Download failed with status {0}.'.format(status))
return 1
return 0
def _parse_par_status():
"""Parse nzbget par status from environment."""
par_status = os.environ['NZBPP_PARSTATUS']
if par_status == '1' or par_status == '4':
logger.warning('Par-repair failed, setting status \'failed\'')
return 1
return 0
def _parse_unpack_status():
if os.environ['NZBPP_UNPACKSTATUS'] == '1':
logger.warning('Unpack failed, setting status \'failed\'')
return 1
return 0
def _parse_health_status():
"""Parse nzbget download health from environment."""
status = 0
unpack_status_value = os.environ['NZBPP_UNPACKSTATUS']
par_status_value = os.environ['NZBPP_PARSTATUS']
if unpack_status_value == '0' and par_status_value == '0':
# Unpack was skipped due to nzb-file properties
# or due to errors during par-check
if int(os.environ['NZBPP_HEALTH']) < 1000:
logger.warning('Download health is compromised and Par-check/repair disabled or no .par2 files found. Setting status \'failed\'')
status = 1
else:
logger.info('Par-check/repair disabled or no .par2 files found, and Unpack not required. Health is ok so handle as though download successful')
logger.info('Please check your Par-check/repair settings for future downloads.')
return status
def parse_status():
if 'NZBPP_TOTALSTATUS' in os.environ: # Called from nzbget 13.0 or later
status = _parse_total_status()
else:
par_status = _parse_par_status()
unpack_status = _parse_unpack_status()
health_status = _parse_health_status()
status = par_status or unpack_status or health_status
return status
def check_version():
"""Check nzbget version and if version is unsupported, exit."""
version = os.environ['NZBOP_VERSION']
# Check if the script is called from nzbget 11.0 or later
if version[0:5] < '11.0':
logger.error('NZBGet Version {0} is not supported. Please update NZBGet.'.format(version))
sys.exit(core.NZBGET_POSTPROCESS_ERROR)
logger.info('Script triggered from NZBGet Version {0}.'.format(version))
def process():
check_version()
status = parse_status()
download_id = parse_download_id()
failure_link = parse_failure_link()
return nzb.process(
input_directory=os.environ['NZBPP_DIRECTORY'],
input_name=os.environ['NZBPP_NZBNAME'],
status=status,
client_agent='nzbget',
download_id=download_id,
input_category=os.environ['NZBPP_CATEGORY'],
failure_link=failure_link,
)
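
For context, a sketch of the NZBGet post-processing environment the functions above consume (all values hypothetical; NZBPP_TOTALSTATUS only exists on nzbget 13.0 and later):

import os
os.environ.update({
    'NZBOP_VERSION': '21.0',
    'NZBPP_TOTALSTATUS': 'SUCCESS',
    'NZBPP_DIRECTORY': '/downloads/complete/movies/Some.Movie.2019',
    'NZBPP_NZBNAME': 'Some.Movie.2019',
    'NZBPP_CATEGORY': 'movies',
    'NZBPR_SONARR': 'hypothetical-download-id',
})
result = process()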


@ -1,50 +0,0 @@
import os
from core import logger
from core.processor import nzb
# Constants
MINIMUM_ARGUMENTS = 8
def process_script():
version = os.environ['SAB_VERSION']
logger.info('Script triggered from SABnzbd {0}.'.format(version))
return nzb.process(
input_directory=os.environ['SAB_COMPLETE_DIR'],
input_name=os.environ['SAB_FINAL_NAME'],
status=int(os.environ['SAB_PP_STATUS']),
client_agent='sabnzbd',
download_id=os.environ['SAB_NZO_ID'],
input_category=os.environ['SAB_CAT'],
failure_link=os.environ['SAB_FAILURE_URL'],
)
def process(args):
"""
SABnzbd arguments:
1. The final directory of the job (full path)
2. The original name of the NZB file
3. Clean version of the job name (no path info and '.nzb' removed)
4. Indexer's report number (if supported)
5. User-defined category
6. Group that the NZB was posted in e.g. alt.binaries.x
7. Status of post processing:
0 = OK
1 = failed verification
2 = failed unpack
3 = 1+2
8. Failure URL
"""
version = '0.7.17+' if len(args) > MINIMUM_ARGUMENTS else ''
logger.info('Script triggered from SABnzbd {}'.format(version))
return nzb.process(
input_directory=args[1],
input_name=args[2],
status=int(args[7]),
input_category=args[5],
client_agent='sabnzbd',
download_id='',
failure_link=''.join(args[8:]),
)
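
To illustrate the argument layout documented above, the script's sys.argv would look roughly like this (all values hypothetical):

args = [
    'nzbToMedia.py',                         # 0: the script itself
    '/downloads/complete/tv/Show.S01E01',    # 1: final directory of the job
    'Show.S01E01.nzb',                       # 2: original NZB file name
    'Show.S01E01',                           # 3: clean job name
    '1234',                                  # 4: indexer report number
    'tv',                                    # 5: user-defined category
    'alt.binaries.example',                  # 6: group
    '0',                                     # 7: post-processing status (0 = OK)
    'https://indexer.example/failure?id=1',  # 8: failure URL (0.7.17+)
]
result = process(args)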


@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import platform
import re
@ -32,7 +25,7 @@ media_list = [r'\.s\d{2}e\d{2}\.', r'\.1080[pi]\.', r'\.720p\.', r'\.576[pi]', r
r'\.internal\.', r'\bac3\b', r'\.ntsc\.', r'\.pal\.', r'\.secam\.', r'\bdivx\b', r'\bxvid\b']
media_pattern = re.compile('|'.join(media_list), flags=re.IGNORECASE)
garbage_name = re.compile(r'^[a-zA-Z0-9]*$')
char_replace = [[r'(\w)1\.(\w)', r'\1i\2'],
char_replace = [[r'(\w)1\.(\w)', r'\1i\2']
]
@ -128,7 +121,7 @@ def reverse_filename(filename, dirname, name):
def rename_script(dirname):
rename_file = ''
for directory, _, files in os.walk(dirname):
for directory, directories, files in os.walk(dirname):
for file in files:
if re.search(r'(rename\S*\.(sh|bat)$)', file, re.IGNORECASE):
rename_file = os.path.join(directory, file)
@ -178,7 +171,7 @@ def par2(dirname):
cmd = ''
for item in command:
cmd = '{cmd} {item}'.format(cmd=cmd, item=item)
logger.debug('calling command:{0}'.format(cmd), 'PAR2')
logger.debug('calling command:{0}'.format(cmd), 'PAR2')
try:
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
proc.communicate()


@ -1,17 +1,8 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import errno
import json
import sys
import os
import time
import platform
import re
import shutil
@ -27,7 +18,7 @@ from core.utils import make_dir
__author__ = 'Justin'
def is_video_good(videofile, status, require_lan=None):
def is_video_good(videofile, status):
file_name_ext = os.path.basename(videofile)
file_name, file_ext = os.path.splitext(file_name_ext)
disable = False
@ -63,11 +54,7 @@ def is_video_good(videofile, status, require_lan=None):
if video_details.get('streams'):
video_streams = [item for item in video_details['streams'] if item['codec_type'] == 'video']
audio_streams = [item for item in video_details['streams'] if item['codec_type'] == 'audio']
if require_lan:
valid_audio = [item for item in audio_streams if 'tags' in item and 'language' in item['tags'] and item['tags']['language'] in require_lan ]
else:
valid_audio = audio_streams
if len(video_streams) > 0 and len(valid_audio) > 0:
if len(video_streams) > 0 and len(audio_streams) > 0:
logger.info('SUCCESS: [{0}] has no corruption.'.format(file_name_ext), 'TRANSCODER')
return True
else:
@ -79,10 +66,7 @@ def is_video_good(videofile, status, require_lan=None):
def zip_out(file, img, bitbucket):
procin = None
if os.path.isfile(file):
cmd = ['cat', file]
else:
cmd = [core.SEVENZIP, '-so', 'e', img, file]
cmd = [core.SEVENZIP, '-so', 'e', img, file]
try:
procin = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
except Exception:
@ -113,11 +97,12 @@ def get_video_details(videofile, img=None, bitbucket=None):
result = proc.returncode
video_details = json.loads(out.decode())
except Exception:
try: # try this again without -show error in case of ffmpeg limitation
pass
if not video_details:
try:
command = [core.FFPROBE, '-v', 'quiet', print_format, 'json', '-show_format', '-show_streams', videofile]
print_cmd(command)
if img:
procin = zip_out(file, img, bitbucket)
procin = zip_out(file, img)
proc = subprocess.Popen(command, stdout=subprocess.PIPE, stdin=procin.stdout)
procin.stdout.close()
else:
@ -130,21 +115,6 @@ def get_video_details(videofile, img=None, bitbucket=None):
return video_details, result
def check_vid_file(video_details, result):
if result != 0:
return False
if video_details.get('error'):
return False
if not video_details.get('streams'):
return False
video_streams = [item for item in video_details['streams'] if item['codec_type'] == 'video']
audio_streams = [item for item in video_details['streams'] if item['codec_type'] == 'audio']
if len(video_streams) > 0 and len(audio_streams) > 0:
return True
else:
return False
def build_commands(file, new_dir, movie_name, bitbucket):
if isinstance(file, string_types):
input_file = file
@ -162,18 +132,9 @@ def build_commands(file, new_dir, movie_name, bitbucket):
name = re.sub('([ ._=:-]+[cC][dD][0-9])', '', name)
if ext == core.VEXTENSION and new_dir == directory: # we need to change the name to prevent overwriting itself.
core.VEXTENSION = '-transcoded{ext}'.format(ext=core.VEXTENSION) # adds '-transcoded.ext'
new_file = file
else:
img, data = next(iteritems(file))
name = data['name']
new_file = []
rem_vid = []
for vid in data['files']:
video_details, result = get_video_details(vid, img, bitbucket)
if not check_vid_file(video_details, result): #lets not transcode menu or other clips that don't have audio and video.
rem_vid.append(vid)
data['files'] = [ f for f in data['files'] if f not in rem_vid ]
new_file = {img: {'name': data['name'], 'files': data['files']}}
video_details, result = get_video_details(data['files'][0], img, bitbucket)
input_file = '-'
file = '-'
@ -479,7 +440,7 @@ def build_commands(file, new_dir, movie_name, bitbucket):
burnt = 1
if not core.ALLOWSUBS:
break
if sub['codec_name'] in ['dvd_subtitle', 'dvb_subtitle', 'VobSub'] and core.SCODEC == 'mov_text': # We can't convert these.
if sub['codec_name'] in ['dvd_subtitle', 'VobSub'] and core.SCODEC == 'mov_text': # We can't convert these.
continue
map_cmd.extend(['-map', '0:{index}'.format(index=sub['index'])])
s_mapped.extend([sub['index']])
@ -490,15 +451,13 @@ def build_commands(file, new_dir, movie_name, bitbucket):
break
if sub['index'] in s_mapped:
continue
if sub['codec_name'] in ['dvd_subtitle', 'dvb_subtitle', 'VobSub'] and core.SCODEC == 'mov_text': # We can't convert these.
if sub['codec_name'] in ['dvd_subtitle', 'VobSub'] and core.SCODEC == 'mov_text': # We can't convert these.
continue
map_cmd.extend(['-map', '0:{index}'.format(index=sub['index'])])
s_mapped.extend([sub['index']])
if core.OUTPUTFASTSTART:
other_cmd.extend(['-movflags', '+faststart'])
if core.OTHEROPTS:
other_cmd.extend(core.OTHEROPTS)
command = [core.FFMPEG, '-loglevel', 'warning']
@ -516,7 +475,7 @@ def build_commands(file, new_dir, movie_name, bitbucket):
continue
if core.SCODEC == 'mov_text':
subcode = [stream['codec_name'] for stream in sub_details['streams']]
if set(subcode).intersection(['dvd_subtitle', 'dvb_subtitle', 'VobSub']): # We can't convert these.
if set(subcode).intersection(['dvd_subtitle', 'VobSub']): # We can't convert these.
continue
command.extend(['-i', subfile])
lan = os.path.splitext(os.path.splitext(subfile)[0])[1][1:].split('-')[0]
@ -552,7 +511,7 @@ def build_commands(file, new_dir, movie_name, bitbucket):
command.append(newfile_path)
if platform.system() != 'Windows':
command = core.NICENESS + command
return command, new_file
return command
def get_subs(file):
@ -560,7 +519,7 @@ def get_subs(file):
sub_ext = ['.srt', '.sub', '.idx']
name = os.path.splitext(os.path.split(file)[1])[0]
path = os.path.split(file)[0]
for directory, _, filenames in os.walk(path):
for directory, directories, filenames in os.walk(path):
for filename in filenames:
filepaths.extend([os.path.join(directory, filename)])
subfiles = [item for item in filepaths if os.path.splitext(item)[1] in sub_ext and name in item]
@ -611,7 +570,7 @@ def extract_subs(file, newfile_path, bitbucket):
result = 1 # set result to failed in case call fails.
try:
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
out, err = proc.communicate()
proc.communicate()
result = proc.returncode
except Exception:
logger.error('Extracting subtitle has failed')
@ -631,7 +590,6 @@ def process_list(it, new_dir, bitbucket):
new_list = []
combine = []
vts_path = None
mts_path = None
success = True
for item in it:
ext = os.path.splitext(item)[1].lower()
@ -647,14 +605,6 @@ def process_list(it, new_dir, bitbucket):
except Exception:
vts_path = os.path.split(item)[0]
rem_list.append(item)
elif re.match('.+BDMV[/\\]SOURCE[/\\][0-9]+[0-9].[Mm][Tt][Ss]', item) and '.mts' not in core.IGNOREEXTENSIONS:
logger.debug('Found MTS image file: {0}'.format(item), 'TRANSCODER')
if not mts_path:
try:
mts_path = re.match('(.+BDMV[/\\]SOURCE)', item).groups()[0]
except Exception:
mts_path = os.path.split(item)[0]
rem_list.append(item)
elif re.match('.+VIDEO_TS.', item) or re.match('.+VTS_[0-9][0-9]_[0-9].', item):
rem_list.append(item)
elif core.CONCAT and re.match('.+[cC][dD][0-9].', item):
@ -664,8 +614,6 @@ def process_list(it, new_dir, bitbucket):
continue
if vts_path:
new_list.extend(combine_vts(vts_path))
if mts_path:
new_list.extend(combine_mts(mts_path))
if combine:
new_list.extend(combine_cd(combine))
for file in new_list:
@ -684,118 +632,48 @@ def process_list(it, new_dir, bitbucket):
return it, rem_list, new_list, success
def mount_iso(item, new_dir, bitbucket): #Currently only supports Linux Mount when permissions allow.
if platform.system() == 'Windows':
logger.error('No mounting options available under Windows for image file {0}'.format(item), 'TRANSCODER')
return []
mount_point = os.path.join(os.path.dirname(os.path.abspath(item)),'temp')
make_dir(mount_point)
cmd = ['mount', '-o', 'loop', item, mount_point]
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
core.MOUNTED = mount_point # Allows us to verify this has been done and then cleanup.
for root, dirs, files in os.walk(mount_point):
for file in files:
full_path = os.path.join(root, file)
if re.match('.+VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]', full_path) and '.vob' not in core.IGNOREEXTENSIONS:
logger.debug('Found VIDEO_TS image file: {0}'.format(full_path), 'TRANSCODER')
try:
vts_path = re.match('(.+VIDEO_TS)', full_path).groups()[0]
except Exception:
vts_path = os.path.split(full_path)[0]
return combine_vts(vts_path)
elif re.match('.+BDMV[/\\]STREAM[/\\][0-9]+[0-9].[Mm]', full_path) and '.mts' not in core.IGNOREEXTENSIONS:
logger.debug('Found MTS image file: {0}'.format(full_path), 'TRANSCODER')
try:
mts_path = re.match('(.+BDMV[/\\]STREAM)', full_path).groups()[0]
except Exception:
mts_path = os.path.split(full_path)[0]
return combine_mts(mts_path)
logger.error('No VIDEO_TS or BDMV/SOURCE folder found in image file {0}'.format(mount_point), 'TRANSCODER')
return ['failure'] # If we got here, nothing matched our criteria
def rip_iso(item, new_dir, bitbucket):
new_files = []
failure_dir = 'failure'
# Mount the ISO in your OS and call combineVTS.
if not core.SEVENZIP:
logger.debug('No 7zip installed. Attempting to mount image file {0}'.format(item), 'TRANSCODER')
try:
new_files = mount_iso(item, new_dir, bitbucket) # Currently only works for Linux.
except Exception:
logger.error('Failed to mount and extract from image file {0}'.format(item), 'TRANSCODER')
new_files = [failure_dir]
logger.error('No 7zip installed. Can\'t extract image file {0}'.format(item), 'TRANSCODER')
new_files = [failure_dir]
return new_files
cmd = [core.SEVENZIP, 'l', item]
try:
logger.debug('Attempting to extract .vob or .mts from image file {0}'.format(item), 'TRANSCODER')
logger.debug('Attempting to extract .vob from image file {0}'.format(item), 'TRANSCODER')
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
file_match_gen = (
re.match(r'.+(VIDEO_TS[/\\]VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb])', line)
for line in out.decode().splitlines()
)
file_list = [
file_match.groups()[0]
for file_match in file_match_gen
if file_match
]
file_list = [re.match(r'.+(VIDEO_TS[/\\]VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb])', line.decode()).groups()[0] for line in
out.splitlines() if re.match(r'.+VIDEO_TS[/\\]VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]', line.decode())]
combined = []
if file_list: # handle DVD
for n in range(99):
concat = []
m = 1
while True:
vts_name = 'VIDEO_TS{0}VTS_{1:02d}_{2:d}.VOB'.format(os.sep, n + 1, m)
if vts_name in file_list:
concat.append(vts_name)
m += 1
else:
break
if not concat:
for n in range(99):
concat = []
m = 1
while True:
vts_name = 'VIDEO_TS{0}VTS_{1:02d}_{2:d}.VOB'.format(os.sep, n + 1, m)
if vts_name in file_list:
concat.append(vts_name)
m += 1
else:
break
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=os.path.splitext(os.path.split(item)[1])[0], x=n + 1
)
new_files.append({item: {'name': name, 'files': concat}})
else: #check BlueRay for BDMV/STREAM/XXXX.MTS
mts_list_gen = (
re.match(r'.+(BDMV[/\\]STREAM[/\\][0-9]+[0-9].[Mm]).', line)
for line in out.decode().splitlines()
if not concat:
break
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=os.path.splitext(os.path.split(item)[1])[0], x=n + 1
)
mts_list = [
file_match.groups()[0]
for file_match in mts_list_gen
if file_match
]
if sys.version_info[0] == 2: # Python2 sorting
mts_list.sort(key=lambda f: int(filter(str.isdigit, f))) # Sort all .mts files in numerical order
else: # Python3 sorting
mts_list.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
n = 0
for mts_name in mts_list:
concat = []
n += 1
concat.append(mts_name)
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=os.path.splitext(os.path.split(item)[1])[0], x=n
)
new_files.append({item: {'name': name, 'files': concat}})
if core.CONCAT and combined:
new_files.append({item: {'name': name, 'files': concat}})
if core.CONCAT:
name = os.path.splitext(os.path.split(item)[1])[0]
new_files.append({item: {'name': name, 'files': combined}})
if not new_files:
logger.error('No VIDEO_TS or BDMV/SOURCE folder found in image file. Attempting to mount and scan {0}'.format(item), 'TRANSCODER')
new_files = mount_iso(item, new_dir, bitbucket)
logger.error('No VIDEO_TS folder found in image file {0}'.format(item), 'TRANSCODER')
new_files = [failure_dir]
except Exception:
logger.error('Failed to extract from image file {0}'.format(item), 'TRANSCODER')
new_files = [failure_dir]
@ -804,69 +682,31 @@ def rip_iso(item, new_dir, bitbucket):
def combine_vts(vts_path):
new_files = []
combined = []
name = re.match(r'(.+)[/\\]VIDEO_TS', vts_path).groups()[0]
if os.path.basename(name) == 'temp':
name = os.path.basename(os.path.dirname(name))
else:
name = os.path.basename(name)
combined = ''
for n in range(99):
concat = []
concat = ''
m = 1
while True:
vts_name = 'VTS_{0:02d}_{1:d}.VOB'.format(n + 1, m)
if os.path.isfile(os.path.join(vts_path, vts_name)):
concat.append(os.path.join(vts_path, vts_name))
concat += '{file}|'.format(file=os.path.join(vts_path, vts_name))
m += 1
else:
break
if not concat:
break
if core.CONCAT:
combined.extend(concat)
combined += '{files}|'.format(files=concat)
continue
name = '{name}.cd{x}'.format(
name=name, x=n + 1
)
new_files.append({vts_path: {'name': name, 'files': concat}})
new_files.append('concat:{0}'.format(concat[:-1]))
if core.CONCAT:
new_files.append({vts_path: {'name': name, 'files': combined}})
return new_files
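
For reference, the two variants above return differently shaped entries (paths hypothetical): the master branch groups the raw .VOB paths per title set, while 12.0.8 emits ffmpeg concat-protocol strings.

# master:
{'/rips/MOVIE/VIDEO_TS': {'name': 'MOVIE.cd1',
                          'files': ['/rips/MOVIE/VIDEO_TS/VTS_01_1.VOB',
                                    '/rips/MOVIE/VIDEO_TS/VTS_01_2.VOB']}}
# 12.0.8:
'concat:/rips/MOVIE/VIDEO_TS/VTS_01_1.VOB|/rips/MOVIE/VIDEO_TS/VTS_01_2.VOB'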
def combine_mts(mts_path):
new_files = []
combined = []
name = re.match(r'(.+)[/\\]BDMV[/\\]STREAM', mts_path).groups()[0]
if os.path.basename(name) == 'temp':
name = os.path.basename(os.path.dirname(name))
else:
name = os.path.basename(name)
n = 0
mts_list = [f for f in os.listdir(mts_path) if os.path.isfile(os.path.join(mts_path, f))]
if sys.version_info[0] == 2: # Python2 sorting
mts_list.sort(key=lambda f: int(filter(str.isdigit, f)))
else: # Python3 sorting
mts_list.sort(key=lambda f: int(''.join(filter(str.isdigit, f))))
for mts_name in mts_list: ### need to sort all files [1 - 998].mts in order
concat = []
concat.append(os.path.join(mts_path, mts_name))
if core.CONCAT:
combined.extend(concat)
continue
name = '{name}.cd{x}'.format(
name=name, x=n + 1
)
new_files.append({mts_path: {'name': name, 'files': concat}})
n += 1
if core.CONCAT:
new_files.append({mts_path: {'name': name, 'files': combined}})
new_files.append('concat:{0}'.format(combined[:-1]))
return new_files
def combine_cd(combine):
new_files = []
for item in {re.match('(.+)[cC][dD][0-9].', item).groups()[0] for item in combine}:
for item in set([re.match('(.+)[cC][dD][0-9].', item).groups()[0] for item in combine]):
concat = ''
for n in range(99):
files = [file for file in combine if
@ -914,7 +754,7 @@ def transcode_directory(dir_name):
for file in file_list:
if isinstance(file, string_types) and os.path.splitext(file)[1] in core.IGNOREEXTENSIONS:
continue
command, file = build_commands(file, new_dir, movie_name, bitbucket)
command = build_commands(file, new_dir, movie_name, bitbucket)
newfile_path = command[-1]
# transcoding files may remove the original file, so make sure to extract subtitles first
@ -934,19 +774,16 @@ def transcode_directory(dir_name):
result = 1 # set result to failed in case call fails.
try:
if isinstance(file, string_types):
proc = subprocess.Popen(command, stdout=bitbucket, stderr=subprocess.PIPE)
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
else:
img, data = next(iteritems(file))
proc = subprocess.Popen(command, stdout=bitbucket, stderr=subprocess.PIPE, stdin=subprocess.PIPE)
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket, stdin=subprocess.PIPE)
for vob in data['files']:
procin = zip_out(vob, img, bitbucket)
if procin:
logger.debug('Feeding in file: {0} to Transcoder'.format(vob))
shutil.copyfileobj(procin.stdout, proc.stdin)
procin.stdout.close()
out, err = proc.communicate()
if err:
logger.error('Transcoder returned:{0} has failed'.format(err))
proc.communicate()
result = proc.returncode
except Exception:
logger.error('Transcoding of video {0} has failed'.format(newfile_path))
@ -975,15 +812,6 @@ def transcode_directory(dir_name):
logger.error('Transcoding of video to {0} failed with result {1}'.format(newfile_path, result))
# this will be 0 (successful) if all are successful, else will return a positive integer for failure.
final_result = final_result + result
if core.MOUNTED: # In case we mounted an .iso file, unmount here.
time.sleep(5) # play it safe and avoid failing to unmount.
cmd = ['umount', '-l', core.MOUNTED]
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
time.sleep(5)
os.rmdir(core.MOUNTED)
core.MOUNTED = None
if final_result == 0 and not core.DUPLICATE:
for file in rem_list:
try:
@ -993,7 +821,7 @@ def transcode_directory(dir_name):
if not os.listdir(text_type(new_dir)): # this is an empty directory and we didn't transcode into it.
os.rmdir(new_dir)
new_dir = dir_name
if not core.PROCESSOUTPUT and core.DUPLICATE: # We postprocess the original files to CP/SB
if not core.PROCESSOUTPUT and core.DUPLICATE: # We postprocess the original files to CP/SB
new_dir = dir_name
bitbucket.close()
return final_result, new_dir


@ -1,59 +1,38 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
from subprocess import Popen
import core
from core import logger, transcoder
from core.plugins.subtitles import import_subs
from core.utils import list_media_files, remove_dir
from core.auto_process.common import (
ProcessResult,
)
from core.utils import import_subs, list_media_files, remove_dir
def external_script(output_destination, torrent_name, torrent_label, settings):
final_result = 0 # start at 0.
num_files = 0
core.USER_SCRIPT_MEDIAEXTENSIONS = settings.get('user_script_mediaExtensions', '')
try:
core.USER_SCRIPT_MEDIAEXTENSIONS = settings['user_script_mediaExtensions'].lower()
if isinstance(core.USER_SCRIPT_MEDIAEXTENSIONS, str):
core.USER_SCRIPT_MEDIAEXTENSIONS = core.USER_SCRIPT_MEDIAEXTENSIONS.lower().split(',')
core.USER_SCRIPT_MEDIAEXTENSIONS = core.USER_SCRIPT_MEDIAEXTENSIONS.split(',')
except Exception:
logger.error('user_script_mediaExtensions could not be set', 'USERSCRIPT')
core.USER_SCRIPT_MEDIAEXTENSIONS = []
core.USER_SCRIPT = settings.get('user_script_path', '')
core.USER_SCRIPT = settings.get('user_script_path')
if not core.USER_SCRIPT or core.USER_SCRIPT == 'None':
# do nothing and return success. This allows the user an option to Link files only and not run a script.
return ProcessResult(
status_code=0,
message='No user script defined',
)
core.USER_SCRIPT_PARAM = settings.get('user_script_param', '')
if not core.USER_SCRIPT or core.USER_SCRIPT == 'None': # do nothing and return success.
return [0, '']
try:
core.USER_SCRIPT_PARAM = settings['user_script_param']
if isinstance(core.USER_SCRIPT_PARAM, str):
core.USER_SCRIPT_PARAM = core.USER_SCRIPT_PARAM.split(',')
except Exception:
logger.error('user_script_params could not be set', 'USERSCRIPT')
core.USER_SCRIPT_PARAM = []
core.USER_SCRIPT_SUCCESSCODES = settings.get('user_script_successCodes', 0)
try:
core.USER_SCRIPT_SUCCESSCODES = settings['user_script_successCodes']
if isinstance(core.USER_SCRIPT_SUCCESSCODES, str):
core.USER_SCRIPT_SUCCESSCODES = core.USER_SCRIPT_SUCCESSCODES.split(',')
except Exception:
logger.error('user_script_successCodes could not be set', 'USERSCRIPT')
core.USER_SCRIPT_SUCCESSCODES = 0
core.USER_SCRIPT_CLEAN = int(settings.get('user_script_clean', 1))
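
The settings mapping consumed above corresponds to the UserScript options in autoProcessMedia.cfg; a minimal sketch (all values hypothetical):

settings = {
    'user_script_mediaExtensions': '.mkv,.mp4,.avi',
    'user_script_path': '/scripts/my_script.sh',
    'user_script_param': 'FN',
    'user_script_successCodes': '0',
    'user_script_clean': '1',
}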
@ -67,12 +46,11 @@ def external_script(output_destination, torrent_name, torrent_label, settings):
logger.info('Corrupt video file found {0}. Deleting.'.format(video), 'USERSCRIPT')
os.unlink(video)
for dirpath, _, filenames in os.walk(output_destination):
for dirpath, dirnames, filenames in os.walk(output_destination):
for file in filenames:
file_path = core.os.path.join(dirpath, file)
file_name, file_extension = os.path.splitext(file)
logger.debug('Checking file {0} to see if this should be processed.'.format(file), 'USERSCRIPT')
if file_extension in core.USER_SCRIPT_MEDIAEXTENSIONS or 'all' in core.USER_SCRIPT_MEDIAEXTENSIONS:
num_files += 1
@ -123,7 +101,7 @@ def external_script(output_destination, torrent_name, torrent_label, settings):
final_result += result
num_files_new = 0
for _, _, filenames in os.walk(output_destination):
for dirpath, dirnames, filenames in os.walk(output_destination):
for file in filenames:
file_name, file_extension = os.path.splitext(file)
@ -136,7 +114,4 @@ def external_script(output_destination, torrent_name, torrent_label, settings):
elif core.USER_SCRIPT_CLEAN == int(1) and num_files_new != 0:
logger.info('{0} files were processed, but {1} still remain. outputDirectory will not be cleaned.'.format(
num_files, num_files_new))
return ProcessResult(
status_code=final_result,
message='User Script Completed',
)
return [final_result, '']


@ -1,12 +1,5 @@
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
from core.utils import shutil_custom
@ -26,6 +19,8 @@ from core.utils.identification import category_search, find_imdbid
from core.utils.links import copy_link, replace_links
from core.utils.naming import clean_file_name, is_sample, sanitize_name
from core.utils.network import find_download, server_responding, test_connection, wake_on_lan, wake_up
from core.utils.notifications import plex_update
from core.utils.nzbs import get_nzoid, report_nzb
from core.utils.parsers import (
parse_args,
parse_deluge,
@ -49,6 +44,8 @@ from core.utils.paths import (
remove_read_only,
)
from core.utils.processes import RunningProcess, restart
from core.utils.subtitles import import_subs
from core.utils.torrents import create_torrent_class, pause_torrent, remove_torrent, resume_torrent
requests.packages.urllib3.disable_warnings()
shutil_custom.monkey_patch()


@ -1,9 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os.path


@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import datetime
from six import text_type


@ -1,23 +1,12 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
from six import text_type
from six import PY2
import core
from core import logger
if not PY2:
from builtins import bytes
def char_replace(name_in):
def char_replace(name):
# Special character hex range:
# CP850: 0x80-0xA5 (fortunately not used in ISO-8859-15)
# UTF-8: 1st hex code 0xC2-0xC3 followed by a 2nd hex code 0xA1-0xFF
@ -26,69 +15,36 @@ def char_replace(name_in):
# If there is special character, detects if it is a UTF-8, CP850 or ISO-8859-15 encoding
encoded = False
encoding = None
if isinstance(name_in, text_type):
return encoded, name_in
if PY2:
name = name_in
for Idx in range(len(name)):
# print('Trying to intuit the encoding')
# /!\ detection is done 2char by 2char for UTF-8 special character
if (len(name) != 1) & (Idx < (len(name) - 1)):
# Detect UTF-8
if ((name[Idx] == '\xC2') | (name[Idx] == '\xC3')) & (
(name[Idx + 1] >= '\xA0') & (name[Idx + 1] <= '\xFF')):
encoding = 'utf-8'
break
# Detect CP850
elif (name[Idx] >= '\x80') & (name[Idx] <= '\xA5'):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= '\xA6') & (name[Idx] <= '\xFF'):
encoding = 'iso-8859-15'
break
else:
# Detect CP850
if (name[Idx] >= '\x80') & (name[Idx] <= '\xA5'):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= '\xA6') & (name[Idx] <= '\xFF'):
encoding = 'iso-8859-15'
break
else:
name = bytes(name_in)
for Idx in range(len(name)):
# print('Trying to intuit the encoding')
# /!\ detection is done 2char by 2char for UTF-8 special character
if (len(name) != 1) & (Idx < (len(name) - 1)):
# Detect UTF-8
if ((name[Idx] == 0xC2) | (name[Idx] == 0xC3)) & (
(name[Idx + 1] >= 0xA0) & (name[Idx + 1] <= 0xFF)):
encoding = 'utf-8'
break
# Detect CP850
elif (name[Idx] >= 0x80) & (name[Idx] <= 0xA5):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= 0xA6) & (name[Idx] <= 0xFF):
encoding = 'iso-8859-15'
break
else:
# Detect CP850
if (name[Idx] >= 0x80) & (name[Idx] <= 0xA5):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= 0xA6) & (name[Idx] <= 0xFF):
encoding = 'iso-8859-15'
break
if encoding:
if isinstance(name, text_type):
return encoded, name.encode(core.SYS_ENCODING)
for Idx in range(len(name)):
# /!\ detection is done 2char by 2char for UTF-8 special character
if (len(name) != 1) & (Idx < (len(name) - 1)):
# Detect UTF-8
if ((name[Idx] == '\xC2') | (name[Idx] == '\xC3')) & (
(name[Idx + 1] >= '\xA0') & (name[Idx + 1] <= '\xFF')):
encoding = 'utf-8'
break
# Detect CP850
elif (name[Idx] >= '\x80') & (name[Idx] <= '\xA5'):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= '\xA6') & (name[Idx] <= '\xFF'):
encoding = 'iso-8859-15'
break
else:
# Detect CP850
if (name[Idx] >= '\x80') & (name[Idx] <= '\xA5'):
encoding = 'cp850'
break
# Detect ISO-8859-15
elif (name[Idx] >= '\xA6') & (name[Idx] <= '\xFF'):
encoding = 'iso-8859-15'
break
if encoding and not encoding == core.SYS_ENCODING:
encoded = True
name = name.decode(encoding)
elif not PY2:
name = name.decode()
name = name.decode(encoding).encode(core.SYS_ENCODING)
return encoded, name
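
As a worked example of the byte ranges noted above, the CP850 byte 0x82 ('é') falls inside 0x80-0xA5, so detection picks 'cp850' (input hypothetical):

encoded, name = char_replace(b'Caf\x82')
# master: returns (True, 'Café'); 12.0.8 instead re-encodes the decoded
# text to core.SYS_ENCODING before returning it.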
@ -112,14 +68,14 @@ def convert_to_ascii(input_name, dir_name):
if 'NZBOP_SCRIPTDIR' in os.environ:
print('[NZB] DIRECTORY={0}'.format(dir_name))
for dirname, dirnames, _ in os.walk(dir_name, topdown=False):
for dirname, dirnames, filenames in os.walk(dir_name, topdown=False):
for subdirname in dirnames:
encoded, subdirname2 = char_replace(subdirname)
if encoded:
logger.info('Renaming directory to: {0}.'.format(subdirname2), 'ENCODER')
os.rename(os.path.join(dirname, subdirname), os.path.join(dirname, subdirname2))
for dirname, _, filenames in os.walk(dir_name):
for dirname, dirnames, filenames in os.walk(dir_name):
for filename in filenames:
encoded, filename2 = char_replace(filename)
if encoded:


@ -1,17 +1,10 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import re
import shutil
import stat
import time
import mediafile as mediafiletool
import beets.mediafile
import guessit
from six import text_type
@ -28,7 +21,7 @@ def move_file(mediafile, path, link):
file_ext = os.path.splitext(mediafile)[1]
try:
if file_ext in core.AUDIO_CONTAINER:
f = mediafiletool.MediaFile(mediafile)
f = beets.mediafile.MediaFile(mediafile)
# get artist and album info
artist = f.artist
@ -53,11 +46,10 @@ def move_file(mediafile, path, link):
title = os.path.splitext(os.path.basename(mediafile))[0]
new_path = os.path.join(path, sanitize_name(title))
# Removed as encoding of directory no-longer required
#try:
# new_path = new_path.encode(core.SYS_ENCODING)
#except Exception:
# pass
try:
new_path = new_path.encode(core.SYS_ENCODING)
except Exception:
pass
# Just a fail-safe in case we already have a file with this clean name (was actually a bug in earlier code, but let's be safe).
if os.path.isfile(new_path):
@ -96,7 +88,7 @@ def is_min_size(input_name, min_size):
def is_archive_file(filename):
"""Check if the filename is allowed for the Archive."""
"""Check if the filename is allowed for the Archive"""
for regext in core.COMPRESSED_CONTAINER:
if regext.search(filename):
return regext.split(filename)[0]


@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import re
@ -12,6 +5,7 @@ import guessit
import requests
from six import text_type
import core
from core import logger
from core.utils.naming import sanitize_name
@ -23,18 +17,18 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
# find imdbid in dirName
logger.info('Searching folder and file names for imdbID ...')
m = re.search(r'\b(tt\d{7,8})\b', dir_name + input_name)
m = re.search(r'(tt\d{7})', dir_name + input_name)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}]'.format(imdbid))
return imdbid, dir_name
return imdbid
if os.path.isdir(dir_name):
for file in os.listdir(text_type(dir_name)):
m = re.search(r'\b(tt\d{7,8})\b', file)
m = re.search(r'(tt\d{7})', file)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}] via file name'.format(imdbid))
return imdbid, dir_name
return imdbid
if 'NZBPR__DNZB_MOREINFO' in os.environ:
dnzb_more_info = os.environ.get('NZBPR__DNZB_MOREINFO', '')
if dnzb_more_info != '':
@ -43,7 +37,7 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}] from DNZB-MoreInfo'.format(imdbid))
return imdbid, dir_name
return imdbid
logger.info('Searching IMDB for imdbID ...')
try:
guess = guessit.guessit(input_name)
@ -63,8 +57,8 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
url = 'http://www.omdbapi.com'
if not omdb_api_key:
logger.info('Unable to determine imdbID: No api key provided for omdbapi.com.')
return imdbid, dir_name
logger.info('Unable to determine imdbID: No api key provided for ombdapi.com.')
return
logger.debug('Opening URL: {0}'.format(url))
@ -73,7 +67,7 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
verify=False, timeout=(60, 300))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url))
return imdbid, dir_name
return
try:
results = r.json()
@ -87,17 +81,24 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
if imdbid:
logger.info('Found imdbID [{0}]'.format(imdbid))
new_dir_name = '{}.cp({})'.format(dir_name, imdbid)
os.rename(dir_name, new_dir_name)
return imdbid, new_dir_name
return imdbid
logger.warning('Unable to find an imdbID for {0}'.format(input_name))
return imdbid, dir_name
return imdbid
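
To illustrate the widened pattern on master (sample string hypothetical):

import re
m = re.search(r'\b(tt\d{7,8})\b', 'Some.Movie.2019.cp(tt0133093)')
# m.group(1) == 'tt0133093'; the 12.0.8 pattern r'(tt\d{7})' also matches
# here, but would truncate newer 8-digit ids such as tt10872600.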
def category_search(input_directory, input_name, input_category, root, categories):
tordir = False
try:
input_name = input_name.encode(core.SYS_ENCODING)
except Exception:
pass
try:
input_directory = input_directory.encode(core.SYS_ENCODING)
except Exception:
pass
if input_directory is None: # =Nothing to process here.
return input_directory, input_name, input_category, root
@ -146,15 +147,6 @@ def category_search(input_directory, input_name, input_category, root, categorie
input_directory = os.path.join(input_directory, sanitize_name(input_name))
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
tordir = True
elif input_name and os.path.isdir(input_directory):
for file in os.listdir(text_type(input_directory)):
if os.path.splitext(file)[0] in [input_name, sanitize_name(input_name)]:
logger.info('SEARCH: Found torrent file {0} in input directory directory {1}'.format(file, input_directory))
input_directory = os.path.join(input_directory, file)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
input_name = file
tordir = True
break
imdbid = [item for item in pathlist if '.cp(tt' in item] # This looks for the .cp(tt imdb id in the path.
if imdbid and '.cp(tt' not in input_name:

View file

@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import shutil


@ -1,17 +1,9 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import re
import core
def sanitize_name(name):
"""
Remove bad chars from the filename.
>>> sanitize_name('a/b/c')
'a-b-c'
>>> sanitize_name('abc')
@ -21,22 +13,29 @@ def sanitize_name(name):
>>> sanitize_name('.a.b..')
'a.b'
"""
# remove bad chars from the filename
name = re.sub(r'[\\/*]', '-', name)
name = re.sub(r'[:\'<>|?]', '', name)
# remove leading/trailing periods and spaces
name = name.strip(' .')
try:
name = name.encode(core.SYS_ENCODING)
except Exception:
pass
return name
def clean_file_name(filename):
"""
Clean up nzb name by removing any . and _ characters and trailing hyphens.
"""Cleans up nzb name by removing any . and _
characters, along with any trailing hyphens.
Is basically equivalent to replacing all _ and . with a
space, but handles decimal numbers in string, for example:
"""
filename = re.sub(r'(\D)\.(?!\s)(\D)', r'\1 \2', filename)
filename = re.sub(r'(\d)\.(\d{4})', r'\1 \2', filename) # if it ends in a year then don't keep the dot
filename = re.sub(r'(\D)\.(?!\s)', r'\1 ', filename)
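
A quick trace of the substitutions above (sample name hypothetical; per the docstring, the full function also handles underscores and trailing hyphens):

# clean_file_name('Terminator.2.1991')
#   -> 'Terminator.2 1991'   (digit-dot-year rule keeps the '2' intact)
#   -> 'Terminator 2 1991'   (remaining non-digit dots become spaces)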


@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import socket
import struct
import time


@ -1,34 +1,9 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import requests
import core
from core import logger
def configure_plex(config):
core.PLEX_SSL = int(config['Plex']['plex_ssl'])
core.PLEX_HOST = config['Plex']['plex_host']
core.PLEX_PORT = config['Plex']['plex_port']
core.PLEX_TOKEN = config['Plex']['plex_token']
plex_section = config['Plex']['plex_sections'] or []
if plex_section:
if isinstance(plex_section, list):
plex_section = ','.join(plex_section) # fix in case this imported as list.
plex_section = [
tuple(item.split(','))
for item in plex_section.split('|')
]
core.PLEX_SECTION = plex_section
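
The plex_sections value parsed above is a pipe-separated list of comma pairs; for example (values hypothetical):

# plex_sections = 'movie,1|tv,2'
# -> core.PLEX_SECTION == [('movie', '1'), ('tv', '2')]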
def plex_update(category):
if core.FAILED:
return
@ -51,3 +26,5 @@ def plex_update(category):
logger.debug('Plex Library has been refreshed.', 'PLEX')
else:
logger.debug('Could not identify section for plex update', 'PLEX')

View file

@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import requests


@ -1,14 +1,6 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import core
from core import logger
def parse_other(args):
@ -66,7 +58,7 @@ def parse_deluge(args):
input_hash = args[1]
input_id = args[1]
try:
input_category = core.TORRENT_CLASS.core.get_torrent_status(input_id, ['label']).get(b'label').decode()
input_category = core.TORRENT_CLASS.core.get_torrent_status(input_id, ['label']).get()['label']
except Exception:
input_category = ''
return input_directory, input_name, input_category, input_hash, input_id
@ -82,36 +74,6 @@ def parse_transmission(args):
return input_directory, input_name, input_category, input_hash, input_id
def parse_synods(args):
# Synology/Transmission usage: call TorrentToMedia.py (%TR_TORRENT_DIR% and %TR_TORRENT_NAME% are passed on as environment variables)
input_directory = ''
input_id = ''
input_category = ''
input_name = os.getenv('TR_TORRENT_NAME')
input_hash = os.getenv('TR_TORRENT_HASH')
if not input_name: # No info passed. Assume manual download.
return input_directory, input_name, input_category, input_hash, input_id
input_id = 'dbid_{0}'.format(os.getenv('TR_TORRENT_ID'))
#res = core.TORRENT_CLASS.tasks_list(additional_param='detail')
res = core.TORRENT_CLASS.tasks_info(input_id, additional_param='detail')
logger.debug('result from syno {0}'.format(res))
if res['success']:
try:
tasks = res['data']['tasks']
task = [ task for task in tasks if task['id'] == input_id ][0]
input_id = task['id']
input_directory = task['additional']['detail']['destination']
except:
logger.error('unable to find download details in Synology DS')
#Syno paths appear to be relative. Let's test to see if the returned path exists, and if not append to /volume1/
if not os.path.isdir(input_directory):
for root in ['/volume1/', '/volume2/', '/volume3/', '/volume4/']:
if os.path.isdir(os.path.join(root, input_directory)):
input_directory = os.path.join(root, input_directory)
break
return input_directory, input_name, input_category, input_hash, input_id
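
In other words, the fallback above probes the standard Synology volume roots: a task destination of 'downloads/movies' (hypothetical) resolves to '/volume1/downloads/movies' if that directory exists, otherwise /volume2/ through /volume4/ are tried in turn.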
def parse_vuze(args):
# vuze usage: C:\full\path\to\nzbToMedia\TorrentToMedia.py '%D%N%L%I%K%F'
try:
@ -158,11 +120,7 @@ def parse_qbittorrent(args):
except Exception:
input_directory = ''
try:
input_name = cur_input[1]
if input_name[0] == '\'':
input_name = input_name[1:]
if input_name[-1] == '\'':
input_name = input_name[:-1]
input_name = cur_input[1].replace('\'', '')
except Exception:
input_name = ''
try:
@ -190,7 +148,6 @@ def parse_args(client_agent, args):
'transmission': parse_transmission,
'qbittorrent': parse_qbittorrent,
'vuze': parse_vuze,
'synods': parse_synods,
}
try:


@ -1,9 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from functools import partial
import os
@ -73,14 +67,14 @@ def remote_dir(path):
def get_dir_size(input_path):
prepend = partial(os.path.join, input_path)
return sum(
return sum([
(os.path.getsize(f) if os.path.isfile(f) else get_dir_size(f))
for f in map(prepend, os.listdir(text_type(input_path)))
)
])
def remove_empty_folders(path, remove_root=True):
"""Remove empty folders."""
"""Function to remove empty folders"""
if not os.path.isdir(path):
return


@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import socket
import subprocess
@ -54,7 +47,7 @@ class PosixProcess(object):
self.lasterror = False
return self.lasterror
except socket.error as e:
if 'Address already in use' in str(e):
if 'Address already in use' in e:
self.lasterror = True
return self.lasterror
except AttributeError:


@ -1,10 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
from functools import partial
import shutil
from six import PY2

core/utils/subtitles.py Normal file

@ -0,0 +1,31 @@
from babelfish import Language
import subliminal
import core
from core import logger
def import_subs(filename):
if not core.GETSUBS:
return
try:
subliminal.region.configure('dogpile.cache.dbm', arguments={'filename': 'cachefile.dbm'})
except Exception:
pass
languages = set()
for item in core.SLANGUAGES:
try:
languages.add(Language(item))
except Exception:
pass
if not languages:
return
logger.info('Attempting to download subtitles for {0}'.format(filename), 'SUBTITLES')
try:
video = subliminal.scan_video(filename)
subtitles = subliminal.download_best_subtitles({video}, languages)
subliminal.save_subtitles(video, subtitles[video])
except Exception as e:
logger.error('Failed to download subtitles for {0} due to: {1}'.format(filename, e), 'SUBTITLES')
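
Typical use from the post-processing flow (path hypothetical):

import_subs('/downloads/complete/movies/Some.Movie.2019/Some.Movie.2019.mkv')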


@ -1,37 +1,55 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import time
from qbittorrent import Client as qBittorrentClient
from synchronousdeluge.client import DelugeClient
from transmissionrpc.client import Client as TransmissionClient
from utorrent.client import UTorrentClient
import core
from core import logger
from .deluge import configure_client as deluge_client
from .qbittorrent import configure_client as qbittorrent_client
from .transmission import configure_client as transmission_client
from .utorrent import configure_client as utorrent_client
from .synology import configure_client as synology_client
torrent_clients = {
'deluge': deluge_client,
'qbittorrent': qbittorrent_client,
'transmission': transmission_client,
'utorrent': utorrent_client,
'synods': synology_client,
}
def create_torrent_class(client_agent):
if not core.APP_NAME == 'TorrentToMedia.py':
return # Skip loading Torrent for NZBs.
# Hardlink solution for Torrents
tc = None
if not core.APP_NAME == 'TorrentToMedia.py': #Skip loading Torrent for NZBs.
return tc
client = torrent_clients.get(client_agent)
if client:
return client()
if client_agent == 'utorrent':
try:
logger.debug('Connecting to {0}: {1}'.format(client_agent, core.UTORRENT_WEB_UI))
tc = UTorrentClient(core.UTORRENT_WEB_UI, core.UTORRENT_USER, core.UTORRENT_PASSWORD)
except Exception:
logger.error('Failed to connect to uTorrent')
if client_agent == 'transmission':
try:
logger.debug('Connecting to {0}: http://{1}:{2}'.format(
client_agent, core.TRANSMISSION_HOST, core.TRANSMISSION_PORT))
tc = TransmissionClient(core.TRANSMISSION_HOST, core.TRANSMISSION_PORT,
core.TRANSMISSION_USER,
core.TRANSMISSION_PASSWORD)
except Exception:
logger.error('Failed to connect to Transmission')
if client_agent == 'deluge':
try:
logger.debug('Connecting to {0}: http://{1}:{2}'.format(client_agent, core.DELUGE_HOST, core.DELUGE_PORT))
tc = DelugeClient()
tc.connect(host=core.DELUGE_HOST, port=core.DELUGE_PORT, username=core.DELUGE_USER,
password=core.DELUGE_PASSWORD)
except Exception:
logger.error('Failed to connect to Deluge')
if client_agent == 'qbittorrent':
try:
logger.debug('Connecting to {0}: http://{1}:{2}'.format(client_agent, core.QBITTORRENT_HOST, core.QBITTORRENT_PORT))
tc = qBittorrentClient('http://{0}:{1}/'.format(core.QBITTORRENT_HOST, core.QBITTORRENT_PORT))
tc.login(core.QBITTORRENT_USER, core.QBITTORRENT_PASSWORD)
except Exception:
logger.error('Failed to connect to qBittorrent')
return tc
def pause_torrent(client_agent, input_hash, input_id, input_name):
@ -41,8 +59,6 @@ def pause_torrent(client_agent, input_hash, input_id, input_name):
core.TORRENT_CLASS.stop(input_hash)
if client_agent == 'transmission' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.stop_torrent(input_id)
if client_agent == 'synods' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.pause_task(input_id)
if client_agent == 'deluge' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.core.pause_torrent([input_id])
if client_agent == 'qbittorrent' and core.TORRENT_CLASS != '':
@ -61,8 +77,6 @@ def resume_torrent(client_agent, input_hash, input_id, input_name):
core.TORRENT_CLASS.start(input_hash)
if client_agent == 'transmission' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.start_torrent(input_id)
if client_agent == 'synods' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.resume_task(input_id)
if client_agent == 'deluge' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.core.resume_torrent([input_id])
if client_agent == 'qbittorrent' and core.TORRENT_CLASS != '':
@ -81,8 +95,6 @@ def remove_torrent(client_agent, input_hash, input_id, input_name):
core.TORRENT_CLASS.remove(input_hash)
if client_agent == 'transmission' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.remove_torrent(input_id, True)
if client_agent == 'synods' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.delete_task(input_id)
if client_agent == 'deluge' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.core.remove_torrent(input_id, True)
if client_agent == 'qbittorrent' and core.TORRENT_CLASS != '':

View file

@ -2,13 +2,6 @@
# Author: Nic Wolfe <nic@wolfeden.ca>
# Modified by: echel0n
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import platform
import re
@ -26,7 +19,9 @@ from core import github_api as github, logger
class CheckVersion(object):
"""Version checker that runs in a thread with the SB scheduler."""
"""
Version check class meant to run as a thread object with the SB scheduler.
"""
def __init__(self):
self.install_type = self.find_install_type()
@ -45,15 +40,16 @@ class CheckVersion(object):
def find_install_type(self):
"""
Determine how this copy of SB was installed.
Determines how this copy of SB was installed.
returns: type of installation. Possible values are:
'win': any compiled windows build
'git': running from source using git
'source': running from source without git
"""
# check if we're a windows build
if os.path.exists(os.path.join(core.APP_ROOT, u'.git')):
if os.path.isdir(os.path.join(core.APP_ROOT, u'.git')):
install_type = 'git'
else:
install_type = 'source'
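# A standalone sketch of the same detection, assuming only the standard
# library; the real method also accounts for compiled Windows builds.
import os

def find_install_type_sketch(app_root):
    # A `.git` directory means we are running from a git checkout.
    if os.path.isdir(os.path.join(app_root, '.git')):
        return 'git'
    return 'source'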
@ -62,12 +58,13 @@ class CheckVersion(object):
def check_for_new_version(self, force=False):
"""
Check the internet for a newer version.
Checks the internet for a newer version.
returns: bool, True for new version or False for no new version.
force: if true the VERSION_NOTIFY setting will be ignored and a check will be forced
"""
if not core.VERSION_NOTIFY and not force:
logger.log(u'Version checking is disabled, not checking for the newest version')
return False
@ -202,8 +199,8 @@ class GitUpdateManager(UpdateManager):
logger.log(u'{cmd} : returned successful'.format(cmd=cmd), logger.DEBUG)
exit_status = 0
elif core.LOG_GIT and exit_status in (1, 128):
logger.log(u'{cmd} returned : {output}'.format
           (cmd=cmd, output=output), logger.DEBUG)
else:
if core.LOG_GIT:
logger.log(u'{cmd} returned : {output}, treat as error for now'.format
@ -214,12 +211,13 @@ class GitUpdateManager(UpdateManager):
def _find_installed_version(self):
"""
Attempt to find the currently installed version of Sick Beard.
Attempts to find the currently installed version of Sick Beard.
Uses git show to get commit version.
Returns: True for success or False for failure
"""
output, err, exit_status = self._run_git(self._git_path, 'rev-parse HEAD') # @UnusedVariable
if exit_status == 0 and output:
@ -246,12 +244,10 @@ class GitUpdateManager(UpdateManager):
def _check_github_for_update(self):
"""
Check Github for a new version.
Uses git commands to check if there is a newer version than
the provided commit hash. If there is a newer version it
sets _num_commits_behind.
Uses git commands to check if there is a newer version than the provided
commit hash. If there is a newer version it sets _num_commits_behind.
"""
self._newest_commit_hash = None
self._num_commits_behind = 0
self._num_commits_ahead = 0
@ -328,11 +324,10 @@ class GitUpdateManager(UpdateManager):
def update(self):
"""
Check git for a new version.
Calls git pull origin <branch> in order to update Sick Beard.
Returns a bool depending on the call's success.
Calls git pull origin <branch> in order to update Sick Beard. Returns a bool depending
on the call's success.
"""
output, err, exit_status = self._run_git(self._git_path, 'pull origin {branch}'.format(branch=self.branch)) # @UnusedVariable
if exit_status == 0:
@ -387,14 +382,12 @@ class SourceUpdateManager(UpdateManager):
def _check_github_for_update(self):
"""
Check Github for a new version.
Uses pygithub to ask github if there is a newer version than
the provided commit hash. If there is a newer version it sets
Sick Beard's version text.
Uses pygithub to ask github if there is a newer version than the provided
commit hash. If there is a newer version it sets Sick Beard's version text.
commit_hash: hash that we're checking against
"""
self._num_commits_behind = 0
self._newest_commit_hash = None
@ -442,7 +435,9 @@ class SourceUpdateManager(UpdateManager):
return
def update(self):
"""Download and install latest source tarball from github."""
"""
Downloads the latest source tarball from github and installs it over the existing version.
"""
tar_download_url = 'https://github.com/{org}/{repo}/tarball/{branch}'.format(
org=self.github_repo_user, repo=self.github_repo, branch=self.branch)
version_path = os.path.join(core.APP_ROOT, u'version.txt')
@ -494,7 +489,7 @@ class SourceUpdateManager(UpdateManager):
# walk temp folder and move files to main folder
logger.log(u'Moving files from {source} to {destination}'.format
(source=content_dir, destination=core.APP_ROOT))
for dirname, _, filenames in os.walk(content_dir): # @UnusedVariable
for dirname, dirnames, filenames in os.walk(content_dir): # @UnusedVariable
dirname = dirname[len(content_dir) + 1:]
for curfile in filenames:
old_path = os.path.join(content_dir, dirname, curfile)

eol.py
View file

@ -1,12 +1,5 @@
#!/usr/bin/env python
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import datetime
import sys
import warnings
@ -28,12 +21,6 @@ def date(string, fmt='%Y-%m-%d'):
# https://devguide.python.org/
# https://devguide.python.org/devcycle/#devcycle
PYTHON_EOL = {
(3, 13): date('2029-10-01'),
(3, 12): date('2028-10-01'),
(3, 11): date('2027-10-01'),
(3, 10): date('2026-10-01'),
(3, 9): date('2025-10-05'),
(3, 8): date('2024-10-14'),
(3, 7): date('2023-06-27'),
(3, 6): date('2021-12-23'),
(3, 5): date('2020-09-13'),
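# A minimal sketch of consulting an EOL table like the one above; the
# single entry and the arithmetic here are illustrative, not the script's
# exact reporting logic.
import datetime
import sys

eol_table = {(3, 8): datetime.date(2024, 10, 14)}
eol = eol_table.get(sys.version_info[:2])
if eol is not None:
    days_left = (eol - datetime.date.today()).days
    print('{0} days until this Python reaches end-of-life'.format(days_left))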
@ -170,7 +157,7 @@ def print_statuses(show_expired=False):
major=python_version[0],
minor=python_version[1],
remaining=days_left,
),
)
)
if not show_expired:
return
@ -184,7 +171,7 @@ def print_statuses(show_expired=False):
major=python_version[0],
minor=python_version[1],
remaining=-days_left,
),
)
)

View file

@ -3,7 +3,7 @@
# get ffmpeg/yasm/x264
git clone git://source.ffmpeg.org/ffmpeg.git FFmpeg
git clone git://github.com/yasm/yasm.git FFmpeg/yasm
git clone https://code.videolan.org/videolan/x264.git FFmpeg/x264
git clone git://git.videolan.org/x264.git FFmpeg/x264
# compile/install yasm
cd FFmpeg/yasm
@ -25,4 +25,4 @@ cd -
cd FFmpeg
./configure --disable-asm --enable-libx264 --enable-gpl
make install
cd -

View file

@ -1,11 +1,4 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import site
import sys

View file

@ -1,11 +1,4 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import shutil
import os
import time

View file

@ -1,11 +1,4 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import libs
__all__ = ['completed']

View file

@ -1 +0,0 @@

Binary file not shown.

View file

@ -1,33 +0,0 @@
# This is a stub package designed to roughly emulate the _yaml
# extension module, which previously existed as a standalone module
# and has been moved into the `yaml` package namespace.
# It does not perfectly mimic its old counterpart, but should get
# close enough for anyone who's relying on it even when they shouldn't.
import yaml
# in some circumstances, the yaml module we imported may be from a different version, so we need
# to tread carefully when poking at it here (it may not have the attributes we expect)
if not getattr(yaml, '__with_libyaml__', False):
from sys import version_info
exc = ModuleNotFoundError if version_info >= (3, 6) else ImportError
raise exc("No module named '_yaml'")
else:
from yaml._yaml import *
import warnings
warnings.warn(
'The _yaml extension module is now located at yaml._yaml'
' and its location is subject to change. To use the'
' LibYAML-based parser and emitter, import from `yaml`:'
' `from yaml import CLoader as Loader, CDumper as Dumper`.',
DeprecationWarning
)
del warnings
# Don't `del yaml` here because yaml is actually an existing
# namespace member of _yaml.
__name__ = '_yaml'
# If the module is top-level (i.e. not a part of any specific package)
# then the attribute should be set to ''.
# https://docs.python.org/3.8/library/types.html
__package__ = ''
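# A minimal usage sketch of the import path the warning above recommends,
# with the usual pure-Python fallback when LibYAML is unavailable.
import yaml

try:
    from yaml import CLoader as Loader, CDumper as Dumper
except ImportError:
    from yaml import Loader, Dumper

data = yaml.load('a: 1\nb: [2, 3]', Loader=Loader)
print(yaml.dump(data, Dumper=Dumper))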

View file

@ -13,8 +13,8 @@ See <http://github.com/ActiveState/appdirs> for details and usage.
# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html
# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html
__version__ = "1.4.4"
__version_info__ = tuple(int(segment) for segment in __version__.split("."))
__version_info__ = (1, 4, 3)
__version__ = '.'.join(map(str, __version_info__))
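# The two sides derive the version pair in opposite directions; either
# direction round-trips:
version = "1.4.4"
info = tuple(int(segment) for segment in version.split("."))
assert info == (1, 4, 4)
assert '.'.join(map(str, info)) == version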
import sys

View file

@ -4,6 +4,12 @@
# Use of this source code is governed by the 3-clause BSD license
# that can be found in the LICENSE file.
#
__title__ = 'babelfish'
__version__ = '0.5.5-dev'
__author__ = 'Antoine Bertin'
__license__ = 'BSD'
__copyright__ = 'Copyright 2015 the BabelFish authors'
import sys
if sys.version_info[0] >= 3:

View file

@ -2,22 +2,17 @@
# Use of this source code is governed by the 3-clause BSD license
# that can be found in the LICENSE file.
#
import collections
from pkg_resources import iter_entry_points, EntryPoint
from ..exceptions import LanguageConvertError, LanguageReverseError
try:
# Python 3.3+
from collections.abc import Mapping, MutableMapping
except ImportError:
from collections import Mapping, MutableMapping
# from https://github.com/kennethreitz/requests/blob/master/requests/structures.py
class CaseInsensitiveDict(MutableMapping):
class CaseInsensitiveDict(collections.MutableMapping):
"""A case-insensitive ``dict``-like object.
Implements all methods and operations of
``collections.abc.MutableMapping`` as well as dict's ``copy``. Also
``collections.MutableMapping`` as well as dict's ``copy``. Also
provides ``lower_items``.
All keys are expected to be strings. The structure remembers the
@ -68,7 +63,7 @@ class CaseInsensitiveDict(MutableMapping):
)
def __eq__(self, other):
if isinstance(other, Mapping):
if isinstance(other, collections.Mapping):
other = CaseInsensitiveDict(other)
else:
return NotImplemented
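# A minimal sketch of the behaviour the docstring describes, following the
# requests-style semantics (lookups ignore case, iteration keeps the casing
# of the last key set, comparison is case-insensitive):
cid = CaseInsensitiveDict()
cid['Accept'] = 'application/json'
assert cid['aCCEPT'] == 'application/json'
assert list(cid) == ['Accept']
assert cid == {'accept': 'application/json'}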

View file

@ -14,10 +14,10 @@ class OpenSubtitlesConverter(LanguageReverseConverter):
def __init__(self):
self.alpha3b_converter = language_converters['alpha3b']
self.alpha2_converter = language_converters['alpha2']
self.to_opensubtitles = {('por', 'BR'): 'pob', ('gre', None): 'ell', ('srp', None): 'scc', ('srp', 'ME'): 'mne', ('chi', 'TW'): 'zht'}
self.to_opensubtitles = {('por', 'BR'): 'pob', ('gre', None): 'ell', ('srp', None): 'scc', ('srp', 'ME'): 'mne'}
self.from_opensubtitles = CaseInsensitiveDict({'pob': ('por', 'BR'), 'pb': ('por', 'BR'), 'ell': ('ell', None),
'scc': ('srp', None), 'mne': ('srp', 'ME'), 'zht': ('zho', 'TW')})
self.codes = (self.alpha2_converter.codes | self.alpha3b_converter.codes | set(self.from_opensubtitles.keys()))
'scc': ('srp', None), 'mne': ('srp', 'ME')})
self.codes = (self.alpha2_converter.codes | self.alpha3b_converter.codes | set(['pob', 'pb', 'scc', 'mne']))
def convert(self, alpha3, country=None, script=None):
alpha3b = self.alpha3b_converter.convert(alpha3, country, script)
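# The special-case tables above drive conversions like these (the same
# examples appear in the test-suite later in this diff):
from babelfish import Language

assert Language('por', 'BR').opensubtitles == 'pob'
assert Language.fromopensubtitles('pb') == Language('por', 'BR')
assert Language.fromopensubtitles('mne') == Language('srp', 'ME')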

View file

@ -0,0 +1,373 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2013 the BabelFish authors. All rights reserved.
# Use of this source code is governed by the 3-clause BSD license
# that can be found in the LICENSE file.
#
from __future__ import unicode_literals
import re
import sys
import pickle
from unittest import TestCase, TestSuite, TestLoader, TextTestRunner
from pkg_resources import resource_stream # @UnresolvedImport
from babelfish import (LANGUAGES, Language, Country, Script, language_converters, country_converters,
LanguageReverseConverter, LanguageConvertError, LanguageReverseError, CountryReverseError)
if sys.version_info[:2] <= (2, 6):
_MAX_LENGTH = 80
def safe_repr(obj, short=False):
try:
result = repr(obj)
except Exception:
result = object.__repr__(obj)
if not short or len(result) < _MAX_LENGTH:
return result
return result[:_MAX_LENGTH] + ' [truncated]...'
class _AssertRaisesContext(object):
"""A context manager used to implement TestCase.assertRaises* methods."""
def __init__(self, expected, test_case, expected_regexp=None):
self.expected = expected
self.failureException = test_case.failureException
self.expected_regexp = expected_regexp
def __enter__(self):
return self
def __exit__(self, exc_type, exc_value, tb):
if exc_type is None:
try:
exc_name = self.expected.__name__
except AttributeError:
exc_name = str(self.expected)
raise self.failureException(
"{0} not raised".format(exc_name))
if not issubclass(exc_type, self.expected):
# let unexpected exceptions pass through
return False
self.exception = exc_value # store for later retrieval
if self.expected_regexp is None:
return True
expected_regexp = self.expected_regexp
if isinstance(expected_regexp, basestring):
expected_regexp = re.compile(expected_regexp)
if not expected_regexp.search(str(exc_value)):
raise self.failureException('"%s" does not match "%s"' %
(expected_regexp.pattern, str(exc_value)))
return True
class _Py26FixTestCase(object):
def assertIsNone(self, obj, msg=None):
"""Same as self.assertTrue(obj is None), with a nicer default message."""
if obj is not None:
standardMsg = '%s is not None' % (safe_repr(obj),)
self.fail(self._formatMessage(msg, standardMsg))
def assertIsNotNone(self, obj, msg=None):
"""Included for symmetry with assertIsNone."""
if obj is None:
standardMsg = 'unexpectedly None'
self.fail(self._formatMessage(msg, standardMsg))
def assertIn(self, member, container, msg=None):
"""Just like self.assertTrue(a in b), but with a nicer default message."""
if member not in container:
standardMsg = '%s not found in %s' % (safe_repr(member),
safe_repr(container))
self.fail(self._formatMessage(msg, standardMsg))
def assertNotIn(self, member, container, msg=None):
"""Just like self.assertTrue(a not in b), but with a nicer default message."""
if member in container:
standardMsg = '%s unexpectedly found in %s' % (safe_repr(member),
safe_repr(container))
self.fail(self._formatMessage(msg, standardMsg))
def assertIs(self, expr1, expr2, msg=None):
"""Just like self.assertTrue(a is b), but with a nicer default message."""
if expr1 is not expr2:
standardMsg = '%s is not %s' % (safe_repr(expr1),
safe_repr(expr2))
self.fail(self._formatMessage(msg, standardMsg))
def assertIsNot(self, expr1, expr2, msg=None):
"""Just like self.assertTrue(a is not b), but with a nicer default message."""
if expr1 is expr2:
standardMsg = 'unexpectedly identical: %s' % (safe_repr(expr1),)
self.fail(self._formatMessage(msg, standardMsg))
else:
class _Py26FixTestCase(object):
pass
class TestScript(TestCase, _Py26FixTestCase):
def test_wrong_script(self):
self.assertRaises(ValueError, lambda: Script('Azer'))
def test_eq(self):
self.assertEqual(Script('Latn'), Script('Latn'))
def test_ne(self):
self.assertNotEqual(Script('Cyrl'), Script('Latn'))
def test_hash(self):
self.assertEqual(hash(Script('Hira')), hash('Hira'))
def test_pickle(self):
self.assertEqual(pickle.loads(pickle.dumps(Script('Latn'))), Script('Latn'))
class TestCountry(TestCase, _Py26FixTestCase):
def test_wrong_country(self):
self.assertRaises(ValueError, lambda: Country('ZZ'))
def test_eq(self):
self.assertEqual(Country('US'), Country('US'))
def test_ne(self):
self.assertNotEqual(Country('GB'), Country('US'))
self.assertIsNotNone(Country('US'))
def test_hash(self):
self.assertEqual(hash(Country('US')), hash('US'))
def test_pickle(self):
for country in [Country('GB'), Country('US')]:
self.assertEqual(pickle.loads(pickle.dumps(country)), country)
def test_converter_name(self):
self.assertEqual(Country('US').name, 'UNITED STATES')
self.assertEqual(Country.fromname('UNITED STATES'), Country('US'))
self.assertEqual(Country.fromcode('UNITED STATES', 'name'), Country('US'))
self.assertRaises(CountryReverseError, lambda: Country.fromname('ZZZZZ'))
self.assertEqual(len(country_converters['name'].codes), 249)
class TestLanguage(TestCase, _Py26FixTestCase):
def test_languages(self):
self.assertEqual(len(LANGUAGES), 7874)
def test_wrong_language(self):
self.assertRaises(ValueError, lambda: Language('zzz'))
def test_unknown_language(self):
self.assertEqual(Language('zzzz', unknown='und'), Language('und'))
def test_converter_alpha2(self):
self.assertEqual(Language('eng').alpha2, 'en')
self.assertEqual(Language.fromalpha2('en'), Language('eng'))
self.assertEqual(Language.fromcode('en', 'alpha2'), Language('eng'))
self.assertRaises(LanguageReverseError, lambda: Language.fromalpha2('zz'))
self.assertRaises(LanguageConvertError, lambda: Language('aaa').alpha2)
self.assertEqual(len(language_converters['alpha2'].codes), 184)
def test_converter_alpha3b(self):
self.assertEqual(Language('fra').alpha3b, 'fre')
self.assertEqual(Language.fromalpha3b('fre'), Language('fra'))
self.assertEqual(Language.fromcode('fre', 'alpha3b'), Language('fra'))
self.assertRaises(LanguageReverseError, lambda: Language.fromalpha3b('zzz'))
self.assertRaises(LanguageConvertError, lambda: Language('aaa').alpha3b)
self.assertEqual(len(language_converters['alpha3b'].codes), 418)
def test_converter_alpha3t(self):
self.assertEqual(Language('fra').alpha3t, 'fra')
self.assertEqual(Language.fromalpha3t('fra'), Language('fra'))
self.assertEqual(Language.fromcode('fra', 'alpha3t'), Language('fra'))
self.assertRaises(LanguageReverseError, lambda: Language.fromalpha3t('zzz'))
self.assertRaises(LanguageConvertError, lambda: Language('aaa').alpha3t)
self.assertEqual(len(language_converters['alpha3t'].codes), 418)
def test_converter_name(self):
self.assertEqual(Language('eng').name, 'English')
self.assertEqual(Language.fromname('English'), Language('eng'))
self.assertEqual(Language.fromcode('English', 'name'), Language('eng'))
self.assertRaises(LanguageReverseError, lambda: Language.fromname('Zzzzzzzzz'))
self.assertEqual(len(language_converters['name'].codes), 7874)
def test_converter_scope(self):
self.assertEqual(language_converters['scope'].codes, set(['I', 'S', 'M']))
self.assertEqual(Language('eng').scope, 'individual')
self.assertEqual(Language('und').scope, 'special')
def test_converter_type(self):
self.assertEqual(language_converters['type'].codes, set(['A', 'C', 'E', 'H', 'L', 'S']))
self.assertEqual(Language('eng').type, 'living')
self.assertEqual(Language('und').type, 'special')
def test_converter_opensubtitles(self):
self.assertEqual(Language('fra').opensubtitles, Language('fra').alpha3b)
self.assertEqual(Language('por', 'BR').opensubtitles, 'pob')
self.assertEqual(Language.fromopensubtitles('fre'), Language('fra'))
self.assertEqual(Language.fromopensubtitles('pob'), Language('por', 'BR'))
self.assertEqual(Language.fromopensubtitles('pb'), Language('por', 'BR'))
# Montenegrin is not recognized as an ISO language (yet?) but for now it is
# unofficially accepted as Serbian from Montenegro
self.assertEqual(Language.fromopensubtitles('mne'), Language('srp', 'ME'))
self.assertEqual(Language.fromcode('pob', 'opensubtitles'), Language('por', 'BR'))
self.assertRaises(LanguageReverseError, lambda: Language.fromopensubtitles('zzz'))
self.assertRaises(LanguageConvertError, lambda: Language('aaa').opensubtitles)
self.assertEqual(len(language_converters['opensubtitles'].codes), 606)
# test with all the LANGUAGES from the opensubtitles api
# downloaded from: http://www.opensubtitles.org/addons/export_languages.php
f = resource_stream('babelfish', 'data/opensubtitles_languages.txt')
f.readline()
for l in f:
idlang, alpha2, _, upload_enabled, web_enabled = l.decode('utf-8').strip().split('\t')
if not int(upload_enabled) and not int(web_enabled):
# do not test LANGUAGES that are too esoteric / not widely available
continue
self.assertEqual(Language.fromopensubtitles(idlang).opensubtitles, idlang)
if alpha2:
self.assertEqual(Language.fromopensubtitles(idlang), Language.fromopensubtitles(alpha2))
f.close()
def test_fromietf_country_script(self):
language = Language.fromietf('fra-FR-Latn')
self.assertEqual(language.alpha3, 'fra')
self.assertEqual(language.country, Country('FR'))
self.assertEqual(language.script, Script('Latn'))
def test_fromietf_country_no_script(self):
language = Language.fromietf('fra-FR')
self.assertEqual(language.alpha3, 'fra')
self.assertEqual(language.country, Country('FR'))
self.assertIsNone(language.script)
def test_fromietf_no_country_no_script(self):
language = Language.fromietf('fra-FR')
self.assertEqual(language.alpha3, 'fra')
self.assertEqual(language.country, Country('FR'))
self.assertIsNone(language.script)
def test_fromietf_no_country_script(self):
language = Language.fromietf('fra-Latn')
self.assertEqual(language.alpha3, 'fra')
self.assertIsNone(language.country)
self.assertEqual(language.script, Script('Latn'))
def test_fromietf_alpha2_language(self):
language = Language.fromietf('fr-Latn')
self.assertEqual(language.alpha3, 'fra')
self.assertIsNone(language.country)
self.assertEqual(language.script, Script('Latn'))
def test_fromietf_wrong_language(self):
self.assertRaises(ValueError, lambda: Language.fromietf('xyz-FR'))
def test_fromietf_wrong_country(self):
self.assertRaises(ValueError, lambda: Language.fromietf('fra-YZ'))
def test_fromietf_wrong_script(self):
self.assertRaises(ValueError, lambda: Language.fromietf('fra-FR-Wxyz'))
def test_eq(self):
self.assertEqual(Language('eng'), Language('eng'))
def test_ne(self):
self.assertNotEqual(Language('fra'), Language('eng'))
self.assertIsNotNone(Language('fra'))
def test_nonzero(self):
self.assertFalse(bool(Language('und')))
self.assertTrue(bool(Language('eng')))
def test_language_hasattr(self):
self.assertTrue(hasattr(Language('fra'), 'alpha3'))
self.assertTrue(hasattr(Language('fra'), 'alpha2'))
self.assertFalse(hasattr(Language('bej'), 'alpha2'))
def test_country_hasattr(self):
self.assertTrue(hasattr(Country('US'), 'name'))
self.assertTrue(hasattr(Country('FR'), 'alpha2'))
self.assertFalse(hasattr(Country('BE'), 'none'))
def test_country(self):
self.assertEqual(Language('por', 'BR').country, Country('BR'))
self.assertEqual(Language('eng', Country('US')).country, Country('US'))
def test_eq_with_country(self):
self.assertEqual(Language('eng', 'US'), Language('eng', Country('US')))
def test_ne_with_country(self):
self.assertNotEqual(Language('eng', 'US'), Language('eng', Country('GB')))
def test_script(self):
self.assertEqual(Language('srp', script='Latn').script, Script('Latn'))
self.assertEqual(Language('srp', script=Script('Cyrl')).script, Script('Cyrl'))
def test_eq_with_script(self):
self.assertEqual(Language('srp', script='Latn'), Language('srp', script=Script('Latn')))
def test_ne_with_script(self):
self.assertNotEqual(Language('srp', script='Latn'), Language('srp', script=Script('Cyrl')))
def test_eq_with_country_and_script(self):
self.assertEqual(Language('srp', 'SR', 'Latn'), Language('srp', Country('SR'), Script('Latn')))
def test_ne_with_country_and_script(self):
self.assertNotEqual(Language('srp', 'SR', 'Latn'), Language('srp', Country('SR'), Script('Cyrl')))
def test_hash(self):
self.assertEqual(hash(Language('fra')), hash('fr'))
self.assertEqual(hash(Language('ace')), hash('ace'))
self.assertEqual(hash(Language('por', 'BR')), hash('pt-BR'))
self.assertEqual(hash(Language('srp', script='Cyrl')), hash('sr-Cyrl'))
self.assertEqual(hash(Language('eng', 'US', 'Latn')), hash('en-US-Latn'))
def test_pickle(self):
for lang in [Language('fra'),
Language('eng', 'US'),
Language('srp', script='Latn'),
Language('eng', 'US', 'Latn')]:
self.assertEqual(pickle.loads(pickle.dumps(lang)), lang)
def test_str(self):
self.assertEqual(Language.fromietf(str(Language('eng', 'US', 'Latn'))), Language('eng', 'US', 'Latn'))
self.assertEqual(Language.fromietf(str(Language('fra', 'FR'))), Language('fra', 'FR'))
self.assertEqual(Language.fromietf(str(Language('bel'))), Language('bel'))
def test_register_converter(self):
class TestConverter(LanguageReverseConverter):
def __init__(self):
self.to_test = {'fra': 'test1', 'eng': 'test2'}
self.from_test = {'test1': 'fra', 'test2': 'eng'}
def convert(self, alpha3, country=None, script=None):
if alpha3 not in self.to_test:
raise LanguageConvertError(alpha3, country, script)
return self.to_test[alpha3]
def reverse(self, test):
if test not in self.from_test:
raise LanguageReverseError(test)
return (self.from_test[test], None)
language = Language('fra')
self.assertFalse(hasattr(language, 'test'))
language_converters['test'] = TestConverter()
self.assertTrue(hasattr(language, 'test'))
self.assertIn('test', language_converters)
self.assertEqual(Language('fra').test, 'test1')
self.assertEqual(Language.fromtest('test2').alpha3, 'eng')
del language_converters['test']
self.assertNotIn('test', language_converters)
self.assertRaises(KeyError, lambda: Language.fromtest('test1'))
self.assertRaises(AttributeError, lambda: Language('fra').test)
def suite():
suite = TestSuite()
suite.addTest(TestLoader().loadTestsFromTestCase(TestScript))
suite.addTest(TestLoader().loadTestsFromTestCase(TestCountry))
suite.addTest(TestLoader().loadTestsFromTestCase(TestLanguage))
return suite
if __name__ == '__main__':
TextTestRunner().run(suite())

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -12,29 +13,30 @@
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
from __future__ import division, absolute_import, print_function
import confuse
from sys import stderr
import os
__version__ = '1.6.0'
__author__ = 'Adrian Sampson <adrian@radbox.org>'
from beets.util import confit
__version__ = u'1.4.7'
__author__ = u'Adrian Sampson <adrian@radbox.org>'
class IncludeLazyConfig(confuse.LazyConfig):
"""A version of Confuse's LazyConfig that also merges in data from
class IncludeLazyConfig(confit.LazyConfig):
"""A version of Confit's LazyConfig that also merges in data from
YAML files specified in an `include` setting.
"""
def read(self, user=True, defaults=True):
super().read(user, defaults)
super(IncludeLazyConfig, self).read(user, defaults)
try:
for view in self['include']:
self.set_file(view.as_filename())
except confuse.NotFoundError:
filename = view.as_filename()
if os.path.isfile(filename):
self.set_file(filename)
except confit.NotFoundError:
pass
except confuse.ConfigReadError as err:
stderr.write("configuration `import` failed: {}"
.format(err.reason))
config = IncludeLazyConfig('beets', __name__)
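# A minimal sketch of the `include` behaviour both variants implement,
# with hypothetical file names:
#
#   # ~/.config/beets/config.yaml
#   include:
#     - secrets.yaml
#     - work.yaml
#
# After config.read(), each listed YAML file is merged into the
# configuration, so later lookups see the combined settings.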

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2017, Adrian Sampson.
#
@ -16,6 +17,7 @@
`python -m beets`.
"""
from __future__ import division, absolute_import, print_function
import sys
from .ui import main

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -16,6 +17,7 @@
music and items' embedded album art.
"""
from __future__ import division, absolute_import, print_function
import subprocess
import platform
@ -24,7 +26,7 @@ import os
from beets.util import displayable_path, syspath, bytestring_path
from beets.util.artresizer import ArtResizer
import mediafile
from beets import mediafile
def mediafile_image(image_path, maxwidth=None):
@ -41,7 +43,7 @@ def get_art(log, item):
try:
mf = mediafile.MediaFile(syspath(item.path))
except mediafile.UnreadableFileError as exc:
log.warning('Could not extract art from {0}: {1}',
log.warning(u'Could not extract art from {0}: {1}',
displayable_path(item.path), exc)
return
@ -49,27 +51,26 @@ def get_art(log, item):
def embed_item(log, item, imagepath, maxwidth=None, itempath=None,
compare_threshold=0, ifempty=False, as_album=False, id3v23=None,
quality=0):
compare_threshold=0, ifempty=False, as_album=False):
"""Embed an image into the item's media file.
"""
# Conditions and filters.
if compare_threshold:
if not check_art_similarity(log, item, imagepath, compare_threshold):
log.info('Image not similar; skipping.')
log.info(u'Image not similar; skipping.')
return
if ifempty and get_art(log, item):
log.info('media file already contained art')
return
log.info(u'media file already contained art')
return
if maxwidth and not as_album:
imagepath = resize_image(log, imagepath, maxwidth, quality)
imagepath = resize_image(log, imagepath, maxwidth)
# Get the `Image` object from the file.
try:
log.debug('embedding {0}', displayable_path(imagepath))
log.debug(u'embedding {0}', displayable_path(imagepath))
image = mediafile_image(imagepath, maxwidth)
except OSError as exc:
log.warning('could not read image file: {0}', exc)
except IOError as exc:
log.warning(u'could not read image file: {0}', exc)
return
# Make sure the image kind is safe (some formats only support PNG
@ -79,39 +80,36 @@ def embed_item(log, item, imagepath, maxwidth=None, itempath=None,
image.mime_type)
return
item.try_write(path=itempath, tags={'images': [image]}, id3v23=id3v23)
item.try_write(path=itempath, tags={'images': [image]})
def embed_album(log, album, maxwidth=None, quiet=False, compare_threshold=0,
ifempty=False, quality=0):
def embed_album(log, album, maxwidth=None, quiet=False,
compare_threshold=0, ifempty=False):
"""Embed album art into all of the album's items.
"""
imagepath = album.artpath
if not imagepath:
log.info('No album art present for {0}', album)
log.info(u'No album art present for {0}', album)
return
if not os.path.isfile(syspath(imagepath)):
log.info('Album art not found at {0} for {1}',
log.info(u'Album art not found at {0} for {1}',
displayable_path(imagepath), album)
return
if maxwidth:
imagepath = resize_image(log, imagepath, maxwidth, quality)
imagepath = resize_image(log, imagepath, maxwidth)
log.info('Embedding album art into {0}', album)
log.info(u'Embedding album art into {0}', album)
for item in album.items():
embed_item(log, item, imagepath, maxwidth, None, compare_threshold,
ifempty, as_album=True, quality=quality)
embed_item(log, item, imagepath, maxwidth, None,
compare_threshold, ifempty, as_album=True)
def resize_image(log, imagepath, maxwidth, quality):
"""Returns path to an image resized to maxwidth and encoded with the
specified quality level.
def resize_image(log, imagepath, maxwidth):
"""Returns path to an image resized to maxwidth.
"""
log.debug('Resizing album art to {0} pixels wide and encoding at quality \
level {1}', maxwidth, quality)
imagepath = ArtResizer.shared.resize(maxwidth, syspath(imagepath),
quality=quality)
log.debug(u'Resizing album art to {0} pixels wide', maxwidth)
imagepath = ArtResizer.shared.resize(maxwidth, syspath(imagepath))
return imagepath
@ -133,7 +131,7 @@ def check_art_similarity(log, item, imagepath, compare_threshold):
syspath(art, prefix=False),
'-colorspace', 'gray', 'MIFF:-']
compare_cmd = ['compare', '-metric', 'PHASH', '-', 'null:']
log.debug('comparing images with pipeline {} | {}',
log.debug(u'comparing images with pipeline {} | {}',
convert_cmd, compare_cmd)
convert_proc = subprocess.Popen(
convert_cmd,
@ -157,7 +155,7 @@ def check_art_similarity(log, item, imagepath, compare_threshold):
convert_proc.wait()
if convert_proc.returncode:
log.debug(
'ImageMagick convert failed with status {}: {!r}',
u'ImageMagick convert failed with status {}: {!r}',
convert_proc.returncode,
convert_stderr,
)
@ -167,7 +165,7 @@ def check_art_similarity(log, item, imagepath, compare_threshold):
stdout, stderr = compare_proc.communicate()
if compare_proc.returncode:
if compare_proc.returncode != 1:
log.debug('ImageMagick compare failed: {0}, {1}',
log.debug(u'ImageMagick compare failed: {0}, {1}',
displayable_path(imagepath),
displayable_path(art))
return
@ -178,10 +176,10 @@ def check_art_similarity(log, item, imagepath, compare_threshold):
try:
phash_diff = float(out_str)
except ValueError:
log.debug('IM output is not a number: {0!r}', out_str)
log.debug(u'IM output is not a number: {0!r}', out_str)
return
log.debug('ImageMagick compare score: {0}', phash_diff)
log.debug(u'ImageMagick compare score: {0}', phash_diff)
return phash_diff <= compare_threshold
return True
@ -191,18 +189,18 @@ def extract(log, outpath, item):
art = get_art(log, item)
outpath = bytestring_path(outpath)
if not art:
log.info('No album art present in {0}, skipping.', item)
log.info(u'No album art present in {0}, skipping.', item)
return
# Add an extension to the filename.
ext = mediafile.image_extension(art)
if not ext:
log.warning('Unknown image type in {0}.',
log.warning(u'Unknown image type in {0}.',
displayable_path(item.path))
return
outpath += bytestring_path('.' + ext)
log.info('Extracting album art from: {0} to: {1}',
log.info(u'Extracting album art from: {0} to: {1}',
item, displayable_path(outpath))
with open(syspath(outpath), 'wb') as f:
f.write(art)
@ -218,7 +216,7 @@ def extract_first(log, outpath, items):
def clear(log, lib, query):
items = lib.items(query)
log.info('Clearing album art from {0} items', len(items))
log.info(u'Clearing album art from {0} items', len(items))
for item in items:
log.debug('Clearing art for {0}', item)
log.debug(u'Clearing art for {0}', item)
item.try_write(tags={'images': None})

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -15,59 +16,19 @@
"""Facilities for automatically determining files' correct metadata.
"""
from __future__ import division, absolute_import, print_function
from beets import logging
from beets import config
# Parts of external interface.
from .hooks import ( # noqa
AlbumInfo,
TrackInfo,
AlbumMatch,
TrackMatch,
Distance,
)
from .hooks import AlbumInfo, TrackInfo, AlbumMatch, TrackMatch # noqa
from .match import tag_item, tag_album, Proposal # noqa
from .match import Recommendation # noqa
# Global logger.
log = logging.getLogger('beets')
# Metadata fields that are already hardcoded, or where the tag name changes.
SPECIAL_FIELDS = {
'album': (
'va',
'releasegroup_id',
'artist_id',
'album_id',
'mediums',
'tracks',
'year',
'month',
'day',
'artist',
'artist_credit',
'artist_sort',
'data_url'
),
'track': (
'track_alt',
'artist_id',
'release_track_id',
'medium',
'index',
'medium_index',
'title',
'artist_credit',
'artist_sort',
'artist',
'track_id',
'medium_total',
'data_url',
'length'
)
}
# Additional utilities for the main interface.
@ -82,14 +43,17 @@ def apply_item_metadata(item, track_info):
item.mb_releasetrackid = track_info.release_track_id
if track_info.artist_id:
item.mb_artistid = track_info.artist_id
if track_info.data_source:
item.data_source = track_info.data_source
for field, value in track_info.items():
# We only overwrite fields that are not already hardcoded.
if field in SPECIAL_FIELDS['track']:
continue
if value is None:
continue
item[field] = value
if track_info.lyricist is not None:
item.lyricist = track_info.lyricist
if track_info.composer is not None:
item.composer = track_info.composer
if track_info.composer_sort is not None:
item.composer_sort = track_info.composer_sort
if track_info.arranger is not None:
item.arranger = track_info.arranger
# At the moment, the other metadata is left intact (including album
# and track number). Perhaps these should be emptied?
@ -178,24 +142,33 @@ def apply_metadata(album_info, mapping):
# Compilation flag.
item.comp = album_info.va
# Track alt.
# Miscellaneous metadata.
for field in ('albumtype',
'label',
'asin',
'catalognum',
'script',
'language',
'country',
'albumstatus',
'albumdisambig',
'data_source',):
value = getattr(album_info, field)
if value is not None:
item[field] = value
if track_info.disctitle is not None:
item.disctitle = track_info.disctitle
if track_info.media is not None:
item.media = track_info.media
if track_info.lyricist is not None:
item.lyricist = track_info.lyricist
if track_info.composer is not None:
item.composer = track_info.composer
if track_info.composer_sort is not None:
item.composer_sort = track_info.composer_sort
if track_info.arranger is not None:
item.arranger = track_info.arranger
item.track_alt = track_info.track_alt
# Don't overwrite fields with empty values unless the
# field is explicitly allowed to be overwritten
for field, value in album_info.items():
if field in SPECIAL_FIELDS['album']:
continue
clobber = field in config['overwrite_null']['album'].as_str_seq()
if value is None and not clobber:
continue
item[field] = value
for field, value in track_info.items():
if field in SPECIAL_FIELDS['track']:
continue
clobber = field in config['overwrite_null']['track'].as_str_seq()
value = getattr(track_info, field)
if value is None and not clobber:
continue
item[field] = value

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -13,6 +14,7 @@
# included in all copies or substantial portions of the Software.
"""Glue between metadata sources and the matching logic."""
from __future__ import division, absolute_import, print_function
from collections import namedtuple
from functools import total_ordering
@ -25,36 +27,14 @@ from beets.util import as_string
from beets.autotag import mb
from jellyfish import levenshtein_distance
from unidecode import unidecode
import six
log = logging.getLogger('beets')
# The name of the type for patterns in re changed in Python 3.7.
try:
Pattern = re._pattern_type
except AttributeError:
Pattern = re.Pattern
# Classes used to represent candidate options.
class AttrDict(dict):
"""A dictionary that supports attribute ("dot") access, so `d.field`
is equivalent to `d['field']`.
"""
def __getattr__(self, attr):
if attr in self:
return self.get(attr)
else:
raise AttributeError
def __setattr__(self, key, value):
self.__setitem__(key, value)
def __hash__(self):
return id(self)
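# A minimal sketch of the dot access `AttrDict` provides:
d = AttrDict()
d.field = 'value'            # routed through __setitem__
assert d['field'] == 'value'
assert d.field == 'value'    # attribute reads fall back to the dict entry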
class AlbumInfo(AttrDict):
class AlbumInfo(object):
"""Describes a canonical release that may be used to match a release
in the library. Consists of these data members:
@ -63,22 +43,38 @@ class AlbumInfo(AttrDict):
- ``artist``: name of the release's primary artist
- ``artist_id``
- ``tracks``: list of TrackInfo objects making up the release
- ``asin``: Amazon ASIN
- ``albumtype``: string describing the kind of release
- ``va``: boolean: whether the release has "various artists"
- ``year``: release year
- ``month``: release month
- ``day``: release day
- ``label``: music label responsible for the release
- ``mediums``: the number of discs in this release
- ``artist_sort``: name of the release's artist for sorting
- ``releasegroup_id``: MBID for the album's release group
- ``catalognum``: the label's catalog number for the release
- ``script``: character set used for metadata
- ``language``: human language of the metadata
- ``country``: the release country
- ``albumstatus``: MusicBrainz release status (Official, etc.)
- ``media``: delivery mechanism (Vinyl, etc.)
- ``albumdisambig``: MusicBrainz release disambiguation comment
- ``artist_credit``: Release-specific artist name
- ``data_source``: The original data source (MusicBrainz, Discogs, etc.)
- ``data_url``: The data source release URL.
``mediums`` along with the fields up through ``tracks`` are required.
The others are optional and may be None.
The fields up through ``tracks`` are required. The others are
optional and may be None.
"""
def __init__(self, tracks, album=None, album_id=None, artist=None,
artist_id=None, asin=None, albumtype=None, va=False,
year=None, month=None, day=None, label=None, mediums=None,
artist_sort=None, releasegroup_id=None, catalognum=None,
script=None, language=None, country=None, style=None,
genre=None, albumstatus=None, media=None, albumdisambig=None,
releasegroupdisambig=None, artist_credit=None,
original_year=None, original_month=None,
original_day=None, data_source=None, data_url=None,
discogs_albumid=None, discogs_labelid=None,
discogs_artistid=None, **kwargs):
def __init__(self, album, album_id, artist, artist_id, tracks, asin=None,
albumtype=None, va=False, year=None, month=None, day=None,
label=None, mediums=None, artist_sort=None,
releasegroup_id=None, catalognum=None, script=None,
language=None, country=None, albumstatus=None, media=None,
albumdisambig=None, artist_credit=None, original_year=None,
original_month=None, original_day=None, data_source=None,
data_url=None):
self.album = album
self.album_id = album_id
self.artist = artist
@ -98,22 +94,15 @@ class AlbumInfo(AttrDict):
self.script = script
self.language = language
self.country = country
self.style = style
self.genre = genre
self.albumstatus = albumstatus
self.media = media
self.albumdisambig = albumdisambig
self.releasegroupdisambig = releasegroupdisambig
self.artist_credit = artist_credit
self.original_year = original_year
self.original_month = original_month
self.original_day = original_day
self.data_source = data_source
self.data_url = data_url
self.discogs_albumid = discogs_albumid
self.discogs_labelid = discogs_labelid
self.discogs_artistid = discogs_artistid
self.update(kwargs)
# Work around a bug in python-musicbrainz-ngs that causes some
# strings to be bytes rather than Unicode.
@ -123,46 +112,54 @@ class AlbumInfo(AttrDict):
constituent `TrackInfo` objects, are decoded to Unicode.
"""
for fld in ['album', 'artist', 'albumtype', 'label', 'artist_sort',
'catalognum', 'script', 'language', 'country', 'style',
'genre', 'albumstatus', 'albumdisambig',
'releasegroupdisambig', 'artist_credit',
'media', 'discogs_albumid', 'discogs_labelid',
'discogs_artistid']:
'catalognum', 'script', 'language', 'country',
'albumstatus', 'albumdisambig', 'artist_credit', 'media']:
value = getattr(self, fld)
if isinstance(value, bytes):
setattr(self, fld, value.decode(codec, 'ignore'))
for track in self.tracks:
track.decode(codec)
def copy(self):
dupe = AlbumInfo([])
dupe.update(self)
dupe.tracks = [track.copy() for track in self.tracks]
return dupe
if self.tracks:
for track in self.tracks:
track.decode(codec)
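# The decode pass is a plain bytes-to-text conversion, e.g.:
value = b'Caf\xc3\xa9'
assert value.decode('utf-8', 'ignore') == u'Café'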
class TrackInfo(AttrDict):
class TrackInfo(object):
"""Describes a canonical track present on a release. Appears as part
of an AlbumInfo's ``tracks`` list. Consists of these data members:
- ``title``: name of the track
- ``track_id``: MusicBrainz ID; UUID fragment only
- ``release_track_id``: MusicBrainz ID respective to a track on a
particular release; UUID fragment only
- ``artist``: individual track artist name
- ``artist_id``
- ``length``: float: duration of the track in seconds
- ``index``: position on the entire release
- ``media``: delivery mechanism (Vinyl, etc.)
- ``medium``: the disc number this track appears on in the album
- ``medium_index``: the track's position on the disc
- ``medium_total``: the number of tracks on the item's disc
- ``artist_sort``: name of the track artist for sorting
- ``disctitle``: name of the individual medium (subtitle)
- ``artist_credit``: Recording-specific artist name
- ``data_source``: The original data source (MusicBrainz, Discogs, etc.)
- ``data_url``: The data source release URL.
- ``lyricist``: individual track lyricist name
- ``composer``: individual track composer name
- ``composer_sort``: individual track composer sort name
- ``arranger``: individual track arranger name
- ``track_alt``: alternative track number (tape, vinyl, etc.)
Only ``title`` and ``track_id`` are required. The rest of the fields
may be None. The indices ``index``, ``medium``, and ``medium_index``
are all 1-based.
"""
def __init__(self, title=None, track_id=None, release_track_id=None,
artist=None, artist_id=None, length=None, index=None,
medium=None, medium_index=None, medium_total=None,
artist_sort=None, disctitle=None, artist_credit=None,
data_source=None, data_url=None, media=None, lyricist=None,
composer=None, composer_sort=None, arranger=None,
track_alt=None, work=None, mb_workid=None,
work_disambig=None, bpm=None, initial_key=None, genre=None,
**kwargs):
def __init__(self, title, track_id, release_track_id=None, artist=None,
artist_id=None, length=None, index=None, medium=None,
medium_index=None, medium_total=None, artist_sort=None,
disctitle=None, artist_credit=None, data_source=None,
data_url=None, media=None, lyricist=None, composer=None,
composer_sort=None, arranger=None, track_alt=None):
self.title = title
self.track_id = track_id
self.release_track_id = release_track_id
@ -184,13 +181,6 @@ class TrackInfo(AttrDict):
self.composer_sort = composer_sort
self.arranger = arranger
self.track_alt = track_alt
self.work = work
self.mb_workid = mb_workid
self.work_disambig = work_disambig
self.bpm = bpm
self.initial_key = initial_key
self.genre = genre
self.update(kwargs)
# As above, work around a bug in python-musicbrainz-ngs.
def decode(self, codec='utf-8'):
@ -203,11 +193,6 @@ class TrackInfo(AttrDict):
if isinstance(value, bytes):
setattr(self, fld, value.decode(codec, 'ignore'))
def copy(self):
dupe = TrackInfo()
dupe.update(self)
return dupe
# Candidate distance scoring.
@ -235,8 +220,8 @@ def _string_dist_basic(str1, str2):
transliteration/lowering to ASCII characters. Normalized by string
length.
"""
assert isinstance(str1, str)
assert isinstance(str2, str)
assert isinstance(str1, six.text_type)
assert isinstance(str2, six.text_type)
str1 = as_string(unidecode(str1))
str2 = as_string(unidecode(str2))
str1 = re.sub(r'[^a-z0-9]', '', str1.lower())
@ -264,9 +249,9 @@ def string_dist(str1, str2):
# "something, the".
for word in SD_END_WORDS:
if str1.endswith(', %s' % word):
str1 = '{} {}'.format(word, str1[:-len(word) - 2])
str1 = '%s %s' % (word, str1[:-len(word) - 2])
if str2.endswith(', %s' % word):
str2 = '{} {}'.format(word, str2[:-len(word) - 2])
str2 = '%s %s' % (word, str2[:-len(word) - 2])
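# The loop above undoes "something, the"-style suffixes before comparing;
# an illustrative run of the same swap:
word = 'the'
s = 'beatles, the'
if s.endswith(', %s' % word):
    s = '%s %s' % (word, s[:-len(word) - 2])
assert s == 'the beatles'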
# Perform a couple of basic normalizing substitutions.
for pat, repl in SD_REPLACE:
@ -304,12 +289,11 @@ def string_dist(str1, str2):
return base_dist + penalty
class LazyClassProperty:
class LazyClassProperty(object):
"""A decorator implementing a read-only property that is *lazy* in
the sense that the getter is only invoked once. Subsequent accesses
through *any* instance use the cached result.
"""
def __init__(self, getter):
self.getter = getter
self.computed = False
@ -322,17 +306,17 @@ class LazyClassProperty:
@total_ordering
class Distance:
@six.python_2_unicode_compatible
class Distance(object):
"""Keeps track of multiple distance penalties. Provides a single
weighted distance for all penalties as well as a weighted distance
for each individual penalty.
"""
def __init__(self):
self._penalties = {}
@LazyClassProperty
def _weights(cls): # noqa: N805
def _weights(cls): # noqa
"""A dictionary from keys to floating-point weights.
"""
weights_view = config['match']['distance_weights']
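# A sketch of the weighted-average idea the Distance docstring describes;
# the weights here are hypothetical, and the exact accounting lives in
# parts of the class outside this hunk.
weights = {'artist': 3.0, 'album': 3.0}
penalties = {'artist': [0.5], 'album': [0.0, 1.0]}
dist_max = sum(len(p) * weights[k] for k, p in penalties.items())
dist_raw = sum(sum(p) * weights[k] for k, p in penalties.items())
assert '{0:.2f}'.format(dist_raw / dist_max) == '0.50'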
@ -410,7 +394,7 @@ class Distance:
return other - self.distance
def __str__(self):
return f"{self.distance:.2f}"
return "{0:.2f}".format(self.distance)
# Behave like a dict.
@ -437,7 +421,7 @@ class Distance:
"""
if not isinstance(dist, Distance):
raise ValueError(
'`dist` must be a Distance object, not {}'.format(type(dist))
u'`dist` must be a Distance object, not {0}'.format(type(dist))
)
for key, penalties in dist._penalties.items():
self._penalties.setdefault(key, []).extend(penalties)
@ -449,7 +433,7 @@ class Distance:
be a compiled regular expression, in which case it will be
matched against `value2`.
"""
if isinstance(value1, Pattern):
if isinstance(value1, re._pattern_type):
return bool(value1.match(value2))
return value1 == value2
@ -461,7 +445,7 @@ class Distance:
"""
if not 0.0 <= dist <= 1.0:
raise ValueError(
f'`dist` must be between 0.0 and 1.0, not {dist}'
u'`dist` must be between 0.0 and 1.0, not {0}'.format(dist)
)
self._penalties.setdefault(key, []).append(dist)
@ -557,7 +541,7 @@ def album_for_mbid(release_id):
try:
album = mb.album_for_id(release_id)
if album:
plugins.send('albuminfo_received', info=album)
plugins.send(u'albuminfo_received', info=album)
return album
except mb.MusicBrainzAPIError as exc:
exc.log(log)
@ -570,7 +554,7 @@ def track_for_mbid(recording_id):
try:
track = mb.track_for_id(recording_id)
if track:
plugins.send('trackinfo_received', info=track)
plugins.send(u'trackinfo_received', info=track)
return track
except mb.MusicBrainzAPIError as exc:
exc.log(log)
@ -583,7 +567,7 @@ def albums_for_id(album_id):
yield a
for a in plugins.album_for_id(album_id):
if a:
plugins.send('albuminfo_received', info=a)
plugins.send(u'albuminfo_received', info=a)
yield a
@ -594,43 +578,40 @@ def tracks_for_id(track_id):
yield t
for t in plugins.track_for_id(track_id):
if t:
plugins.send('trackinfo_received', info=t)
plugins.send(u'trackinfo_received', info=t)
yield t
@plugins.notify_info_yielded('albuminfo_received')
def album_candidates(items, artist, album, va_likely, extra_tags):
@plugins.notify_info_yielded(u'albuminfo_received')
def album_candidates(items, artist, album, va_likely):
"""Search for album matches. ``items`` is a list of Item objects
that make up the album. ``artist`` and ``album`` are the respective
names (strings), which may be derived from the item list or may be
entered by the user. ``va_likely`` is a boolean indicating whether
the album is likely to be a "various artists" release. ``extra_tags``
is an optional dictionary of additional tags used to further
constrain the search.
the album is likely to be a "various artists" release.
"""
# Base candidates if we have album and artist to match.
if artist and album:
try:
yield from mb.match_album(artist, album, len(items),
extra_tags)
for candidate in mb.match_album(artist, album, len(items)):
yield candidate
except mb.MusicBrainzAPIError as exc:
exc.log(log)
# Also add VA matches from MusicBrainz where appropriate.
if va_likely and album:
try:
yield from mb.match_album(None, album, len(items),
extra_tags)
for candidate in mb.match_album(None, album, len(items)):
yield candidate
except mb.MusicBrainzAPIError as exc:
exc.log(log)
# Candidates from plugins.
yield from plugins.candidates(items, artist, album, va_likely,
extra_tags)
for candidate in plugins.candidates(items, artist, album, va_likely):
yield candidate
@plugins.notify_info_yielded('trackinfo_received')
@plugins.notify_info_yielded(u'trackinfo_received')
def item_candidates(item, artist, title):
"""Search for item matches. ``item`` is the Item to be matched.
``artist`` and ``title`` are strings and either reflect the item or
@ -640,9 +621,11 @@ def item_candidates(item, artist, title):
# MusicBrainz candidates.
if artist and title:
try:
yield from mb.match_track(artist, title)
for candidate in mb.match_track(artist, title):
yield candidate
except mb.MusicBrainzAPIError as exc:
exc.log(log)
# Plugin candidates.
yield from plugins.item_candidates(item, artist, title)
for candidate in plugins.item_candidates(item, artist, title):
yield candidate

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -16,6 +17,7 @@
releases and tracks.
"""
from __future__ import division, absolute_import, print_function
import datetime
import re
@ -33,7 +35,7 @@ from beets.util.enumeration import OrderedEnum
# album level to determine whether a given release is likely a VA
# release and also on the track level to to remove the penalty for
# differing artists.
VA_ARTISTS = ('', 'various artists', 'various', 'va', 'unknown')
VA_ARTISTS = (u'', u'various artists', u'various', u'va', u'unknown')
# Global logger.
log = logging.getLogger('beets')
@ -106,7 +108,7 @@ def assign_items(items, tracks):
log.debug('...done.')
# Produce the output matching.
mapping = {items[i]: tracks[j] for (i, j) in matching}
mapping = dict((items[i], tracks[j]) for (i, j) in matching)
extra_items = list(set(items) - set(mapping.keys()))
extra_items.sort(key=lambda i: (i.disc, i.track, i.title))
extra_tracks = list(set(tracks) - set(mapping.values()))
@ -274,16 +276,16 @@ def match_by_id(items):
try:
first = next(albumids)
except StopIteration:
log.debug('No album ID found.')
log.debug(u'No album ID found.')
return None
# Is there a consensus on the MB album ID?
for other in albumids:
if other != first:
log.debug('No album ID consensus.')
log.debug(u'No album ID consensus.')
return None
# If all album IDs are equal, look up the album.
log.debug('Searching for discovered album ID: {0}', first)
log.debug(u'Searching for discovered album ID: {0}', first)
return hooks.album_for_mbid(first)
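# A sketch of the consensus check above, run on a plain iterator:
albumids = iter(['X', 'X', 'X'])
first = next(albumids)
assert all(other == first for other in albumids)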
@ -349,23 +351,23 @@ def _add_candidate(items, results, info):
checking the track count, ordering the items, checking for
duplicates, and calculating the distance.
"""
log.debug('Candidate: {0} - {1} ({2})',
log.debug(u'Candidate: {0} - {1} ({2})',
info.artist, info.album, info.album_id)
# Discard albums with zero tracks.
if not info.tracks:
log.debug('No tracks.')
log.debug(u'No tracks.')
return
# Don't duplicate.
if info.album_id in results:
log.debug('Duplicate.')
log.debug(u'Duplicate.')
return
# Discard matches without required tags.
for req_tag in config['match']['required'].as_str_seq():
if getattr(info, req_tag) is None:
log.debug('Ignored. Missing required tag: {0}', req_tag)
log.debug(u'Ignored. Missing required tag: {0}', req_tag)
return
# Find mapping between the items and the track info.
@ -378,10 +380,10 @@ def _add_candidate(items, results, info):
penalties = [key for key, _ in dist]
for penalty in config['match']['ignored'].as_str_seq():
if penalty in penalties:
log.debug('Ignored. Penalty: {0}', penalty)
log.debug(u'Ignored. Penalty: {0}', penalty)
return
log.debug('Success. Distance: {0}', dist)
log.debug(u'Success. Distance: {0}', dist)
results[info.album_id] = hooks.AlbumMatch(dist, info, mapping,
extra_items, extra_tracks)
@ -409,7 +411,7 @@ def tag_album(items, search_artist=None, search_album=None,
likelies, consensus = current_metadata(items)
cur_artist = likelies['artist']
cur_album = likelies['album']
log.debug('Tagging {0} - {1}', cur_artist, cur_album)
log.debug(u'Tagging {0} - {1}', cur_artist, cur_album)
# The output result (distance, AlbumInfo) tuples (keyed by MB album
# ID).
@ -418,7 +420,7 @@ def tag_album(items, search_artist=None, search_album=None,
# Search by explicit ID.
if search_ids:
for search_id in search_ids:
log.debug('Searching for album ID: {0}', search_id)
log.debug(u'Searching for album ID: {0}', search_id)
for id_candidate in hooks.albums_for_id(search_id):
_add_candidate(items, candidates, id_candidate)
@ -429,13 +431,13 @@ def tag_album(items, search_artist=None, search_album=None,
if id_info:
_add_candidate(items, candidates, id_info)
rec = _recommendation(list(candidates.values()))
log.debug('Album ID match recommendation is {0}', rec)
log.debug(u'Album ID match recommendation is {0}', rec)
if candidates and not config['import']['timid']:
# If we have a very good MBID match, return immediately.
# Otherwise, this match will compete against metadata-based
# matches.
if rec == Recommendation.strong:
log.debug('ID match.')
log.debug(u'ID match.')
return cur_artist, cur_album, \
Proposal(list(candidates.values()), rec)
@ -443,29 +445,22 @@ def tag_album(items, search_artist=None, search_album=None,
if not (search_artist and search_album):
# No explicit search terms -- use current metadata.
search_artist, search_album = cur_artist, cur_album
log.debug('Search terms: {0} - {1}', search_artist, search_album)
extra_tags = None
if config['musicbrainz']['extra_tags']:
tag_list = config['musicbrainz']['extra_tags'].get()
extra_tags = {k: v for (k, v) in likelies.items() if k in tag_list}
log.debug('Additional search terms: {0}', extra_tags)
log.debug(u'Search terms: {0} - {1}', search_artist, search_album)
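# A sketch of the extra-tags filter the master side introduces above;
# the values here are illustrative.
likelies = {'year': 1999, 'label': 'EMI', 'artist': 'Someone'}
tag_list = ['year', 'label']
extra_tags = {k: v for (k, v) in likelies.items() if k in tag_list}
assert extra_tags == {'year': 1999, 'label': 'EMI'}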
# Is this album likely to be a "various artist" release?
va_likely = ((not consensus['artist']) or
(search_artist.lower() in VA_ARTISTS) or
any(item.comp for item in items))
log.debug('Album might be VA: {0}', va_likely)
log.debug(u'Album might be VA: {0}', va_likely)
# Get the results from the data sources.
for matched_candidate in hooks.album_candidates(items,
search_artist,
search_album,
va_likely,
extra_tags):
va_likely):
_add_candidate(items, candidates, matched_candidate)
log.debug('Evaluating {0} candidates.', len(candidates))
log.debug(u'Evaluating {0} candidates.', len(candidates))
# Sort and get the recommendation.
candidates = _sort_candidates(candidates.values())
rec = _recommendation(candidates)
@ -490,7 +485,7 @@ def tag_item(item, search_artist=None, search_title=None,
trackids = search_ids or [t for t in [item.mb_trackid] if t]
if trackids:
for trackid in trackids:
log.debug('Searching for track ID: {0}', trackid)
log.debug(u'Searching for track ID: {0}', trackid)
for track_info in hooks.tracks_for_id(trackid):
dist = track_distance(item, track_info, incl_artist=True)
candidates[track_info.track_id] = \
@ -499,7 +494,7 @@ def tag_item(item, search_artist=None, search_title=None,
rec = _recommendation(_sort_candidates(candidates.values()))
if rec == Recommendation.strong and \
not config['import']['timid']:
log.debug('Track ID match.')
log.debug(u'Track ID match.')
return Proposal(_sort_candidates(candidates.values()), rec)
# If we're searching by ID, don't proceed.
@ -512,7 +507,7 @@ def tag_item(item, search_artist=None, search_title=None,
# Search terms.
if not (search_artist and search_title):
search_artist, search_title = item.artist, item.title
log.debug('Item search terms: {0} - {1}', search_artist, search_title)
log.debug(u'Item search terms: {0} - {1}', search_artist, search_title)
# Get and evaluate candidate metadata.
for track_info in hooks.item_candidates(item, search_artist, search_title):
@ -520,7 +515,7 @@ def tag_item(item, search_artist=None, search_title=None,
candidates[track_info.track_id] = hooks.TrackMatch(dist, track_info)
# Sort by distance and return with recommendation.
log.debug('Found {0} candidates.', len(candidates))
log.debug(u'Found {0} candidates.', len(candidates))
candidates = _sort_candidates(candidates.values())
rec = _recommendation(candidates)
return Proposal(candidates, rec)


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -14,72 +15,57 @@
"""Searches for albums in the MusicBrainz database.
"""
from __future__ import division, absolute_import, print_function
import musicbrainzngs
import re
import traceback
from six.moves.urllib.parse import urljoin
from beets import logging
from beets import plugins
import beets.autotag.hooks
import beets
from beets import util
from beets import config
from collections import Counter
from urllib.parse import urljoin
import six
VARIOUS_ARTISTS_ID = '89ad4ac3-39f7-470e-963a-56509c546377'
BASE_URL = 'https://musicbrainz.org/'
if util.SNI_SUPPORTED:
BASE_URL = 'https://musicbrainz.org/'
else:
BASE_URL = 'http://musicbrainz.org/'
SKIPPED_TRACKS = ['[data track]']
FIELDS_TO_MB_KEYS = {
'catalognum': 'catno',
'country': 'country',
'label': 'label',
'media': 'format',
'year': 'date',
}
musicbrainzngs.set_useragent('beets', beets.__version__,
'https://beets.io/')
'http://beets.io/')
class MusicBrainzAPIError(util.HumanReadableException):
"""An error while talking to MusicBrainz. The `query` field is the
parameter to the action and may have any type.
"""
def __init__(self, reason, verb, query, tb=None):
self.query = query
if isinstance(reason, musicbrainzngs.WebServiceError):
reason = 'MusicBrainz not reachable'
super().__init__(reason, verb, tb)
reason = u'MusicBrainz not reachable'
super(MusicBrainzAPIError, self).__init__(reason, verb, tb)
def get_message(self):
return '{} in {} with query {}'.format(
return u'{0} in {1} with query {2}'.format(
self._reasonstr(), self.verb, repr(self.query)
)
log = logging.getLogger('beets')
RELEASE_INCLUDES = ['artists', 'media', 'recordings', 'release-groups',
'labels', 'artist-credits', 'aliases',
'recording-level-rels', 'work-rels',
'work-level-rels', 'artist-rels', 'isrcs']
BROWSE_INCLUDES = ['artist-credits', 'work-rels',
'artist-rels', 'recording-rels', 'release-rels']
if "work-level-rels" in musicbrainzngs.VALID_BROWSE_INCLUDES['recording']:
BROWSE_INCLUDES.append("work-level-rels")
BROWSE_CHUNKSIZE = 100
BROWSE_MAXTRACKS = 500
TRACK_INCLUDES = ['artists', 'aliases', 'isrcs']
'work-level-rels', 'artist-rels']
TRACK_INCLUDES = ['artists', 'aliases']
if 'work-level-rels' in musicbrainzngs.VALID_INCLUDES['recording']:
TRACK_INCLUDES += ['work-level-rels', 'artist-rels']
if 'genres' in musicbrainzngs.VALID_INCLUDES['recording']:
RELEASE_INCLUDES += ['genres']
def track_url(trackid):
@ -95,11 +81,7 @@ def configure():
from the beets configuration. This should be called at startup.
"""
hostname = config['musicbrainz']['host'].as_str()
https = config['musicbrainz']['https'].get(bool)
# Only call set_hostname when a custom server is configured. Since
# musicbrainz-ngs connects to musicbrainz.org with HTTPS by default
if hostname != "musicbrainz.org":
musicbrainzngs.set_hostname(hostname, https)
musicbrainzngs.set_hostname(hostname)
musicbrainzngs.set_rate_limit(
config['musicbrainz']['ratelimit_interval'].as_number(),
config['musicbrainz']['ratelimit'].get(int),
@ -156,7 +138,7 @@ def _flatten_artist_credit(credit):
artist_sort_parts = []
artist_credit_parts = []
for el in credit:
if isinstance(el, str):
if isinstance(el, six.string_types):
# Join phrase.
artist_parts.append(el)
artist_credit_parts.append(el)
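# A standalone sketch (not part of the diff) of what the loop above
# produces: MusicBrainz artist credits alternate artist dicts with
# join-phrase strings, and flattening simply concatenates them in
# order. The sample credit below is hypothetical.
def flatten_credit(credit):
    parts = []
    for el in credit:
        if isinstance(el, str):
            parts.append(el)  # Join phrase, e.g. ' feat. '.
        else:
            parts.append(el['artist']['name'])
    return ''.join(parts)

credit = [{'artist': {'name': 'Artist A'}}, ' feat. ',
          {'artist': {'name': 'Artist B'}}]
assert flatten_credit(credit) == 'Artist A feat. Artist B'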
@ -203,13 +185,13 @@ def track_info(recording, index=None, medium=None, medium_index=None,
the number of tracks on the medium. Each number is a 1-based index.
"""
info = beets.autotag.hooks.TrackInfo(
title=recording['title'],
track_id=recording['id'],
recording['title'],
recording['id'],
index=index,
medium=medium,
medium_index=medium_index,
medium_total=medium_total,
data_source='MusicBrainz',
data_source=u'MusicBrainz',
data_url=track_url(recording['id']),
)
@ -225,22 +207,12 @@ def track_info(recording, index=None, medium=None, medium_index=None,
if recording.get('length'):
info.length = int(recording['length']) / (1000.0)
info.trackdisambig = recording.get('disambiguation')
if recording.get('isrc-list'):
info.isrc = ';'.join(recording['isrc-list'])
lyricist = []
composer = []
composer_sort = []
for work_relation in recording.get('work-relation-list', ()):
if work_relation['type'] != 'performance':
continue
info.work = work_relation['work']['title']
info.mb_workid = work_relation['work']['id']
if 'disambiguation' in work_relation['work']:
info.work_disambig = work_relation['work']['disambiguation']
for artist_relation in work_relation['work'].get(
'artist-relation-list', ()):
if 'type' in artist_relation:
@ -252,10 +224,10 @@ def track_info(recording, index=None, medium=None, medium_index=None,
composer_sort.append(
artist_relation['artist']['sort-name'])
if lyricist:
info.lyricist = ', '.join(lyricist)
info.lyricist = u', '.join(lyricist)
if composer:
info.composer = ', '.join(composer)
info.composer_sort = ', '.join(composer_sort)
info.composer = u', '.join(composer)
info.composer_sort = u', '.join(composer_sort)
arranger = []
for artist_relation in recording.get('artist-relation-list', ()):
@ -264,12 +236,7 @@ def track_info(recording, index=None, medium=None, medium_index=None,
if type == 'arranger':
arranger.append(artist_relation['artist']['name'])
if arranger:
info.arranger = ', '.join(arranger)
# Supplementary fields provided by plugins
extra_trackdatas = plugins.send('mb_track_extract', data=recording)
for extra_trackdata in extra_trackdatas:
info.update(extra_trackdata)
info.arranger = u', '.join(arranger)
info.decode()
return info
@ -303,26 +270,6 @@ def album_info(release):
artist_name, artist_sort_name, artist_credit_name = \
_flatten_artist_credit(release['artist-credit'])
ntracks = sum(len(m['track-list']) for m in release['medium-list'])
# The MusicBrainz API omits 'artist-relation-list' and 'work-relation-list'
# when the release has more than 500 tracks. So we use browse_recordings
# on chunks of tracks to recover the same information in this case.
if ntracks > BROWSE_MAXTRACKS:
log.debug('Album {} has too many tracks', release['id'])
recording_list = []
for i in range(0, ntracks, BROWSE_CHUNKSIZE):
log.debug('Retrieving tracks starting at {}', i)
recording_list.extend(musicbrainzngs.browse_recordings(
release=release['id'], limit=BROWSE_CHUNKSIZE,
includes=BROWSE_INCLUDES,
offset=i)['recording-list'])
track_map = {r['id']: r for r in recording_list}
for medium in release['medium-list']:
for recording in medium['track-list']:
recording_info = track_map[recording['recording']['id']]
recording['recording'] = recording_info
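# A minimal sketch of the chunked-browse strategy above, assuming only
# the browse_recordings arguments already used in this diff and the
# BROWSE_* constants defined earlier in the file; release_id and
# ntracks are placeholders.
import musicbrainzngs

def browse_all_recordings(release_id, ntracks, chunk=BROWSE_CHUNKSIZE):
    recordings = []
    for offset in range(0, ntracks, chunk):
        page = musicbrainzngs.browse_recordings(
            release=release_id, limit=chunk,
            includes=BROWSE_INCLUDES, offset=offset)
        recordings.extend(page['recording-list'])
    return recordings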
# Basic info.
track_infos = []
index = 0
@ -334,8 +281,7 @@ def album_info(release):
continue
all_tracks = medium['track-list']
if ('data-track-list' in medium
and not config['match']['ignore_data_tracks']):
if 'data-track-list' in medium:
all_tracks += medium['data-track-list']
track_count = len(all_tracks)
@ -381,15 +327,15 @@ def album_info(release):
track_infos.append(ti)
info = beets.autotag.hooks.AlbumInfo(
album=release['title'],
album_id=release['id'],
artist=artist_name,
artist_id=release['artist-credit'][0]['artist']['id'],
tracks=track_infos,
release['title'],
release['id'],
artist_name,
release['artist-credit'][0]['artist']['id'],
track_infos,
mediums=len(release['medium-list']),
artist_sort=artist_sort_name,
artist_credit=artist_credit_name,
data_source='MusicBrainz',
data_source=u'MusicBrainz',
data_url=album_url(release['id']),
)
info.va = info.artist_id == VARIOUS_ARTISTS_ID
@ -399,12 +345,13 @@ def album_info(release):
info.releasegroup_id = release['release-group']['id']
info.albumstatus = release.get('status')
# Get the disambiguation strings at the release and release group level.
# Build up the disambiguation string from the release group and release.
disambig = []
if release['release-group'].get('disambiguation'):
info.releasegroupdisambig = \
release['release-group'].get('disambiguation')
disambig.append(release['release-group'].get('disambiguation'))
if release.get('disambiguation'):
info.albumdisambig = release.get('disambiguation')
disambig.append(release.get('disambiguation'))
info.albumdisambig = u', '.join(disambig)
# Get the "classic" Release type. This data comes from a legacy API
# feature before MusicBrainz supported multiple release types.
@ -413,17 +360,18 @@ def album_info(release):
if reltype:
info.albumtype = reltype.lower()
# Set the new-style "primary" and "secondary" release types.
albumtypes = []
# Log the new-style "primary" and "secondary" release types.
# Eventually, we'd like to actually store this data, but we just log
# it for now to help understand the differences.
if 'primary-type' in release['release-group']:
rel_primarytype = release['release-group']['primary-type']
if rel_primarytype:
albumtypes.append(rel_primarytype.lower())
log.debug('primary MB release type: ' + rel_primarytype.lower())
if 'secondary-type-list' in release['release-group']:
if release['release-group']['secondary-type-list']:
for sec_type in release['release-group']['secondary-type-list']:
albumtypes.append(sec_type.lower())
info.albumtypes = '; '.join(albumtypes)
log.debug('secondary MB release type(s): ' + ', '.join(
[secondarytype.lower() for secondarytype in
release['release-group']['secondary-type-list']]))
# Release events.
info.country, release_date = _preferred_release_event(release)
@ -454,33 +402,17 @@ def album_info(release):
first_medium = release['medium-list'][0]
info.media = first_medium.get('format')
if config['musicbrainz']['genres']:
sources = [
release['release-group'].get('genre-list', []),
release.get('genre-list', []),
]
genres = Counter()
for source in sources:
for genreitem in source:
genres[genreitem['name']] += int(genreitem['count'])
info.genre = '; '.join(g[0] for g in sorted(genres.items(),
key=lambda g: -g[1]))
extra_albumdatas = plugins.send('mb_album_extract', data=release)
for extra_albumdata in extra_albumdatas:
info.update(extra_albumdata)
info.decode()
return info
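# A worked example (hypothetical data) of the genre aggregation a few
# lines up: counts from the release group and the release are summed
# per name, then the names are joined most-common-first.
from collections import Counter

sources = [
    [{'name': 'rock', 'count': '3'}],                                 # release group
    [{'name': 'pop', 'count': '1'}, {'name': 'rock', 'count': '2'}],  # release
]
genres = Counter()
for source in sources:
    for genreitem in source:
        genres[genreitem['name']] += int(genreitem['count'])
genre = '; '.join(g[0] for g in sorted(genres.items(), key=lambda g: -g[1]))
assert genre == 'rock; pop'  # rock: 5, pop: 1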
def match_album(artist, album, tracks=None, extra_tags=None):
def match_album(artist, album, tracks=None):
"""Searches for a single album ("release" in MusicBrainz parlance)
and returns an iterator over AlbumInfo objects. May raise a
MusicBrainzAPIError.
The query consists of an artist name, an album name, and,
optionally, a number of tracks on the album and any other extra tags.
optionally, a number of tracks on the album.
"""
# Build search criteria.
criteria = {'release': album.lower().strip()}
@ -490,24 +422,14 @@ def match_album(artist, album, tracks=None, extra_tags=None):
# Various Artists search.
criteria['arid'] = VARIOUS_ARTISTS_ID
if tracks is not None:
criteria['tracks'] = str(tracks)
# Additional search cues from existing metadata.
if extra_tags:
for tag in extra_tags:
key = FIELDS_TO_MB_KEYS[tag]
value = str(extra_tags.get(tag, '')).lower().strip()
if key == 'catno':
value = value.replace(' ', '')
if value:
criteria[key] = value
criteria['tracks'] = six.text_type(tracks)
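# A standalone sketch of the extra-tags mapping above (hypothetical
# input): each field translates to a MusicBrainz search key through
# FIELDS_TO_MB_KEYS defined earlier in the file, catalog numbers lose
# their spaces, and empty values are dropped.
extra_tags = {'catalognum': 'ABC 123', 'year': 1999, 'label': ''}
criteria = {}
for tag in extra_tags:
    key = FIELDS_TO_MB_KEYS[tag]
    value = str(extra_tags.get(tag, '')).lower().strip()
    if key == 'catno':
        value = value.replace(' ', '')
    if value:
        criteria[key] = value
assert criteria == {'catno': 'abc123', 'date': '1999'}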
# Abort if we have no search terms.
if not any(criteria.values()):
return
try:
log.debug('Searching for MusicBrainz releases with: {!r}', criteria)
log.debug(u'Searching for MusicBrainz releases with: {!r}', criteria)
res = musicbrainzngs.search_releases(
limit=config['musicbrainz']['searchlimit'].get(int), **criteria)
except musicbrainzngs.MusicBrainzError as exc:
@ -548,7 +470,7 @@ def _parse_id(s):
no ID can be found, return None.
"""
# Find the first thing that looks like a UUID/MBID.
match = re.search('[a-f0-9]{8}(-[a-f0-9]{4}){3}-[a-f0-9]{12}', s)
match = re.search(u'[a-f0-9]{8}(-[a-f0-9]{4}){3}-[a-f0-9]{12}', s)
if match:
return match.group()
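# Example of the extraction above: because the regex finds the first
# UUID-shaped substring, a full MusicBrainz URL works as input too.
import re

url = 'https://musicbrainz.org/release/89ad4ac3-39f7-470e-963a-56509c546377'
match = re.search('[a-f0-9]{8}(-[a-f0-9]{4}){3}-[a-f0-9]{12}', url)
assert match.group() == '89ad4ac3-39f7-470e-963a-56509c546377'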
@ -558,19 +480,19 @@ def album_for_id(releaseid):
object or None if the album is not found. May raise a
MusicBrainzAPIError.
"""
log.debug('Requesting MusicBrainz release {}', releaseid)
log.debug(u'Requesting MusicBrainz release {}', releaseid)
albumid = _parse_id(releaseid)
if not albumid:
log.debug('Invalid MBID ({0}).', releaseid)
log.debug(u'Invalid MBID ({0}).', releaseid)
return
try:
res = musicbrainzngs.get_release_by_id(albumid,
RELEASE_INCLUDES)
except musicbrainzngs.ResponseError:
log.debug('Album ID match failed.')
log.debug(u'Album ID match failed.')
return None
except musicbrainzngs.MusicBrainzError as exc:
raise MusicBrainzAPIError(exc, 'get release by ID', albumid,
raise MusicBrainzAPIError(exc, u'get release by ID', albumid,
traceback.format_exc())
return album_info(res['release'])
@ -581,14 +503,14 @@ def track_for_id(releaseid):
"""
trackid = _parse_id(releaseid)
if not trackid:
log.debug('Invalid MBID ({0}).', releaseid)
log.debug(u'Invalid MBID ({0}).', releaseid)
return
try:
res = musicbrainzngs.get_recording_by_id(trackid, TRACK_INCLUDES)
except musicbrainzngs.ResponseError:
log.debug('Track ID match failed.')
log.debug(u'Track ID match failed.')
return None
except musicbrainzngs.MusicBrainzError as exc:
raise MusicBrainzAPIError(exc, 'get recording by ID', trackid,
raise MusicBrainzAPIError(exc, u'get recording by ID', trackid,
traceback.format_exc())
return track_info(res['recording'])


@ -7,7 +7,6 @@ import:
move: no
link: no
hardlink: no
reflink: no
delete: no
resume: ask
incremental: no
@ -45,20 +44,10 @@ replace:
'^\s+': ''
'^-': _
path_sep_replace: _
drive_sep_replace: _
asciify_paths: false
art_filename: cover
max_filename_length: 0
aunique:
keys: albumartist album
disambiguators: albumtype year label catalognum albumdisambig releasegroupdisambig
bracket: '[]'
overwrite_null:
album: []
track: []
plugins: []
pluginpath: []
threaded: yes
@ -102,12 +91,9 @@ statefile: state.pickle
musicbrainz:
host: musicbrainz.org
https: no
ratelimit: 1
ratelimit_interval: 1.0
searchlimit: 5
extra_tags: []
genres: no
match:
strong_rec_thresh: 0.04
@ -143,7 +129,6 @@ match:
ignored: []
required: []
ignored_media: []
ignore_data_tracks: yes
ignore_video_tracks: yes
track_length_grace: 10
track_length_max: 30


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -15,6 +16,7 @@
"""DBCore is an abstract database package that forms the basis for beets'
Library.
"""
from __future__ import division, absolute_import, print_function
from .db import Model, Database
from .query import Query, FieldQuery, MatchQuery, AndQuery, OrQuery


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -14,21 +15,22 @@
"""The central Model and Database constructs for DBCore.
"""
from __future__ import division, absolute_import, print_function
import time
import os
import re
from collections import defaultdict
import threading
import sqlite3
import contextlib
import collections
import beets
from beets.util import functemplate
from beets.util.functemplate import Template
from beets.util import py3_path
from beets.dbcore import types
from .query import MatchQuery, NullSort, TrueQuery
from collections.abc import Mapping
import six
class DBAccessError(Exception):
@ -40,30 +42,20 @@ class DBAccessError(Exception):
"""
class FormattedMapping(Mapping):
class FormattedMapping(collections.Mapping):
"""A `dict`-like formatted view of a model.
The accessor `mapping[key]` returns the formatted version of
`model[key]` as a unicode string.
The `included_keys` parameter allows filtering the fields that are
returned. By default all fields are returned. Limiting to specific keys can
avoid expensive per-item database queries.
If `for_path` is true, all path separators in the formatted values
are replaced.
"""
ALL_KEYS = '*'
def __init__(self, model, included_keys=ALL_KEYS, for_path=False):
def __init__(self, model, for_path=False):
self.for_path = for_path
self.model = model
if included_keys == self.ALL_KEYS:
# Performance note: this triggers a database query.
self.model_keys = self.model.keys(True)
else:
self.model_keys = included_keys
self.model_keys = model.keys(True)
def __getitem__(self, key):
if key in self.model_keys:
@ -80,7 +72,7 @@ class FormattedMapping(Mapping):
def get(self, key, default=None):
if default is None:
default = self.model._type(key).format(None)
return super().get(key, default)
return super(FormattedMapping, self).get(key, default)
def _get_formatted(self, model, key):
value = model._type(key).format(model.get(key))
@ -89,11 +81,6 @@ class FormattedMapping(Mapping):
if self.for_path:
sep_repl = beets.config['path_sep_replace'].as_str()
sep_drive = beets.config['drive_sep_replace'].as_str()
if re.match(r'^\w:', value):
value = re.sub(r'(?<=^\w):', sep_drive, value)
for sep in (os.path.sep, os.path.altsep):
if sep:
value = value.replace(sep, sep_repl)
@ -101,105 +88,11 @@ class FormattedMapping(Mapping):
return value
class LazyConvertDict:
"""Lazily convert types for attributes fetched from the database
"""
def __init__(self, model_cls):
"""Initialize the object empty
"""
self.data = {}
self.model_cls = model_cls
self._converted = {}
def init(self, data):
"""Set the base data that should be lazily converted
"""
self.data = data
def _convert(self, key, value):
"""Convert the attribute type according the the SQL type
"""
return self.model_cls._type(key).from_sql(value)
def __setitem__(self, key, value):
"""Set an attribute value, assume it's already converted
"""
self._converted[key] = value
def __getitem__(self, key):
"""Get an attribute value, converting the type on demand
if needed
"""
if key in self._converted:
return self._converted[key]
elif key in self.data:
value = self._convert(key, self.data[key])
self._converted[key] = value
return value
def __delitem__(self, key):
"""Delete both converted and base data
"""
if key in self._converted:
del self._converted[key]
if key in self.data:
del self.data[key]
def keys(self):
"""Get a list of available field names for this object.
"""
return list(self._converted.keys()) + list(self.data.keys())
def copy(self):
"""Create a copy of the object.
"""
new = self.__class__(self.model_cls)
new.data = self.data.copy()
new._converted = self._converted.copy()
return new
# Act like a dictionary.
def update(self, values):
"""Assign all values in the given dict.
"""
for key, value in values.items():
self[key] = value
def items(self):
"""Iterate over (key, value) pairs that this object contains.
Computed fields are not included.
"""
for key in self:
yield key, self[key]
def get(self, key, default=None):
"""Get the value for a given key or `default` if it does not
exist.
"""
if key in self:
return self[key]
else:
return default
def __contains__(self, key):
"""Determine whether `key` is an attribute on this object.
"""
return key in self.keys()
def __iter__(self):
"""Iterate over the available field names (excluding computed
fields).
"""
return iter(self.keys())
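# A minimal illustration (hypothetical model class) of the laziness
# above: raw SQL values sit untouched in `data` until a key is first
# read, at which point the converted value is cached in `_converted`.
class FakeModel:
    @classmethod
    def _type(cls, key):
        class IntType:
            @staticmethod
            def from_sql(value):
                return int(value)
        return IntType

d = LazyConvertDict(FakeModel)
d.init({'year': '1999'})   # Stored as-is; nothing converted yet.
assert d['year'] == 1999   # Converted (and cached) on first access.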
# Abstract base for model classes.
class Model:
class Model(object):
"""An abstract object representing an object in the database. Model
objects act like dictionaries (i.e., they allow subscript access like
objects act like dictionaries (i.e., the allow subscript access like
``obj['field']``). The same field set is available via attribute
access as a shortcut (i.e., ``obj.field``). Three kinds of attributes are
available:
@ -250,22 +143,12 @@ class Model:
are subclasses of `Sort`.
"""
_queries = {}
"""Named queries that use a field-like `name:value` syntax but which
do not relate to any specific field.
"""
_always_dirty = False
"""By default, fields only become "dirty" when their value actually
changes. Enabling this flag marks fields as dirty even when the new
value is the same as the old value (e.g., `o.f = o.f`).
"""
_revision = -1
"""A revision number from when the model was loaded from or written
to the database.
"""
@classmethod
def _getters(cls):
"""Return a mapping from field names to getter functions.
@ -289,8 +172,8 @@ class Model:
"""
self._db = db
self._dirty = set()
self._values_fixed = LazyConvertDict(self)
self._values_flex = LazyConvertDict(self)
self._values_fixed = {}
self._values_flex = {}
# Initial contents.
self.update(values)
@ -304,25 +187,23 @@ class Model:
ordinary construction are bypassed.
"""
obj = cls(db)
obj._values_fixed.init(fixed_values)
obj._values_flex.init(flex_values)
for key, value in fixed_values.items():
obj._values_fixed[key] = cls._type(key).from_sql(value)
for key, value in flex_values.items():
obj._values_flex[key] = cls._type(key).from_sql(value)
return obj
def __repr__(self):
return '{}({})'.format(
return '{0}({1})'.format(
type(self).__name__,
', '.join(f'{k}={v!r}' for k, v in dict(self).items()),
', '.join('{0}={1!r}'.format(k, v) for k, v in dict(self).items()),
)
def clear_dirty(self):
"""Mark all fields as *clean* (i.e., not needing to be stored to
the database). Also update the revision.
the database).
"""
self._dirty = set()
if self._db:
self._revision = self._db.revision
def _check_db(self, need_id=True):
"""Ensure that this object is associated with a database row: it
@ -331,10 +212,10 @@ class Model:
"""
if not self._db:
raise ValueError(
'{} has no database'.format(type(self).__name__)
u'{0} has no database'.format(type(self).__name__)
)
if need_id and not self.id:
raise ValueError('{} has no id'.format(type(self).__name__))
raise ValueError(u'{0} has no id'.format(type(self).__name__))
def copy(self):
"""Create a copy of the model object.
@ -362,32 +243,19 @@ class Model:
"""
return cls._fields.get(key) or cls._types.get(key) or types.DEFAULT
def _get(self, key, default=None, raise_=False):
"""Get the value for a field, or `default`. Alternatively,
raise a KeyError if the field is not available.
def __getitem__(self, key):
"""Get the value for a field. Raise a KeyError if the field is
not available.
"""
getters = self._getters()
if key in getters: # Computed.
return getters[key](self)
elif key in self._fields: # Fixed.
if key in self._values_fixed:
return self._values_fixed[key]
else:
return self._type(key).null
return self._values_fixed.get(key, self._type(key).null)
elif key in self._values_flex: # Flexible.
return self._values_flex[key]
elif raise_:
raise KeyError(key)
else:
return default
get = _get
def __getitem__(self, key):
"""Get the value for a field. Raise a KeyError if the field is
not available.
"""
return self._get(key, raise_=True)
raise KeyError(key)
def _setitem(self, key, value):
"""Assign the value for a field, return whether new and old value
@ -422,12 +290,12 @@ class Model:
if key in self._values_flex: # Flexible.
del self._values_flex[key]
self._dirty.add(key) # Mark for dropping on store.
elif key in self._fields: # Fixed
setattr(self, key, self._type(key).null)
elif key in self._getters(): # Computed.
raise KeyError(f'computed field {key} cannot be deleted')
raise KeyError(u'computed field {0} cannot be deleted'.format(key))
elif key in self._fields: # Fixed.
raise KeyError(u'fixed field {0} cannot be deleted'.format(key))
else:
raise KeyError(f'no such field {key}')
raise KeyError(u'no such field {0}'.format(key))
def keys(self, computed=False):
"""Get a list of available field names for this object. The
@ -462,10 +330,19 @@ class Model:
for key in self:
yield key, self[key]
def get(self, key, default=None):
"""Get the value for a given key or `default` if it does not
exist.
"""
if key in self:
return self[key]
else:
return default
def __contains__(self, key):
"""Determine whether `key` is an attribute on this object.
"""
return key in self.keys(computed=True)
return key in self.keys(True)
def __iter__(self):
"""Iterate over the available field names (excluding computed
@ -477,22 +354,22 @@ class Model:
def __getattr__(self, key):
if key.startswith('_'):
raise AttributeError(f'model has no attribute {key!r}')
raise AttributeError(u'model has no attribute {0!r}'.format(key))
else:
try:
return self[key]
except KeyError:
raise AttributeError(f'no such field {key!r}')
raise AttributeError(u'no such field {0!r}'.format(key))
def __setattr__(self, key, value):
if key.startswith('_'):
super().__setattr__(key, value)
super(Model, self).__setattr__(key, value)
else:
self[key] = value
def __delattr__(self, key):
if key.startswith('_'):
super().__delattr__(key)
super(Model, self).__delattr__(key)
else:
del self[key]
@ -521,7 +398,7 @@ class Model:
with self._db.transaction() as tx:
# Main table update.
if assignments:
query = 'UPDATE {} SET {} WHERE id=?'.format(
query = 'UPDATE {0} SET {1} WHERE id=?'.format(
self._table, assignments
)
subvars.append(self.id)
@ -532,7 +409,7 @@ class Model:
if key in self._dirty:
self._dirty.remove(key)
tx.mutate(
'INSERT INTO {} '
'INSERT INTO {0} '
'(entity_id, key, value) '
'VALUES (?, ?, ?);'.format(self._flex_table),
(self.id, key, value),
@ -541,7 +418,7 @@ class Model:
# Deleted flexible attributes.
for key in self._dirty:
tx.mutate(
'DELETE FROM {} '
'DELETE FROM {0} '
'WHERE entity_id=? AND key=?'.format(self._flex_table),
(self.id, key)
)
@ -550,18 +427,12 @@ class Model:
def load(self):
"""Refresh the object's metadata from the library database.
The database is only re-queried when a transaction has been
committed since the item was last loaded.
"""
self._check_db()
if not self._dirty and self._db.revision == self._revision:
# Exit early
return
stored_obj = self._db._get(type(self), self.id)
assert stored_obj is not None, f"object {self.id} not in DB"
self._values_fixed = LazyConvertDict(self)
self._values_flex = LazyConvertDict(self)
assert stored_obj is not None, u"object {0} not in DB".format(self.id)
self._values_fixed = {}
self._values_flex = {}
self.update(dict(stored_obj))
self.clear_dirty()
@ -571,11 +442,11 @@ class Model:
self._check_db()
with self._db.transaction() as tx:
tx.mutate(
f'DELETE FROM {self._table} WHERE id=?',
'DELETE FROM {0} WHERE id=?'.format(self._table),
(self.id,)
)
tx.mutate(
f'DELETE FROM {self._flex_table} WHERE entity_id=?',
'DELETE FROM {0} WHERE entity_id=?'.format(self._flex_table),
(self.id,)
)
@ -593,7 +464,7 @@ class Model:
with self._db.transaction() as tx:
new_id = tx.mutate(
f'INSERT INTO {self._table} DEFAULT VALUES'
'INSERT INTO {0} DEFAULT VALUES'.format(self._table)
)
self.id = new_id
self.added = time.time()
@ -608,11 +479,11 @@ class Model:
_formatter = FormattedMapping
def formatted(self, included_keys=_formatter.ALL_KEYS, for_path=False):
def formatted(self, for_path=False):
"""Get a mapping containing all values on this object formatted
as human-readable unicode strings.
"""
return self._formatter(self, included_keys, for_path)
return self._formatter(self, for_path)
def evaluate_template(self, template, for_path=False):
"""Evaluate a template (a string or a `Template` object) using
@ -620,9 +491,9 @@ class Model:
separators will be added to the template.
"""
# Perform substitution.
if isinstance(template, str):
template = functemplate.template(template)
return template.substitute(self.formatted(for_path=for_path),
if isinstance(template, six.string_types):
template = Template(template)
return template.substitute(self.formatted(for_path),
self._template_funcs())
# Parsing.
@ -631,8 +502,8 @@ class Model:
def _parse(cls, key, string):
"""Parse a string as a value for the given key.
"""
if not isinstance(string, str):
raise TypeError("_parse() argument must be a string")
if not isinstance(string, six.string_types):
raise TypeError(u"_parse() argument must be a string")
return cls._type(key).parse(string)
@ -644,13 +515,11 @@ class Model:
# Database controller and supporting interfaces.
class Results:
class Results(object):
"""An item query result set. Iterating over the collection lazily
constructs LibModel objects that reflect database rows.
"""
def __init__(self, model_class, rows, db, flex_rows,
query=None, sort=None):
def __init__(self, model_class, rows, db, query=None, sort=None):
"""Create a result set that will construct objects of type
`model_class`.
@ -670,7 +539,6 @@ class Results:
self.db = db
self.query = query
self.sort = sort
self.flex_rows = flex_rows
# We keep a queue of rows we haven't yet consumed for
# materialization. We preserve the original total number of
@ -692,10 +560,6 @@ class Results:
a `Results` object a second time should be much faster than the
first.
"""
# Index flexible attributes by the item ID, so we have easier access
flex_attrs = self._get_indexed_flex_attrs()
index = 0 # Position in the materialized objects.
while index < len(self._objects) or self._rows:
# Are there previously-materialized objects to produce?
@ -708,7 +572,7 @@ class Results:
else:
while self._rows:
row = self._rows.pop(0)
obj = self._make_model(row, flex_attrs.get(row['id'], {}))
obj = self._make_model(row)
# If there is a slow-query predicate, ensure that the
# object passes it.
if not self.query or self.query.match(obj):
@ -730,24 +594,20 @@ class Results:
# Objects are pre-sorted (i.e., by the database).
return self._get_objects()
def _get_indexed_flex_attrs(self):
""" Index flexible attributes by the entity id they belong to
"""
flex_values = {}
for row in self.flex_rows:
if row['entity_id'] not in flex_values:
flex_values[row['entity_id']] = {}
def _make_model(self, row):
# Get the flexible attributes for the object.
with self.db.transaction() as tx:
flex_rows = tx.query(
'SELECT * FROM {0} WHERE entity_id=?'.format(
self.model_class._flex_table
),
(row['id'],)
)
flex_values[row['entity_id']][row['key']] = row['value']
return flex_values
def _make_model(self, row, flex_values={}):
""" Create a Model object for the given row
"""
cols = dict(row)
values = {k: v for (k, v) in cols.items()
if not k[:4] == 'flex'}
values = dict((k, v) for (k, v) in cols.items()
if not k[:4] == 'flex')
flex_values = dict((row['key'], row['value']) for row in flex_rows)
# Construct the Python object
obj = self.model_class._awaken(self.db, values, flex_values)
@ -796,7 +656,7 @@ class Results:
next(it)
return next(it)
except StopIteration:
raise IndexError(f'result index {n} out of range')
raise IndexError(u'result index {0} out of range'.format(n))
def get(self):
"""Return the first matching object, or None if no objects
@ -809,16 +669,10 @@ class Results:
return None
class Transaction:
class Transaction(object):
"""A context manager for safe, concurrent access to the database.
All SQL commands should be executed through a transaction.
"""
_mutated = False
"""A flag storing whether a mutation has been executed in the
current transaction.
"""
def __init__(self, db):
self.db = db
@ -840,15 +694,12 @@ class Transaction:
entered but not yet exited transaction. If it is the last active
transaction, the database updates are committed.
"""
# Beware of races; currently secured by db._db_lock
self.db.revision += self._mutated
with self.db._tx_stack() as stack:
assert stack.pop() is self
empty = not stack
if empty:
# Ending a "root" transaction. End the SQLite transaction.
self.db._connection().commit()
self._mutated = False
self.db._db_lock.release()
def query(self, statement, subvals=()):
@ -864,6 +715,7 @@ class Transaction:
"""
try:
cursor = self.db._connection().execute(statement, subvals)
return cursor.lastrowid
except sqlite3.OperationalError as e:
# In two specific cases, SQLite reports an error while accessing
# the underlying database file. We surface these exceptions as
@ -873,41 +725,26 @@ class Transaction:
raise DBAccessError(e.args[0])
else:
raise
else:
self._mutated = True
return cursor.lastrowid
def script(self, statements):
"""Execute a string containing multiple SQL statements."""
# We don't know whether this mutates, but quite likely it does.
self._mutated = True
self.db._connection().executescript(statements)
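# A simplified, standalone sketch of the revision bookkeeping above:
# any mutation flags the transaction, and closing it bumps the
# database revision exactly once. That counter is what lets
# Model.load() skip the query when nothing has been committed since
# the object was last loaded.
class TinyDB:
    revision = 0

class TinyTx:
    _mutated = False

    def __init__(self, db):
        self.db = db

    def mutate(self):
        self._mutated = True

    def close(self):
        self.db.revision += self._mutated  # A bool adds as 0 or 1.
        self._mutated = False

db = TinyDB()
tx = TinyTx(db)
tx.mutate()
tx.mutate()
tx.close()
assert db.revision == 1  # Two mutations, one commit, one bump.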
class Database:
class Database(object):
"""A container for Model objects that wraps an SQLite database as
the backend.
"""
_models = ()
"""The Model subclasses representing tables in this database.
"""
supports_extensions = hasattr(sqlite3.Connection, 'enable_load_extension')
"""Whether or not the current version of SQLite supports extensions"""
revision = 0
"""The current revision of the database. To be increased whenever
data is written in a transaction.
"""
def __init__(self, path, timeout=5.0):
self.path = path
self.timeout = timeout
self._connections = {}
self._tx_stacks = defaultdict(list)
self._extensions = []
# A lock to protect the _connections and _tx_stacks maps, which
# both map thread IDs to private resources.
@ -957,13 +794,6 @@ class Database:
py3_path(self.path), timeout=self.timeout
)
if self.supports_extensions:
conn.enable_load_extension(True)
# Load any extension that are already loaded for other connections.
for path in self._extensions:
conn.load_extension(path)
# Access SELECT results like dictionaries.
conn.row_factory = sqlite3.Row
return conn
@ -992,18 +822,6 @@ class Database:
"""
return Transaction(self)
def load_extension(self, path):
"""Load an SQLite extension into all open connections."""
if not self.supports_extensions:
raise ValueError(
'this sqlite3 installation does not support extensions')
self._extensions.append(path)
# Load the extension into every open connection.
for conn in self._connections.values():
conn.load_extension(path)
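# Hypothetical usage of load_extension above; the database path and the
# extension path are placeholders, and the guard mirrors the ValueError
# check inside the method.
db = Database('/tmp/example.db')
if Database.supports_extensions:
    db.load_extension('/path/to/sqlite_extension.so')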
# Schema setup and migration.
def _make_table(self, table, fields):
@ -1013,7 +831,7 @@ class Database:
# Get current schema.
with self.transaction() as tx:
rows = tx.query('PRAGMA table_info(%s)' % table)
current_fields = {row[1] for row in rows}
current_fields = set([row[1] for row in rows])
field_names = set(fields.keys())
if current_fields.issuperset(field_names):
@ -1024,9 +842,9 @@ class Database:
# No table exists.
columns = []
for name, typ in fields.items():
columns.append(f'{name} {typ.sql}')
setup_sql = 'CREATE TABLE {} ({});\n'.format(table,
', '.join(columns))
columns.append('{0} {1}'.format(name, typ.sql))
setup_sql = 'CREATE TABLE {0} ({1});\n'.format(table,
', '.join(columns))
else:
# Table exists but does not match the field set.
@ -1034,7 +852,7 @@ class Database:
for name, typ in fields.items():
if name in current_fields:
continue
setup_sql += 'ALTER TABLE {} ADD COLUMN {} {};\n'.format(
setup_sql += 'ALTER TABLE {0} ADD COLUMN {1} {2};\n'.format(
table, name, typ.sql
)
@ -1070,31 +888,17 @@ class Database:
where, subvals = query.clause()
order_by = sort.order_clause()
sql = ("SELECT * FROM {} WHERE {} {}").format(
sql = ("SELECT * FROM {0} WHERE {1} {2}").format(
model_cls._table,
where or '1',
f"ORDER BY {order_by}" if order_by else '',
)
# Fetch flexible attributes for items matching the main query.
# Doing the per-item filtering in python is faster than issuing
# one query per item to sqlite.
flex_sql = ("""
SELECT * FROM {} WHERE entity_id IN
(SELECT id FROM {} WHERE {});
""".format(
model_cls._flex_table,
model_cls._table,
where or '1',
)
"ORDER BY {0}".format(order_by) if order_by else '',
)
with self.transaction() as tx:
rows = tx.query(sql, subvals)
flex_rows = tx.query(flex_sql, subvals)
return Results(
model_cls, rows, self, flex_rows,
model_cls, rows, self,
None if where else query, # Slow query component.
sort if sort.is_slow() else None, # Slow sort component.
)
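# To make the batching above concrete: for beets' Library, where the
# main table is 'items' and the flexible-attribute table is
# 'item_attributes', a fast query whose WHERE clause is 'artist = ?'
# expands to roughly:
#
#   SELECT * FROM item_attributes WHERE entity_id IN
#       (SELECT id FROM items WHERE artist = ?);
#
# One extra round-trip fetches the flexible attributes for every
# matching row, replacing the per-object query the old _make_model()
# issued.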


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -14,6 +15,7 @@
"""The Query type hierarchy for DBCore.
"""
from __future__ import division, absolute_import, print_function
import re
from operator import mul
@ -21,6 +23,10 @@ from beets import util
from datetime import datetime, timedelta
import unicodedata
from functools import reduce
import six
if not six.PY2:
buffer = memoryview # sqlite won't accept memoryview in python 2
class ParsingError(ValueError):
@ -38,8 +44,8 @@ class InvalidQueryError(ParsingError):
def __init__(self, query, explanation):
if isinstance(query, list):
query = " ".join(query)
message = f"'{query}': {explanation}"
super().__init__(message)
message = u"'{0}': {1}".format(query, explanation)
super(InvalidQueryError, self).__init__(message)
class InvalidQueryArgumentValueError(ParsingError):
@ -50,13 +56,13 @@ class InvalidQueryArgumentValueError(ParsingError):
"""
def __init__(self, what, expected, detail=None):
message = f"'{what}' is not {expected}"
message = u"'{0}' is not {1}".format(what, expected)
if detail:
message = f"{message}: {detail}"
super().__init__(message)
message = u"{0}: {1}".format(message, detail)
super(InvalidQueryArgumentValueError, self).__init__(message)
class Query:
class Query(object):
"""An abstract class representing a query into the item database.
"""
@ -76,7 +82,7 @@ class Query:
raise NotImplementedError
def __repr__(self):
return f"{self.__class__.__name__}()"
return "{0.__class__.__name__}()".format(self)
def __eq__(self, other):
return type(self) == type(other)
@ -123,7 +129,7 @@ class FieldQuery(Query):
"{0.fast})".format(self))
def __eq__(self, other):
return super().__eq__(other) and \
return super(FieldQuery, self).__eq__(other) and \
self.field == other.field and self.pattern == other.pattern
def __hash__(self):
@ -145,13 +151,17 @@ class NoneQuery(FieldQuery):
"""A query that checks whether a field is null."""
def __init__(self, field, fast=True):
super().__init__(field, None, fast)
super(NoneQuery, self).__init__(field, None, fast)
def col_clause(self):
return self.field + " IS NULL", ()
def match(self, item):
return item.get(self.field) is None
@classmethod
def match(cls, item):
try:
return item[cls.field] is None
except KeyError:
return True
def __repr__(self):
return "{0.__class__.__name__}({0.field!r}, {0.fast})".format(self)
@ -204,14 +214,14 @@ class RegexpQuery(StringFieldQuery):
"""
def __init__(self, field, pattern, fast=True):
super().__init__(field, pattern, fast)
super(RegexpQuery, self).__init__(field, pattern, fast)
pattern = self._normalize(pattern)
try:
self.pattern = re.compile(self.pattern)
except re.error as exc:
# Invalid regular expression.
raise InvalidQueryArgumentValueError(pattern,
"a regular expression",
u"a regular expression",
format(exc))
@staticmethod
@ -232,8 +242,8 @@ class BooleanQuery(MatchQuery):
"""
def __init__(self, field, pattern, fast=True):
super().__init__(field, pattern, fast)
if isinstance(pattern, str):
super(BooleanQuery, self).__init__(field, pattern, fast)
if isinstance(pattern, six.string_types):
self.pattern = util.str2bool(pattern)
self.pattern = int(self.pattern)
@ -246,16 +256,16 @@ class BytesQuery(MatchQuery):
"""
def __init__(self, field, pattern):
super().__init__(field, pattern)
super(BytesQuery, self).__init__(field, pattern)
# Use a buffer/memoryview representation of the pattern for SQLite
# matching. This instructs SQLite to treat the blob as binary
# rather than encoded Unicode.
if isinstance(self.pattern, (str, bytes)):
if isinstance(self.pattern, str):
if isinstance(self.pattern, (six.text_type, bytes)):
if isinstance(self.pattern, six.text_type):
self.pattern = self.pattern.encode('utf-8')
self.buf_pattern = memoryview(self.pattern)
elif isinstance(self.pattern, memoryview):
self.buf_pattern = buffer(self.pattern)
elif isinstance(self.pattern, buffer):
self.buf_pattern = self.pattern
self.pattern = bytes(self.pattern)
@ -287,10 +297,10 @@ class NumericQuery(FieldQuery):
try:
return float(s)
except ValueError:
raise InvalidQueryArgumentValueError(s, "an int or a float")
raise InvalidQueryArgumentValueError(s, u"an int or a float")
def __init__(self, field, pattern, fast=True):
super().__init__(field, pattern, fast)
super(NumericQuery, self).__init__(field, pattern, fast)
parts = pattern.split('..', 1)
if len(parts) == 1:
@ -308,7 +318,7 @@ class NumericQuery(FieldQuery):
if self.field not in item:
return False
value = item[self.field]
if isinstance(value, str):
if isinstance(value, six.string_types):
value = self._convert(value)
if self.point is not None:
@ -325,14 +335,14 @@ class NumericQuery(FieldQuery):
return self.field + '=?', (self.point,)
else:
if self.rangemin is not None and self.rangemax is not None:
return ('{0} >= ? AND {0} <= ?'.format(self.field),
return (u'{0} >= ? AND {0} <= ?'.format(self.field),
(self.rangemin, self.rangemax))
elif self.rangemin is not None:
return f'{self.field} >= ?', (self.rangemin,)
return u'{0} >= ?'.format(self.field), (self.rangemin,)
elif self.rangemax is not None:
return f'{self.field} <= ?', (self.rangemax,)
return u'{0} <= ?'.format(self.field), (self.rangemax,)
else:
return '1', ()
return u'1', ()
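# Examples (hypothetical field) of the pattern shapes NumericQuery
# accepts and the clause each yields from col_clause():
#   '128'      -> 'bitrate=?'                       point match
#   '128..320' -> 'bitrate >= ? AND bitrate <= ?'   closed range
#   '128..'    -> 'bitrate >= ?'                    open above
#   '..320'    -> 'bitrate <= ?'                    open below
q = NumericQuery('bitrate', '128..')
clause, subvals = q.col_clause()
assert clause == 'bitrate >= ?'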
class CollectionQuery(Query):
@ -377,7 +387,7 @@ class CollectionQuery(Query):
return "{0.__class__.__name__}({0.subqueries!r})".format(self)
def __eq__(self, other):
return super().__eq__(other) and \
return super(CollectionQuery, self).__eq__(other) and \
self.subqueries == other.subqueries
def __hash__(self):
@ -401,7 +411,7 @@ class AnyFieldQuery(CollectionQuery):
subqueries = []
for field in self.fields:
subqueries.append(cls(field, pattern, True))
super().__init__(subqueries)
super(AnyFieldQuery, self).__init__(subqueries)
def clause(self):
return self.clause_with_joiner('or')
@ -417,7 +427,7 @@ class AnyFieldQuery(CollectionQuery):
"{0.query_class.__name__})".format(self))
def __eq__(self, other):
return super().__eq__(other) and \
return super(AnyFieldQuery, self).__eq__(other) and \
self.query_class == other.query_class
def __hash__(self):
@ -443,7 +453,7 @@ class AndQuery(MutableCollectionQuery):
return self.clause_with_joiner('and')
def match(self, item):
return all(q.match(item) for q in self.subqueries)
return all([q.match(item) for q in self.subqueries])
class OrQuery(MutableCollectionQuery):
@ -453,7 +463,7 @@ class OrQuery(MutableCollectionQuery):
return self.clause_with_joiner('or')
def match(self, item):
return any(q.match(item) for q in self.subqueries)
return any([q.match(item) for q in self.subqueries])
class NotQuery(Query):
@ -467,7 +477,7 @@ class NotQuery(Query):
def clause(self):
clause, subvals = self.subquery.clause()
if clause:
return f'not ({clause})', subvals
return 'not ({0})'.format(clause), subvals
else:
# If there is no clause, there is nothing to negate. All the logic
# is handled by match() for slow queries.
@ -480,7 +490,7 @@ class NotQuery(Query):
return "{0.__class__.__name__}({0.subquery!r})".format(self)
def __eq__(self, other):
return super().__eq__(other) and \
return super(NotQuery, self).__eq__(other) and \
self.subquery == other.subquery
def __hash__(self):
@ -536,7 +546,7 @@ def _parse_periods(pattern):
return (start, end)
class Period:
class Period(object):
"""A period of time given by a date, time and precision.
Example: 2014-01-01 10:50:30 with precision 'month' represents all
@ -562,7 +572,7 @@ class Period:
or "second").
"""
if precision not in Period.precisions:
raise ValueError(f'Invalid precision {precision}')
raise ValueError(u'Invalid precision {0}'.format(precision))
self.date = date
self.precision = precision
@ -643,10 +653,10 @@ class Period:
elif 'second' == precision:
return date + timedelta(seconds=1)
else:
raise ValueError(f'unhandled precision {precision}')
raise ValueError(u'unhandled precision {0}'.format(precision))
class DateInterval:
class DateInterval(object):
"""A closed-open interval of dates.
A left endpoint of None means since the beginning of time.
@ -655,7 +665,7 @@ class DateInterval:
def __init__(self, start, end):
if start is not None and end is not None and not start < end:
raise ValueError("start date {} is not before end date {}"
raise ValueError(u"start date {0} is not before end date {1}"
.format(start, end))
self.start = start
self.end = end
@ -676,7 +686,7 @@ class DateInterval:
return True
def __str__(self):
return f'[{self.start}, {self.end})'
return '[{0}, {1})'.format(self.start, self.end)
class DateQuery(FieldQuery):
@ -690,7 +700,7 @@ class DateQuery(FieldQuery):
"""
def __init__(self, field, pattern, fast=True):
super().__init__(field, pattern, fast)
super(DateQuery, self).__init__(field, pattern, fast)
start, end = _parse_periods(pattern)
self.interval = DateInterval.from_periods(start, end)
@ -749,12 +759,12 @@ class DurationQuery(NumericQuery):
except ValueError:
raise InvalidQueryArgumentValueError(
s,
"a M:SS string or a float")
u"a M:SS string or a float")
# Sorting.
class Sort:
class Sort(object):
"""An abstract class representing a sort operation for a query into
the item database.
"""
@ -841,13 +851,13 @@ class MultipleSort(Sort):
return items
def __repr__(self):
return f'MultipleSort({self.sorts!r})'
return 'MultipleSort({!r})'.format(self.sorts)
def __hash__(self):
return hash(tuple(self.sorts))
def __eq__(self, other):
return super().__eq__(other) and \
return super(MultipleSort, self).__eq__(other) and \
self.sorts == other.sorts
@ -868,14 +878,14 @@ class FieldSort(Sort):
def key(item):
field_val = item.get(self.field, '')
if self.case_insensitive and isinstance(field_val, str):
if self.case_insensitive and isinstance(field_val, six.text_type):
field_val = field_val.lower()
return field_val
return sorted(objs, key=key, reverse=not self.ascending)
def __repr__(self):
return '<{}: {}{}>'.format(
return '<{0}: {1}{2}>'.format(
type(self).__name__,
self.field,
'+' if self.ascending else '-',
@ -885,7 +895,7 @@ class FieldSort(Sort):
return hash((self.field, self.ascending))
def __eq__(self, other):
return super().__eq__(other) and \
return super(FieldSort, self).__eq__(other) and \
self.field == other.field and \
self.ascending == other.ascending
@ -903,7 +913,7 @@ class FixedFieldSort(FieldSort):
'ELSE {0} END)'.format(self.field)
else:
field = self.field
return f"{field} {order}"
return "{0} {1}".format(field, order)
class SlowFieldSort(FieldSort):


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -14,10 +15,12 @@
"""Parsing of strings into DBCore queries.
"""
from __future__ import division, absolute_import, print_function
import re
import itertools
from . import query
import beets
PARSE_QUERY_PART_REGEX = re.compile(
# Non-capturing optional segment for the keyword.
@ -86,7 +89,7 @@ def parse_query_part(part, query_classes={}, prefixes={},
assert match # Regex should always match
negate = bool(match.group(1))
key = match.group(2)
term = match.group(3).replace('\\:', ':')
term = match.group(3).replace('\:', ':')
# Check whether there's a prefix in the query and use the
# corresponding query type.
@ -116,13 +119,12 @@ def construct_query_part(model_cls, prefixes, query_part):
if not query_part:
return query.TrueQuery()
# Use `model_cls` to build up a map from field (or query) names to
# `Query` classes.
# Use `model_cls` to build up a map from field names to `Query`
# classes.
query_classes = {}
for k, t in itertools.chain(model_cls._fields.items(),
model_cls._types.items()):
query_classes[k] = t.query
query_classes.update(model_cls._queries) # Non-field queries.
# Parse the string.
key, pattern, query_class, negate = \
@ -135,27 +137,26 @@ def construct_query_part(model_cls, prefixes, query_part):
# The query type matches a specific field, but none was
# specified. So we use a version of the query that matches
# any field.
out_query = query.AnyFieldQuery(pattern, model_cls._search_fields,
query_class)
q = query.AnyFieldQuery(pattern, model_cls._search_fields,
query_class)
if negate:
return query.NotQuery(q)
else:
return q
else:
# Non-field query type.
out_query = query_class(pattern)
if negate:
return query.NotQuery(query_class(pattern))
else:
return query_class(pattern)
# Field queries get constructed according to the name of the field
# they are querying.
elif issubclass(query_class, query.FieldQuery):
key = key.lower()
out_query = query_class(key.lower(), pattern, key in model_cls._fields)
# Non-field (named) query.
else:
out_query = query_class(pattern)
# Apply negation.
# Otherwise, this must be a `FieldQuery`. Use the field name to
# construct the query object.
key = key.lower()
q = query_class(key.lower(), pattern, key in model_cls._fields)
if negate:
return query.NotQuery(out_query)
else:
return out_query
return query.NotQuery(q)
return q
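# Examples of how individual parts decompose with parse_query_part
# above, assuming the default query classes and no prefixes (the field
# names are hypothetical):
key, term, query_class, negate = parse_query_part('artist:beatles')
assert (key, term, negate) == ('artist', 'beatles', False)

key, term, query_class, negate = parse_query_part('-year:1999')
assert (key, term, negate) == ('year', '1999', True)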
def query_from_strings(query_cls, model_cls, prefixes, query_parts):
@ -171,13 +172,11 @@ def query_from_strings(query_cls, model_cls, prefixes, query_parts):
return query_cls(subqueries)
def construct_sort_part(model_cls, part, case_insensitive=True):
def construct_sort_part(model_cls, part):
"""Create a `Sort` from a single string criterion.
`model_cls` is the `Model` being queried. `part` is a single string
ending in ``+`` or ``-`` indicating the sort. `case_insensitive`
indicates whether or not the sort should be performed in a case
sensitive manner.
ending in ``+`` or ``-`` indicating the sort.
"""
assert part, "part must be a field name and + or -"
field = part[:-1]
@ -186,6 +185,7 @@ def construct_sort_part(model_cls, part, case_insensitive=True):
assert direction in ('+', '-'), "part must end with + or -"
is_ascending = direction == '+'
case_insensitive = beets.config['sort_case_insensitive'].get(bool)
if field in model_cls._sorts:
sort = model_cls._sorts[field](model_cls, is_ascending,
case_insensitive)
@ -197,23 +197,21 @@ def construct_sort_part(model_cls, part, case_insensitive=True):
return sort
def sort_from_strings(model_cls, sort_parts, case_insensitive=True):
def sort_from_strings(model_cls, sort_parts):
"""Create a `Sort` from a list of sort criteria (strings).
"""
if not sort_parts:
sort = query.NullSort()
elif len(sort_parts) == 1:
sort = construct_sort_part(model_cls, sort_parts[0], case_insensitive)
sort = construct_sort_part(model_cls, sort_parts[0])
else:
sort = query.MultipleSort()
for part in sort_parts:
sort.add_sort(construct_sort_part(model_cls, part,
case_insensitive))
sort.add_sort(construct_sort_part(model_cls, part))
return sort
def parse_sorted_query(model_cls, parts, prefixes={},
case_insensitive=True):
def parse_sorted_query(model_cls, parts, prefixes={}):
"""Given a list of strings, create the `Query` and `Sort` that they
represent.
"""
@ -224,8 +222,8 @@ def parse_sorted_query(model_cls, parts, prefixes={},
# Split up query in to comma-separated subqueries, each representing
# an AndQuery, which need to be joined together in one OrQuery
subquery_parts = []
for part in parts + [',']:
if part.endswith(','):
for part in parts + [u',']:
if part.endswith(u','):
# Ensure we can catch "foo, bar" as well as "foo , bar"
last_subquery_part = part[:-1]
if last_subquery_part:
@ -239,8 +237,8 @@ def parse_sorted_query(model_cls, parts, prefixes={},
else:
# Sort parts (1) end in + or -, (2) don't have a field, and
# (3) consist of more than just the + or -.
if part.endswith(('+', '-')) \
and ':' not in part \
if part.endswith((u'+', u'-')) \
and u':' not in part \
and len(part) > 1:
sort_parts.append(part)
else:
@ -248,5 +246,5 @@ def parse_sorted_query(model_cls, parts, prefixes={},
# Avoid needlessly wrapping single statements in an OR
q = query.OrQuery(query_parts) if len(query_parts) > 1 else query_parts[0]
s = sort_from_strings(model_cls, sort_parts, case_insensitive)
s = sort_from_strings(model_cls, sort_parts)
return q, s
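# The sort-detection rule above, extracted for illustration: a part is
# treated as a sort criterion iff it ends in '+' or '-', contains no
# ':', and is longer than one character.
def is_sort_part(part):
    return part.endswith(('+', '-')) and ':' not in part and len(part) > 1

assert is_sort_part('year+') and is_sort_part('title-')
assert not is_sort_part('+')           # A bare '+' stays a query term.
assert not is_sort_part('year:1999-')  # Contains ':', so a query term.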


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -14,20 +15,25 @@
"""Representation of type information for DBCore model fields.
"""
from __future__ import division, absolute_import, print_function
from . import query
from beets.util import str2bool
import six
if not six.PY2:
buffer = memoryview # sqlite won't accept memoryview in python 2
# Abstract base.
class Type:
class Type(object):
"""An object encapsulating the type of a model field. Includes
information about how to store, query, format, and parse a given
field.
"""
sql = 'TEXT'
sql = u'TEXT'
"""The SQLite column type for the value.
"""
@ -35,7 +41,7 @@ class Type:
"""The `Query` subclass to be used when querying the field.
"""
model_type = str
model_type = six.text_type
"""The Python type that is used to represent the value in the model.
The model is guaranteed to return a value of this type if the field
@ -57,11 +63,11 @@ class Type:
value = self.null
# `self.null` might be `None`
if value is None:
value = ''
value = u''
if isinstance(value, bytes):
value = value.decode('utf-8', 'ignore')
return str(value)
return six.text_type(value)
def parse(self, string):
"""Parse a (possibly human-written) string and return the
@ -91,16 +97,16 @@ class Type:
For fixed fields the type of `value` is determined by the column
type affinity given in the `sql` property and the SQL to Python
mapping of the database adapter. For more information see:
https://www.sqlite.org/datatype3.html
http://www.sqlite.org/datatype3.html
https://docs.python.org/2/library/sqlite3.html#sqlite-and-python-types
Flexible fields have the type affinity `TEXT`. This means the
`sql_value` is either a `memoryview` or a `unicode` object
`sql_value` is either a `buffer`/`memoryview` or a `unicode` object
and the method must handle these in addition.
"""
if isinstance(sql_value, memoryview):
if isinstance(sql_value, buffer):
sql_value = bytes(sql_value).decode('utf-8', 'ignore')
if isinstance(sql_value, str):
if isinstance(sql_value, six.text_type):
return self.parse(sql_value)
else:
return self.normalize(sql_value)
@ -121,18 +127,10 @@ class Default(Type):
class Integer(Type):
"""A basic integer type.
"""
sql = 'INTEGER'
sql = u'INTEGER'
query = query.NumericQuery
model_type = int
def normalize(self, value):
try:
return self.model_type(round(float(value)))
except ValueError:
return self.null
except TypeError:
return self.null
class PaddedInt(Integer):
"""An integer field that is formatted with a given number of digits,
@ -142,25 +140,19 @@ class PaddedInt(Integer):
self.digits = digits
def format(self, value):
return '{0:0{1}d}'.format(value or 0, self.digits)
class NullPaddedInt(PaddedInt):
"""Same as `PaddedInt`, but does not normalize `None` to `0.0`.
"""
null = None
return u'{0:0{1}d}'.format(value or 0, self.digits)
class ScaledInt(Integer):
"""An integer whose formatting operation scales the number by a
constant and adds a suffix. Good for units with large magnitudes.
"""
def __init__(self, unit, suffix=''):
def __init__(self, unit, suffix=u''):
self.unit = unit
self.suffix = suffix
def format(self, value):
return '{}{}'.format((value or 0) // self.unit, self.suffix)
return u'{0}{1}'.format((value or 0) // self.unit, self.suffix)
class Id(Integer):
@ -171,22 +163,18 @@ class Id(Integer):
def __init__(self, primary=True):
if primary:
self.sql = 'INTEGER PRIMARY KEY'
self.sql = u'INTEGER PRIMARY KEY'
class Float(Type):
"""A basic floating-point type. The `digits` parameter specifies how
many decimal places to use in the human-readable representation.
"""A basic floating-point type.
"""
sql = 'REAL'
sql = u'REAL'
query = query.NumericQuery
model_type = float
def __init__(self, digits=1):
self.digits = digits
def format(self, value):
return '{0:.{1}f}'.format(value or 0, self.digits)
return u'{0:.1f}'.format(value or 0.0)
class NullFloat(Float):
@ -198,25 +186,19 @@ class NullFloat(Float):
class String(Type):
"""A Unicode string type.
"""
sql = 'TEXT'
sql = u'TEXT'
query = query.SubstringQuery
def normalize(self, value):
if value is None:
return self.null
else:
return self.model_type(value)
class Boolean(Type):
"""A boolean type.
"""
sql = 'INTEGER'
sql = u'INTEGER'
query = query.BooleanQuery
model_type = bool
def format(self, value):
return str(bool(value))
return six.text_type(bool(value))
def parse(self, string):
return str2bool(string)


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -12,6 +13,7 @@
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
from __future__ import division, absolute_import, print_function
"""Provides the basic, interface-agnostic workflow for importing and
autotagging music files.
@ -38,7 +40,7 @@ from beets import config
from beets.util import pipeline, sorted_walk, ancestry, MoveOperation
from beets.util import syspath, normpath, displayable_path
from enum import Enum
import mediafile
from beets import mediafile
action = Enum('action',
['SKIP', 'ASIS', 'TRACKS', 'APPLY', 'ALBUMS', 'RETAG'])
@ -73,7 +75,7 @@ def _open_state():
# unpickling, including ImportError. We use a catch-all
# exception to avoid enumerating them all (the docs don't even have a
# full list!).
log.debug('state file could not be read: {0}', exc)
log.debug(u'state file could not be read: {0}', exc)
return {}
@ -82,8 +84,8 @@ def _save_state(state):
try:
with open(config['statefile'].as_filename(), 'wb') as f:
pickle.dump(state, f)
except OSError as exc:
log.error('state file could not be written: {0}', exc)
except IOError as exc:
log.error(u'state file could not be written: {0}', exc)
# Utilities for reading and writing the beets progress file, which
@ -172,11 +174,10 @@ def history_get():
# Abstract session class.
class ImportSession:
class ImportSession(object):
"""Controls an import action. Subclasses should implement methods to
communicate with the user or otherwise make decisions.
"""
def __init__(self, lib, loghandler, paths, query):
"""Create a session. `lib` is a Library object. `loghandler` is a
logging.Handler. Either `paths` or `query` is non-null and indicates
@ -186,7 +187,7 @@ class ImportSession:
self.logger = self._setup_logging(loghandler)
self.paths = paths
self.query = query
self._is_resuming = {}
self._is_resuming = dict()
self._merged_items = set()
self._merged_dirs = set()
@ -221,31 +222,19 @@ class ImportSession:
iconfig['resume'] = False
iconfig['incremental'] = False
if iconfig['reflink']:
iconfig['reflink'] = iconfig['reflink'] \
.as_choice(['auto', True, False])
# Copy, move, reflink, link, and hardlink are mutually exclusive.
# Copy, move, link, and hardlink are mutually exclusive.
if iconfig['move']:
iconfig['copy'] = False
iconfig['link'] = False
iconfig['hardlink'] = False
iconfig['reflink'] = False
elif iconfig['link']:
iconfig['copy'] = False
iconfig['move'] = False
iconfig['hardlink'] = False
iconfig['reflink'] = False
elif iconfig['hardlink']:
iconfig['copy'] = False
iconfig['move'] = False
iconfig['link'] = False
iconfig['reflink'] = False
elif iconfig['reflink']:
iconfig['copy'] = False
iconfig['move'] = False
iconfig['link'] = False
iconfig['hardlink'] = False
# Only delete when copying.
if not iconfig['copy']:
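
The elif ladder above (which master extends with reflink) amounts to first-match-wins precedence among the file operations, with copy as the fallback. A hypothetical condensation, not beets API, just the same precedence in one function:

def resolve_operation(iconfig):
    # The first enabled operation wins and implicitly disables the rest.
    for op in ('move', 'link', 'hardlink', 'reflink'):
        if iconfig.get(op):
            return op
    return 'copy' if iconfig.get('copy') else None

print(resolve_operation({'copy': True, 'move': True}))  # -> 'move'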
@ -257,7 +246,7 @@ class ImportSession:
"""Log a message about a given album to the importer log. The status
should reflect the reason the album couldn't be tagged.
"""
self.logger.info('{0} {1}', status, displayable_path(paths))
self.logger.info(u'{0} {1}', status, displayable_path(paths))
def log_choice(self, task, duplicate=False):
"""Logs the task's current choice if it should be logged. If
@ -268,17 +257,17 @@ class ImportSession:
if duplicate:
# Duplicate: log all three choices (skip, keep both, and trump).
if task.should_remove_duplicates:
self.tag_log('duplicate-replace', paths)
self.tag_log(u'duplicate-replace', paths)
elif task.choice_flag in (action.ASIS, action.APPLY):
self.tag_log('duplicate-keep', paths)
self.tag_log(u'duplicate-keep', paths)
elif task.choice_flag is (action.SKIP):
self.tag_log('duplicate-skip', paths)
self.tag_log(u'duplicate-skip', paths)
else:
# Non-duplicate: log "skip" and "asis" choices.
if task.choice_flag is action.ASIS:
self.tag_log('asis', paths)
self.tag_log(u'asis', paths)
elif task.choice_flag is action.SKIP:
self.tag_log('skip', paths)
self.tag_log(u'skip', paths)
def should_resume(self, path):
raise NotImplementedError
@ -295,7 +284,7 @@ class ImportSession:
def run(self):
"""Run the import task.
"""
self.logger.info('import started {0}', time.asctime())
self.logger.info(u'import started {0}', time.asctime())
self.set_config(config['import'])
# Set up the pipeline.
@ -379,8 +368,8 @@ class ImportSession:
"""Mark paths and directories as merged for future reimport tasks.
"""
self._merged_items.update(paths)
dirs = {os.path.dirname(path) if os.path.isfile(path) else path
for path in paths}
dirs = set([os.path.dirname(path) if os.path.isfile(path) else path
for path in paths])
self._merged_dirs.update(dirs)
def is_resuming(self, toppath):
@ -400,7 +389,7 @@ class ImportSession:
# Either accept immediately or prompt for input to decide.
if self.want_resume is True or \
self.should_resume(toppath):
log.warning('Resuming interrupted import of {0}',
log.warning(u'Resuming interrupted import of {0}',
util.displayable_path(toppath))
self._is_resuming[toppath] = True
else:
@ -410,12 +399,11 @@ class ImportSession:
# The importer task class.
class BaseImportTask:
class BaseImportTask(object):
"""An abstract base class for importer tasks.
Tasks flow through the importer pipeline. Each stage can update
them. """
def __init__(self, toppath, paths, items):
"""Create a task. The primary fields that define a task are:
@ -469,9 +457,8 @@ class ImportTask(BaseImportTask):
* `finalize()` Update the import progress and cleanup the file
system.
"""
def __init__(self, toppath, paths, items):
super().__init__(toppath, paths, items)
super(ImportTask, self).__init__(toppath, paths, items)
self.choice_flag = None
self.cur_album = None
self.cur_artist = None
@ -563,34 +550,28 @@ class ImportTask(BaseImportTask):
def remove_duplicates(self, lib):
duplicate_items = self.duplicate_items(lib)
log.debug('removing {0} old duplicated items', len(duplicate_items))
log.debug(u'removing {0} old duplicated items', len(duplicate_items))
for item in duplicate_items:
item.remove()
if lib.directory in util.ancestry(item.path):
log.debug('deleting duplicate {0}',
log.debug(u'deleting duplicate {0}',
util.displayable_path(item.path))
util.remove(item.path)
util.prune_dirs(os.path.dirname(item.path),
lib.directory)
def set_fields(self, lib):
def set_fields(self):
"""Sets the fields given at CLI or configuration to the specified
values, for both the album and all its items.
values.
"""
items = self.imported_items()
for field, view in config['import']['set_fields'].items():
value = view.get()
log.debug('Set field {1}={2} for {0}',
log.debug(u'Set field {1}={2} for {0}',
displayable_path(self.paths),
field,
value)
self.album[field] = value
for item in items:
item[field] = value
with lib.transaction():
for item in items:
item.store()
self.album.store()
self.album.store()
def finalize(self, session):
"""Save progress, clean up files, and emit plugin event.
@ -674,7 +655,7 @@ class ImportTask(BaseImportTask):
return []
duplicates = []
task_paths = {i.path for i in self.items if i}
task_paths = set(i.path for i in self.items if i)
duplicate_query = dbcore.AndQuery((
dbcore.MatchQuery('albumartist', artist),
dbcore.MatchQuery('album', album),
@ -684,7 +665,7 @@ class ImportTask(BaseImportTask):
# Check whether the album paths are all present in the task
# i.e. album is being completely re-imported by the task,
# in which case it is not a duplicate (will be replaced).
album_paths = {i.path for i in album.items()}
album_paths = set(i.path for i in album.items())
if not (album_paths <= task_paths):
duplicates.append(album)
return duplicates
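
The subset test above in isolation: when every path the existing album knows about is also part of the task, the album is being fully re-imported and is not flagged as a duplicate. Toy paths for illustration:

album_paths = {b'/m/a/1.mp3', b'/m/a/2.mp3'}
task_paths = {b'/m/a/1.mp3', b'/m/a/2.mp3', b'/m/a/3.mp3'}
print(album_paths <= task_paths)  # -> True: full re-import, not a duplicate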
@ -726,7 +707,7 @@ class ImportTask(BaseImportTask):
item.update(changes)
def manipulate_files(self, operation=None, write=False, session=None):
""" Copy, move, link, hardlink or reflink (depending on `operation`) the files
""" Copy, move, link or hardlink (depending on `operation`) the files
as well as write metadata.
`operation` should be an instance of `util.MoveOperation`.
@ -773,8 +754,6 @@ class ImportTask(BaseImportTask):
self.record_replaced(lib)
self.remove_replaced(lib)
self.album = lib.add_album(self.imported_items())
if 'data_source' in self.imported_items()[0]:
self.album.data_source = self.imported_items()[0].data_source
self.reimport_metadata(lib)
def record_replaced(self, lib):
@ -793,7 +772,7 @@ class ImportTask(BaseImportTask):
if (not dup_item.album_id or
dup_item.album_id in replaced_album_ids):
continue
replaced_album = dup_item._cached_album
replaced_album = dup_item.get_album()
if replaced_album:
replaced_album_ids.add(dup_item.album_id)
self.replaced_albums[replaced_album.path] = replaced_album
@ -810,8 +789,8 @@ class ImportTask(BaseImportTask):
self.album.artpath = replaced_album.artpath
self.album.store()
log.debug(
'Reimported album: added {0}, flexible '
'attributes {1} from album {2} for {3}',
u'Reimported album: added {0}, flexible '
u'attributes {1} from album {2} for {3}',
self.album.added,
replaced_album._values_flex.keys(),
replaced_album.id,
@ -824,16 +803,16 @@ class ImportTask(BaseImportTask):
if dup_item.added and dup_item.added != item.added:
item.added = dup_item.added
log.debug(
'Reimported item added {0} '
'from item {1} for {2}',
u'Reimported item added {0} '
u'from item {1} for {2}',
item.added,
dup_item.id,
displayable_path(item.path)
)
item.update(dup_item._values_flex)
log.debug(
'Reimported item flexible attributes {0} '
'from item {1} for {2}',
u'Reimported item flexible attributes {0} '
u'from item {1} for {2}',
dup_item._values_flex.keys(),
dup_item.id,
displayable_path(item.path)
@ -846,10 +825,10 @@ class ImportTask(BaseImportTask):
"""
for item in self.imported_items():
for dup_item in self.replaced_items[item]:
log.debug('Replacing item {0}: {1}',
log.debug(u'Replacing item {0}: {1}',
dup_item.id, displayable_path(item.path))
dup_item.remove()
log.debug('{0} of {1} items replaced',
log.debug(u'{0} of {1} items replaced',
sum(bool(l) for l in self.replaced_items.values()),
len(self.imported_items()))
@ -887,7 +866,7 @@ class SingletonImportTask(ImportTask):
"""
def __init__(self, toppath, item):
super().__init__(toppath, [item.path], [item])
super(SingletonImportTask, self).__init__(toppath, [item.path], [item])
self.item = item
self.is_album = False
self.paths = [item.path]
@ -953,13 +932,13 @@ class SingletonImportTask(ImportTask):
def reload(self):
self.item.load()
def set_fields(self, lib):
def set_fields(self):
"""Sets the fields given at CLI or configuration to the specified
values, for the singleton item.
values.
"""
for field, view in config['import']['set_fields'].items():
value = view.get()
log.debug('Set field {1}={2} for {0}',
log.debug(u'Set field {1}={2} for {0}',
displayable_path(self.paths),
field,
value)
@ -980,7 +959,7 @@ class SentinelImportTask(ImportTask):
"""
def __init__(self, toppath, paths):
super().__init__(toppath, paths, ())
super(SentinelImportTask, self).__init__(toppath, paths, ())
# TODO Remove the remaining attributes eventually
self.should_remove_duplicates = False
self.is_album = True
@ -1024,7 +1003,7 @@ class ArchiveImportTask(SentinelImportTask):
"""
def __init__(self, toppath):
super().__init__(toppath, ())
super(ArchiveImportTask, self).__init__(toppath, ())
self.extracted = False
@classmethod
@ -1053,20 +1032,14 @@ class ArchiveImportTask(SentinelImportTask):
cls._handlers = []
from zipfile import is_zipfile, ZipFile
cls._handlers.append((is_zipfile, ZipFile))
import tarfile
cls._handlers.append((tarfile.is_tarfile, tarfile.open))
from tarfile import is_tarfile, TarFile
cls._handlers.append((is_tarfile, TarFile))
try:
from rarfile import is_rarfile, RarFile
except ImportError:
pass
else:
cls._handlers.append((is_rarfile, RarFile))
try:
from py7zr import is_7zfile, SevenZipFile
except ImportError:
pass
else:
cls._handlers.append((is_7zfile, SevenZipFile))
return cls._handlers
@ -1074,7 +1047,7 @@ class ArchiveImportTask(SentinelImportTask):
"""Removes the temporary directory the archive was extracted to.
"""
if self.extracted:
log.debug('Removing extracted directory: {0}',
log.debug(u'Removing extracted directory: {0}',
displayable_path(self.toppath))
shutil.rmtree(self.toppath)
@ -1086,9 +1059,9 @@ class ArchiveImportTask(SentinelImportTask):
if path_test(util.py3_path(self.toppath)):
break
extract_to = mkdtemp()
archive = handler_class(util.py3_path(self.toppath), mode='r')
try:
extract_to = mkdtemp()
archive = handler_class(util.py3_path(self.toppath), mode='r')
archive.extractall(extract_to)
finally:
archive.close()
@ -1096,11 +1069,10 @@ class ArchiveImportTask(SentinelImportTask):
self.toppath = extract_to
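
The handler table above is a list of (predicate, opener) pairs probed in order, and the stdlib pieces alone are enough to sketch it. One design note visible in the hunk: `tarfile.open` (master) autodetects gz/bz2/xz compression in mode 'r', while the bare `TarFile` constructor (12.0.8) reads only uncompressed archives. The helper name below is hypothetical:

import tarfile
import zipfile

handlers = [
    (zipfile.is_zipfile, zipfile.ZipFile),
    (tarfile.is_tarfile, tarfile.open),
]

def open_archive(path):
    # Probe each predicate in order and open with the matching handler.
    for probe, opener in handlers:
        if probe(path):
            return opener(path, mode='r')
    raise ValueError('not a supported archive: %s' % path)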
class ImportTaskFactory:
class ImportTaskFactory(object):
"""Generate album and singleton import tasks for all media files
indicated by a path.
"""
def __init__(self, toppath, session):
"""Create a new task factory.
@ -1138,12 +1110,14 @@ class ImportTaskFactory:
if self.session.config['singletons']:
for path in paths:
tasks = self._create(self.singleton(path))
yield from tasks
for task in tasks:
yield task
yield self.sentinel(dirs)
else:
tasks = self._create(self.album(paths, dirs))
yield from tasks
for task in tasks:
yield task
# Produce the final sentinel for this toppath to indicate that
# it is finished. This is usually just a SentinelImportTask, but
@ -1191,7 +1165,7 @@ class ImportTaskFactory:
"""Return a `SingletonImportTask` for the music file.
"""
if self.session.already_imported(self.toppath, [path]):
log.debug('Skipping previously-imported path: {0}',
log.debug(u'Skipping previously-imported path: {0}',
displayable_path(path))
self.skipped += 1
return None
@ -1212,10 +1186,10 @@ class ImportTaskFactory:
return None
if dirs is None:
dirs = list({os.path.dirname(p) for p in paths})
dirs = list(set(os.path.dirname(p) for p in paths))
if self.session.already_imported(self.toppath, dirs):
log.debug('Skipping previously-imported path: {0}',
log.debug(u'Skipping previously-imported path: {0}',
displayable_path(dirs))
self.skipped += 1
return None
@ -1245,22 +1219,22 @@ class ImportTaskFactory:
if not (self.session.config['move'] or
self.session.config['copy']):
log.warning("Archive importing requires either "
"'copy' or 'move' to be enabled.")
log.warning(u"Archive importing requires either "
u"'copy' or 'move' to be enabled.")
return
log.debug('Extracting archive: {0}',
log.debug(u'Extracting archive: {0}',
displayable_path(self.toppath))
archive_task = ArchiveImportTask(self.toppath)
try:
archive_task.extract()
except Exception as exc:
log.error('extraction failed: {0}', exc)
log.error(u'extraction failed: {0}', exc)
return
# Now read albums from the extracted directory.
self.toppath = archive_task.toppath
log.debug('Archive extracted to: {0}', self.toppath)
log.debug(u'Archive extracted to: {0}', self.toppath)
return archive_task
def read_item(self, path):
@ -1276,9 +1250,9 @@ class ImportTaskFactory:
# Silently ignore non-music files.
pass
elif isinstance(exc.reason, mediafile.UnreadableFileError):
log.warning('unreadable file: {0}', displayable_path(path))
log.warning(u'unreadable file: {0}', displayable_path(path))
else:
log.error('error reading {0}: {1}',
log.error(u'error reading {0}: {1}',
displayable_path(path), exc)
@ -1317,16 +1291,17 @@ def read_tasks(session):
# Generate tasks.
task_factory = ImportTaskFactory(toppath, session)
yield from task_factory.tasks()
for t in task_factory.tasks():
yield t
skipped += task_factory.skipped
if not task_factory.imported:
log.warning('No files imported from {0}',
log.warning(u'No files imported from {0}',
displayable_path(toppath))
# Show skipped directories (due to incremental/resume).
if skipped:
log.info('Skipped {0} paths.', skipped)
log.info(u'Skipped {0} paths.', skipped)
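
The `yield from` -> explicit-loop rewrites above (and throughout this diff) are the Python-3-only to Python-2-compatible downgrade. For plain iteration the two spellings are equivalent; `yield from` additionally forwards send()/throw(), which these pipeline stages do not rely on:

def tasks():
    yield 1
    yield 2

def py3_style():
    yield from tasks()

def py2_style():
    for t in tasks():
        yield t

print(list(py3_style()) == list(py2_style()))  # -> True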
def query_tasks(session):
@ -1344,7 +1319,7 @@ def query_tasks(session):
else:
# Search for albums.
for album in session.lib.albums(session.query):
log.debug('yielding album {0}: {1} - {2}',
log.debug(u'yielding album {0}: {1} - {2}',
album.id, album.albumartist, album.album)
items = list(album.items())
_freshen_items(items)
@ -1367,7 +1342,7 @@ def lookup_candidates(session, task):
return
plugins.send('import_task_start', session=session, task=task)
log.debug('Looking up: {0}', displayable_path(task.paths))
log.debug(u'Looking up: {0}', displayable_path(task.paths))
# Restrict the initial lookup to IDs specified by the user via the -m
# option. Currently all the IDs are passed onto the tasks directly.
@ -1406,7 +1381,8 @@ def user_query(session, task):
def emitter(task):
for item in task.items:
task = SingletonImportTask(task.toppath, item)
yield from task.handle_created(session)
for new_task in task.handle_created(session):
yield new_task
yield SentinelImportTask(task.toppath, task.paths)
return _extend_pipeline(emitter(task),
@ -1452,30 +1428,30 @@ def resolve_duplicates(session, task):
if task.choice_flag in (action.ASIS, action.APPLY, action.RETAG):
found_duplicates = task.find_duplicates(session.lib)
if found_duplicates:
log.debug('found duplicates: {}'.format(
log.debug(u'found duplicates: {}'.format(
[o.id for o in found_duplicates]
))
# Get the default action to follow from config.
duplicate_action = config['import']['duplicate_action'].as_choice({
'skip': 's',
'keep': 'k',
'remove': 'r',
'merge': 'm',
'ask': 'a',
u'skip': u's',
u'keep': u'k',
u'remove': u'r',
u'merge': u'm',
u'ask': u'a',
})
log.debug('default action for duplicates: {0}', duplicate_action)
log.debug(u'default action for duplicates: {0}', duplicate_action)
if duplicate_action == 's':
if duplicate_action == u's':
# Skip new.
task.set_choice(action.SKIP)
elif duplicate_action == 'k':
elif duplicate_action == u'k':
# Keep both. Do nothing; leave the choice intact.
pass
elif duplicate_action == 'r':
elif duplicate_action == u'r':
# Remove old.
task.should_remove_duplicates = True
elif duplicate_action == 'm':
elif duplicate_action == u'm':
# Merge duplicates together
task.should_merge_duplicates = True
else:
@ -1495,7 +1471,7 @@ def import_asis(session, task):
if task.skip:
return
log.info('{}', displayable_path(task.paths))
log.info(u'{}', displayable_path(task.paths))
task.set_choice(action.ASIS)
apply_choice(session, task)
@ -1520,7 +1496,7 @@ def apply_choice(session, task):
# because then the ``ImportTask`` won't have an `album` for which
# it can set the fields.
if config['import']['set_fields']:
task.set_fields(session.lib)
task.set_fields()
@pipeline.mutator_stage
@ -1558,8 +1534,6 @@ def manipulate_files(session, task):
operation = MoveOperation.LINK
elif session.config['hardlink']:
operation = MoveOperation.HARDLINK
elif session.config['reflink']:
operation = MoveOperation.REFLINK
else:
operation = None
@ -1578,11 +1552,11 @@ def log_files(session, task):
"""A coroutine (pipeline stage) to log each file to be imported.
"""
if isinstance(task, SingletonImportTask):
log.info('Singleton: {0}', displayable_path(task.item['path']))
log.info(u'Singleton: {0}', displayable_path(task.item['path']))
elif task.items:
log.info('Album: {0}', displayable_path(task.paths[0]))
log.info(u'Album: {0}', displayable_path(task.paths[0]))
for item in task.items:
log.info(' {0}', displayable_path(item['path']))
log.info(u' {0}', displayable_path(item['path']))
def group_albums(session):

File diff suppressed because it is too large

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -20,11 +21,13 @@ that when getLogger(name) instantiates a logger that logger uses
{}-style formatting.
"""
from __future__ import division, absolute_import, print_function
from copy import copy
from logging import * # noqa
import subprocess
import threading
import six
def logsafe(val):
@ -40,7 +43,7 @@ def logsafe(val):
example.
"""
# Already Unicode.
if isinstance(val, str):
if isinstance(val, six.text_type):
return val
# Bytestring: needs decoding.
@ -54,7 +57,7 @@ def logsafe(val):
# A "problem" object: needs a workaround.
elif isinstance(val, subprocess.CalledProcessError):
try:
return str(val)
return six.text_type(val)
except UnicodeDecodeError:
# An object with a broken __unicode__ formatter. Use __str__
# instead.
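
A condensed mirror of the rules above, as a sketch only: text passes through, bytes are decoded defensively, and other objects are left for str() at format time. The real function also special-cases `subprocess.CalledProcessError` as shown, and its exact bytes error handler is not visible in this hunk:

def logsafe(val):
    if isinstance(val, str):
        return val                            # already Unicode
    if isinstance(val, bytes):
        return val.decode('utf-8', 'replace')  # bytestring: decode
    return val                                 # formatted later via str()

print(logsafe(b'caf\xc3\xa9'))  # -> 'café'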
@ -71,7 +74,7 @@ class StrFormatLogger(Logger):
instead of %-style formatting.
"""
class _LogMessage:
class _LogMessage(object):
def __init__(self, msg, args, kwargs):
self.msg = msg
self.args = args
@ -79,23 +82,22 @@ class StrFormatLogger(Logger):
def __str__(self):
args = [logsafe(a) for a in self.args]
kwargs = {k: logsafe(v) for (k, v) in self.kwargs.items()}
kwargs = dict((k, logsafe(v)) for (k, v) in self.kwargs.items())
return self.msg.format(*args, **kwargs)
def _log(self, level, msg, args, exc_info=None, extra=None, **kwargs):
"""Log msg.format(*args, **kwargs)"""
m = self._LogMessage(msg, args, kwargs)
return super()._log(level, m, (), exc_info, extra)
return super(StrFormatLogger, self)._log(level, m, (), exc_info, extra)
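
What `_LogMessage` buys: formatting is deferred until a handler actually renders the record, so callers pass {}-style templates plus raw arguments and pay nothing for suppressed levels. A self-contained imitation (the class name here is illustrative):

class LazyMessage:
    def __init__(self, msg, *args):
        self.msg, self.args = msg, args
    def __str__(self):
        # format() runs only when the record is rendered.
        return self.msg.format(*self.args)

m = LazyMessage('imported {0} of {1}', 3, 10)
print(str(m))  # -> 'imported 3 of 10'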
class ThreadLocalLevelLogger(Logger):
"""A version of `Logger` whose level is thread-local instead of shared.
"""
def __init__(self, name, level=NOTSET):
self._thread_level = threading.local()
self.default_level = NOTSET
super().__init__(name, level)
super(ThreadLocalLevelLogger, self).__init__(name, level)
@property
def level(self):

File diff suppressed because it is too large

View file

@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
# This file is part of beets.
# Copyright 2016, Adrian Sampson.
#
@ -14,19 +15,19 @@
"""Support for beets plugins."""
from __future__ import division, absolute_import, print_function
import inspect
import traceback
import re
import inspect
import abc
from collections import defaultdict
from functools import wraps
import beets
from beets import logging
import mediafile
from beets import mediafile
import six
PLUGIN_NAMESPACE = 'beetsplug'
@ -49,28 +50,26 @@ class PluginLogFilter(logging.Filter):
"""A logging filter that identifies the plugin that emitted a log
message.
"""
def __init__(self, plugin):
self.prefix = f'{plugin.name}: '
self.prefix = u'{0}: '.format(plugin.name)
def filter(self, record):
if hasattr(record.msg, 'msg') and isinstance(record.msg.msg,
str):
six.string_types):
# A _LogMessage from our hacked-up Logging replacement.
record.msg.msg = self.prefix + record.msg.msg
elif isinstance(record.msg, str):
elif isinstance(record.msg, six.string_types):
record.msg = self.prefix + record.msg
return True
# Managing the plugins themselves.
class BeetsPlugin:
class BeetsPlugin(object):
"""The base class for all beets plugins. Plugins provide
functionality by defining a subclass of BeetsPlugin and overriding
the abstract methods defined here.
"""
def __init__(self, name=None):
"""Perform one-time plugin setup.
"""
@ -128,24 +127,27 @@ class BeetsPlugin:
value after the function returns). Also determines which params may not
be sent for backwards-compatibility.
"""
argspec = inspect.getfullargspec(func)
argspec = inspect.getargspec(func)
@wraps(func)
def wrapper(*args, **kwargs):
assert self._log.level == logging.NOTSET
verbosity = beets.config['verbose'].get(int)
log_level = max(logging.DEBUG, base_log_level - 10 * verbosity)
self._log.setLevel(log_level)
if argspec.varkw is None:
kwargs = {k: v for k, v in kwargs.items()
if k in argspec.args}
try:
return func(*args, **kwargs)
try:
return func(*args, **kwargs)
except TypeError as exc:
if exc.args[0].startswith(func.__name__):
# caused by 'func' and not stuff internal to 'func'
kwargs = dict((arg, val) for arg, val in kwargs.items()
if arg in argspec.args)
return func(*args, **kwargs)
else:
raise
finally:
self._log.setLevel(logging.NOTSET)
return wrapper
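
The level arithmetic above: each -v step lowers the plugin logger by one standard level (10 points), floored at DEBUG. With a base of WARNING (an assumption here; the actual base is whatever `base_log_level` is passed in):

import logging

base = logging.WARNING  # 30; stands in for base_log_level
for verbosity in (0, 1, 2, 3):
    print(max(logging.DEBUG, base - 10 * verbosity))
# -> 30 (WARNING), 20 (INFO), 10 (DEBUG), 10 (still DEBUG)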
def queries(self):
@ -165,7 +167,7 @@ class BeetsPlugin:
"""
return beets.autotag.hooks.Distance()
def candidates(self, items, artist, album, va_likely, extra_tags=None):
def candidates(self, items, artist, album, va_likely):
"""Should return a sequence of AlbumInfo objects that match the
album whose items are provided.
"""
@ -199,7 +201,7 @@ class BeetsPlugin:
``descriptor`` must be an instance of ``mediafile.MediaField``.
"""
# Defer import to prevent circular dependency
# Defer impor to prevent circular dependency
from beets import library
mediafile.MediaFile.add_field(name, descriptor)
library.Item._media_fields.add(name)
@ -262,14 +264,14 @@ def load_plugins(names=()):
BeetsPlugin subclasses desired.
"""
for name in names:
modname = f'{PLUGIN_NAMESPACE}.{name}'
modname = '{0}.{1}'.format(PLUGIN_NAMESPACE, name)
try:
try:
namespace = __import__(modname, None, None)
except ImportError as exc:
# Again, this is hacky:
if exc.args[0].endswith(' ' + name):
log.warning('** plugin {0} not found', name)
log.warning(u'** plugin {0} not found', name)
else:
raise
else:
@ -280,7 +282,7 @@ def load_plugins(names=()):
except Exception:
log.warning(
'** error loading plugin {}:\n{}',
u'** error loading plugin {}:\n{}',
name,
traceback.format_exc(),
)
@ -294,11 +296,6 @@ def find_plugins():
currently loaded beets plugins. Loads the default plugin set
first.
"""
if _instances:
# After the first call, use cached instances for performance reasons.
# See https://github.com/beetbox/beets/pull/3810
return list(_instances.values())
load_plugins()
plugins = []
for cls in _classes:
@ -332,31 +329,21 @@ def queries():
def types(model_cls):
# Gives us `item_types` and `album_types`
attr_name = f'{model_cls.__name__.lower()}_types'
attr_name = '{0}_types'.format(model_cls.__name__.lower())
types = {}
for plugin in find_plugins():
plugin_types = getattr(plugin, attr_name, {})
for field in plugin_types:
if field in types and plugin_types[field] != types[field]:
raise PluginConflictException(
'Plugin {} defines flexible field {} '
'which has already been defined with '
'another type.'.format(plugin.name, field)
u'Plugin {0} defines flexible field {1} '
u'which has already been defined with '
u'another type.'.format(plugin.name, field)
)
types.update(plugin_types)
return types
def named_queries(model_cls):
# Gather `item_queries` and `album_queries` from the plugins.
attr_name = f'{model_cls.__name__.lower()}_queries'
queries = {}
for plugin in find_plugins():
plugin_queries = getattr(plugin, attr_name, {})
queries.update(plugin_queries)
return queries
def track_distance(item, info):
"""Gets the track distance calculated by all loaded plugins.
Returns a Distance object.
@ -377,19 +364,20 @@ def album_distance(items, album_info, mapping):
return dist
def candidates(items, artist, album, va_likely, extra_tags=None):
def candidates(items, artist, album, va_likely):
"""Gets MusicBrainz candidates for an album from each plugin.
"""
for plugin in find_plugins():
yield from plugin.candidates(items, artist, album, va_likely,
extra_tags)
for candidate in plugin.candidates(items, artist, album, va_likely):
yield candidate
def item_candidates(item, artist, title):
"""Gets MusicBrainz candidates for an item from the plugins.
"""
for plugin in find_plugins():
yield from plugin.item_candidates(item, artist, title)
for item_candidate in plugin.item_candidates(item, artist, title):
yield item_candidate
def album_for_id(album_id):
@ -482,7 +470,7 @@ def send(event, **arguments):
Return a list of non-None values returned from the handlers.
"""
log.debug('Sending event: {0}', event)
log.debug(u'Sending event: {0}', event)
results = []
for handler in event_handlers()[event]:
result = handler(**arguments)
@ -500,7 +488,7 @@ def feat_tokens(for_artist=True):
feat_words = ['ft', 'featuring', 'feat', 'feat.', 'ft.']
if for_artist:
feat_words += ['with', 'vs', 'and', 'con', '&']
return r'(?<=\s)(?:{})(?=\s)'.format(
return '(?<=\s)(?:{0})(?=\s)'.format(
'|'.join(re.escape(x) for x in feat_words)
)
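
The pattern above, built and applied: the lookbehind/lookahead pair means a feat word matches only when it stands alone between whitespace, never inside another word:

import re

feat_words = ['ft', 'featuring', 'feat', 'feat.', 'ft.']
pattern = r'(?<=\s)(?:{})(?=\s)'.format(
    '|'.join(re.escape(x) for x in feat_words))
print(re.search(pattern, 'Alice feat. Bob').group(0))  # -> 'feat.'
print(re.search(pattern, 'Defeated Again'))            # -> None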
@ -525,7 +513,7 @@ def sanitize_choices(choices, choices_all):
def sanitize_pairs(pairs, pairs_all):
"""Clean up a single-element mapping configuration attribute as returned
by Confuse's `Pairs` template: keep only two-element tuples present in
by `confit`'s `Pairs` template: keep only two-element tuples present in
pairs_all, remove duplicate elements, expand ('str', '*') and ('*', '*')
wildcards while keeping the original order. Note that ('*', '*') and
('*', 'whatever') have the same effect.
@ -575,188 +563,3 @@ def notify_info_yielded(event):
yield v
return decorated
return decorator
def get_distance(config, data_source, info):
"""Returns the ``data_source`` weight and the maximum source weight
for albums or individual tracks.
"""
dist = beets.autotag.Distance()
if info.data_source == data_source:
dist.add('source', config['source_weight'].as_number())
return dist
def apply_item_changes(lib, item, move, pretend, write):
"""Store, move, and write the item according to the arguments.
:param lib: beets library.
:type lib: beets.library.Library
:param item: Item whose changes to apply.
:type item: beets.library.Item
:param move: Move the item if it's in the library.
:type move: bool
:param pretend: Return without moving, writing, or storing the item's
metadata.
:type pretend: bool
:param write: Write the item's metadata to its media file.
:type write: bool
"""
if pretend:
return
from beets import util
# Move the item if it's in the library.
if move and lib.directory in util.ancestry(item.path):
item.move(with_album=False)
if write:
item.try_write()
item.store()
class MetadataSourcePlugin(metaclass=abc.ABCMeta):
def __init__(self):
super().__init__()
self.config.add({'source_weight': 0.5})
@abc.abstractproperty
def id_regex(self):
raise NotImplementedError
@abc.abstractproperty
def data_source(self):
raise NotImplementedError
@abc.abstractproperty
def search_url(self):
raise NotImplementedError
@abc.abstractproperty
def album_url(self):
raise NotImplementedError
@abc.abstractproperty
def track_url(self):
raise NotImplementedError
@abc.abstractmethod
def _search_api(self, query_type, filters, keywords=''):
raise NotImplementedError
@abc.abstractmethod
def album_for_id(self, album_id):
raise NotImplementedError
@abc.abstractmethod
def track_for_id(self, track_id=None, track_data=None):
raise NotImplementedError
@staticmethod
def get_artist(artists, id_key='id', name_key='name'):
"""Returns an artist string (all artists) and an artist_id (the main
artist) for a list of artist object dicts.
For each artist, this function moves articles (such as 'a', 'an',
and 'the') to the front and strips trailing disambiguation numbers. It
returns a tuple containing the comma-separated string of all
normalized artists and the ``id`` of the main/first artist.
:param artists: Iterable of artist dicts or lists returned by API.
:type artists: list[dict] or list[list]
:param id_key: Key or index corresponding to the value of ``id`` for
the main/first artist. Defaults to 'id'.
:type id_key: str or int
:param name_key: Key or index corresponding to values of names
to concatenate for the artist string (containing all artists).
Defaults to 'name'.
:type name_key: str or int
:return: Normalized artist string.
:rtype: str
"""
artist_id = None
artist_names = []
for artist in artists:
if not artist_id:
artist_id = artist[id_key]
name = artist[name_key]
# Strip disambiguation number.
name = re.sub(r' \(\d+\)$', '', name)
# Move articles to the front.
name = re.sub(r'^(.*?), (a|an|the)$', r'\2 \1', name, flags=re.I)
artist_names.append(name)
artist = ', '.join(artist_names).replace(' ,', ',') or None
return artist, artist_id
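
The two per-name normalizations above, step by step on a sample Discogs-style name (the input is illustrative):

import re

name = 'Beatles, The (2)'
name = re.sub(r' \(\d+\)$', '', name)  # strip ' (2)' -> 'Beatles, The'
name = re.sub(r'^(.*?), (a|an|the)$', r'\2 \1', name, flags=re.I)
print(name)  # -> 'The Beatles'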
def _get_id(self, url_type, id_):
"""Parse an ID from its URL if necessary.
:param url_type: Type of URL. Either 'album' or 'track'.
:type url_type: str
:param id_: Album/track ID or URL.
:type id_: str
:return: Album/track ID.
:rtype: str
"""
self._log.debug(
"Searching {} for {} '{}'", self.data_source, url_type, id_
)
match = re.search(self.id_regex['pattern'].format(url_type), str(id_))
if match:
id_ = match.group(self.id_regex['match_group'])
if id_:
return id_
return None
def candidates(self, items, artist, album, va_likely, extra_tags=None):
"""Returns a list of AlbumInfo objects for Search API results
matching an ``album`` and ``artist`` (if not various).
:param items: List of items comprised by an album to be matched.
:type items: list[beets.library.Item]
:param artist: The artist of the album to be matched.
:type artist: str
:param album: The name of the album to be matched.
:type album: str
:param va_likely: True if the album to be matched likely has
Various Artists.
:type va_likely: bool
:return: Candidate AlbumInfo objects.
:rtype: list[beets.autotag.hooks.AlbumInfo]
"""
query_filters = {'album': album}
if not va_likely:
query_filters['artist'] = artist
results = self._search_api(query_type='album', filters=query_filters)
albums = [self.album_for_id(album_id=r['id']) for r in results]
return [a for a in albums if a is not None]
def item_candidates(self, item, artist, title):
"""Returns a list of TrackInfo objects for Search API results
matching ``title`` and ``artist``.
:param item: Singleton item to be matched.
:type item: beets.library.Item
:param artist: The artist of the track to be matched.
:type artist: str
:param title: The title of the track to be matched.
:type title: str
:return: Candidate TrackInfo objects.
:rtype: list[beets.autotag.hooks.TrackInfo]
"""
tracks = self._search_api(
query_type='track', keywords=title, filters={'artist': artist}
)
return [self.track_for_id(track_data=track) for track in tracks]
def album_distance(self, items, album_info, mapping):
return get_distance(
data_source=self.data_source, info=album_info, config=self.config
)
def track_distance(self, item, track_info):
return get_distance(
data_source=self.data_source, info=track_info, config=self.config
)

View file

@ -1,113 +0,0 @@
# This file is part of beets.
# Copyright 2016, Philippe Mongeau.
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
"""Get a random song or album from the library.
"""
import random
from operator import attrgetter
from itertools import groupby
def _length(obj, album):
"""Get the duration of an item or album.
"""
if album:
return sum(i.length for i in obj.items())
else:
return obj.length
def _equal_chance_permutation(objs, field='albumartist', random_gen=None):
"""Generate (lazily) a permutation of the objects where every group
with equal values for `field` has an equal chance of appearing in
any given position.
"""
rand = random_gen or random
# Group the objects by artist so we can sample from them.
key = attrgetter(field)
objs.sort(key=key)
objs_by_artists = {}
for artist, v in groupby(objs, key):
objs_by_artists[artist] = list(v)
# While we still have artists with music to choose from, pick one
# randomly and pick a track from that artist.
while objs_by_artists:
# Choose an artist and an object for that artist, removing
# this choice from the pool.
artist = rand.choice(list(objs_by_artists.keys()))
objs_from_artist = objs_by_artists[artist]
i = rand.randint(0, len(objs_from_artist) - 1)
yield objs_from_artist.pop(i)
# Remove the artist if we've used up all of its objects.
if not objs_from_artist:
del objs_by_artists[artist]
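
The selection loop above, run on a toy pool: each artist is drawn with equal probability on every round, regardless of how many tracks remain in their pool, so prolific artists are not over-represented early:

import random

pools = {'A': ['t1', 't2', 't3'], 'B': ['t4']}
order = []
while pools:
    artist = random.choice(list(pools.keys()))
    tracks = pools[artist]
    order.append(tracks.pop(random.randint(0, len(tracks) - 1)))
    if not tracks:
        del pools[artist]  # artist exhausted; drop from the pool
print(order)  # e.g. ['t4', 't1', 't3', 't2']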
def _take(iter, num):
"""Return a list containing the first `num` values in `iter` (or
fewer, if the iterable ends early).
"""
out = []
for val in iter:
out.append(val)
num -= 1
if num <= 0:
break
return out
def _take_time(iter, secs, album):
"""Return a list containing the first values in `iter`, which should
be Item or Album objects, that add up to the given amount of time in
seconds.
"""
out = []
total_time = 0.0
for obj in iter:
length = _length(obj, album)
if total_time + length <= secs:
out.append(obj)
total_time += length
return out
def random_objs(objs, album, number=1, time=None, equal_chance=False,
random_gen=None):
"""Get a random subset of the provided `objs`.
If `number` is provided, produce that many matches. Otherwise, if
`time` is provided, instead select a list whose total time is close
to that number of minutes. If `equal_chance` is true, give each
artist an equal chance of being included so that artists with more
songs are not represented disproportionately.
"""
rand = random_gen or random
# Permute the objects either in a straightforward way or an
# artist-balanced way.
if equal_chance:
perm = _equal_chance_permutation(objs)
else:
perm = objs
rand.shuffle(perm) # N.B. This shuffles the original list.
# Select objects by time or count.
if time:
return _take_time(perm, time * 60, album)
else:
return _take(perm, number)

Some files were not shown because too many files have changed in this diff