Upgrade to Python 3.7+ and pre-commit linting

Labrys of Knossos 2022-12-09 15:15:05 -05:00
commit 820267a1d2
66 changed files with 530 additions and 719 deletions
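Most of the additions below mechanically replace `str.format` calls with f-strings, which require Python 3.6+. A minimal sketch of the pattern, using one of the log lines from this diff (the variable value is illustrative):

```python
input_directory = '/downloads/complete'

# Old style, removed throughout this commit:
old = 'Adding TORRENT download info for directory {0} to database'.format(input_directory)

# New style, added in its place:
new = f'Adding TORRENT download info for directory {input_directory} to database'

assert old == new  # the conversion is behavior-preserving
print(new)
```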

.gitattributes (vendored): 2 lines changed

@@ -1,7 +1,7 @@
# Set default behaviour, in case users don't have core.autocrlf set.
* text=auto
# Explicitly declare text files we want to always be normalized and converted
# to native line endings on checkout.
*.txt text


@@ -11,4 +11,4 @@ Please note we have a code of conduct, please follow it in all your interactions
1. Please base all pull requests on the current nightly branch.
2. Include a description to explain what is achieved with a pull request.
3. Link any relevant issues that are closed or impacted by the pull request.
4. Please update the FAQ to reflect any new parameters, changed behaviour, or suggested configurations relevant to the changes.

.github/README.md (vendored): 14 lines changed

@@ -2,7 +2,7 @@ nzbToMedia
==========
Provides an [efficient](https://github.com/clinton-hall/nzbToMedia/wiki/Efficient-on-demand-post-processing) way to handle postprocessing for [CouchPotatoServer](https://couchpota.to/ "CouchPotatoServer") and [SickBeard](http://sickbeard.com/ "SickBeard") (and its [forks](https://github.com/clinton-hall/nzbToMedia/wiki/Failed-Download-Handling-%28FDH%29#sick-beard-and-its-forks))
when using one of the popular NZB download clients like [SABnzbd](http://sabnzbd.org/ "SABnzbd") and [NZBGet](http://nzbget.sourceforge.net/ "NZBGet") on low-performance systems like a NAS.
This script is based on sabToSickBeard (written by Nic Wolfe and supplied with SickBeard), with the support for NZBGet being added by [thorli](https://github.com/thorli "thorli") and further contributions by [schumi2004](https://github.com/schumi2004 "schumi2004") and [hugbug](https://sourceforge.net/apps/phpbb/nzbget/memberlist.php?mode=viewprofile&u=67 "hugbug").
Torrent support added by [jkaberg](https://github.com/jkaberg "jkaberg") and [berkona](https://github.com/berkona "berkona").
Corrupt video checking, auto SickBeard fork determination, and a whole lot of code improvement were done by [echel0n](https://github.com/echel0n "echel0n")
@@ -11,7 +11,7 @@ Python3 compatibility, and much cleaner code base has been contributed by [Labry
Introduction
------------
Originally this was modified from the SickBeard version to allow for ["on-demand" renaming](https://github.com/clinton-hall/nzbToMedia/wiki/Efficient-on-demand-post-processing) and not have my QNAP TS-412 NAS constantly scanning the download directory.
Later, a few failed downloads prompted me to incorporate ["failed download" handling](https://github.com/clinton-hall/nzbToMedia/wiki/Failed-Download-Handling-%28FDH%29).
Failed download handling is now provided for SABnzbd by CouchPotatoServer; however, on ARM processors (e.g. small NAS systems) this can be unreliable.
@@ -23,13 +23,13 @@ Full support is provided for [SickChill](https://github.com/SickChill/SickChill)
Torrent support has been added with the assistance of jkaberg and berkona. It currently supports uTorrent, Transmission, Deluge, and possibly more.
To enable torrent extraction on Windows, you need to install [7-zip](http://www.7-zip.org/ "7-zip"); on *nix, you need to install the following packages/commands:
"unrar", "unzip", "tar", "7zr"
Note: "7zr" is available from the p7zip package, which is available on Optware.
To use the transcoding option and corrupt video checking, you will need to install ffmpeg (and ffprobe).
Installation instructions are available in the [wiki](https://github.com/clinton-hall/nzbToMedia/wiki/Transcoder "wiki").
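A quick way to check which of these external tools are already on the PATH before enabling extraction or transcoding (a standard-library sketch, not a script shipped with the repo):

```python
import shutil

# Extraction helpers plus the transcoder dependencies mentioned above.
required = ['unrar', 'unzip', 'tar', '7zr', 'ffmpeg', 'ffprobe']
found = {tool: shutil.which(tool) is not None for tool in required}

for tool, ok in sorted(found.items()):
    print(f'{tool}: {"found" if ok else "missing"}')
```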
Contribution
------------
We who have developed nzbToMedia believe in the openness of open-source, and as such we hope that any modifications will lead back to the [original repo](https://github.com/clinton-hall/nzbToMedia "original repo") via pull requests.
@@ -42,7 +42,7 @@ Contributors: Can be viewed [here](https://github.com/clinton-hall/nzbToMedia/co
Installation
------------
**See more detailed instructions in the [wiki](https://github.com/clinton-hall/nzbToMedia/wiki "wiki")**
### Windows
@@ -56,9 +56,9 @@ Sorry for any inconvenience caused here.
1. Install `pywin32`
1. Clone or copy all files into a directory wherever you want to keep them (eg. /scripts/ in the home directory of your download client)
and change the permission accordingly so the download client can access these files.
`git clone git://github.com/clinton-hall/nzbToMedia.git`
### Configuration


@@ -1,12 +1,5 @@
#!/usr/bin/env python
# coding=utf-8
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
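Dropping this `__future__` block is safe because on Python 3 every one of these features is the default behavior; a quick check:

```python
# On Python 3 these imports are no-ops: the behaviors are built in.
assert 3 / 2 == 1.5             # true division (the `division` feature)
assert isinstance('text', str)  # string literals are unicode (`unicode_literals`)
print('print is a function')    # `print_function`
```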
import datetime
import os
@@ -37,7 +30,7 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
found_file = 0
if client_agent != 'manual' and not core.DOWNLOAD_INFO:
logger.debug('Adding TORRENT download info for directory {0} to database'.format(input_directory))
logger.debug(f'Adding TORRENT download info for directory {input_directory} to database')
my_db = main_db.DBConnection()
@@ -61,7 +54,7 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
}
my_db.upsert('downloads', new_value_dict, control_value_dict)
logger.debug('Received Directory: {0} | Name: {1} | Category: {2}'.format(input_directory, input_name, input_category))
logger.debug(f'Received Directory: {input_directory} | Name: {input_name} | Category: {input_category}')
# Confirm the category by parsing directory structure
input_directory, input_name, input_category, root = core.category_search(input_directory, input_name, input_category,
@@ -71,8 +64,7 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
usercat = input_category
logger.debug('Determined Directory: {0} | Name: {1} | Category: {2}'.format
(input_directory, input_name, input_category))
logger.debug(f'Determined Directory: {input_directory} | Name: {input_name} | Category: {input_category}')
# auto-detect section
section = core.CFG.findsection(input_category).isenabled()
@@ -84,26 +76,18 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
section = core.CFG.findsection('UNCAT').isenabled()
usercat = 'UNCAT'
if section is None: # We haven't found any categories to process.
logger.error('Category:[{0}] is not defined or is not enabled. '
'Please rename it or ensure it is enabled for the appropriate section '
'in your autoProcessMedia.cfg and try again.'.format
(input_category))
logger.error(f'Category:[{input_category}] is not defined or is not enabled. Please rename it or ensure it is enabled for the appropriate section in your autoProcessMedia.cfg and try again.')
return [-1, '']
if len(section) > 1:
logger.error('Category:[{0}] is not unique, {1} are using it. '
'Please rename it or disable all other sections using the same category name '
'in your autoProcessMedia.cfg and try again.'.format
(usercat, section.keys()))
logger.error(f'Category:[{usercat}] is not unique, {section.keys()} are using it. Please rename it or disable all other sections using the same category name in your autoProcessMedia.cfg and try again.')
return [-1, '']
if section:
section_name = section.keys()[0]
logger.info('Auto-detected SECTION:{0}'.format(section_name))
logger.info(f'Auto-detected SECTION:{section_name}')
else:
logger.error('Unable to locate a section with subsection:{0} '
'enabled in your autoProcessMedia.cfg, exiting!'.format
(input_category))
logger.error(f'Unable to locate a section with subsection:{input_category} enabled in your autoProcessMedia.cfg, exiting!')
return [-1, '']
section = dict(section[section_name][usercat]) # Type cast to dict() to allow effective usage of .get()
@@ -134,15 +118,13 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
if output_destination in input_directory:
output_destination = input_directory
logger.info('Output directory set to: {0}'.format(output_destination))
logger.info(f'Output directory set to: {output_destination}')
if core.SAFE_MODE and output_destination == core.TORRENT_DEFAULT_DIRECTORY:
logger.error('The output directory:[{0}] is the Download Directory. '
'Edit outputDirectory in autoProcessMedia.cfg. Exiting'.format
(input_directory))
logger.error(f'The output directory:[{input_directory}] is the Download Directory. Edit outputDirectory in autoProcessMedia.cfg. Exiting')
return [-1, '']
logger.debug('Scanning files in directory: {0}'.format(input_directory))
logger.debug(f'Scanning files in directory: {input_directory}')
if section_name in ['HeadPhones', 'Lidarr']:
core.NOFLATTEN.extend(
@@ -156,9 +138,9 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
input_files = core.list_media_files(input_directory, other=True, otherext=extensions)
if len(input_files) == 0 and os.path.isfile(input_directory):
input_files = [input_directory]
logger.debug('Found 1 file to process: {0}'.format(input_directory))
logger.debug(f'Found 1 file to process: {input_directory}')
else:
logger.debug('Found {0} files in {1}'.format(len(input_files), input_directory))
logger.debug(f'Found {len(input_files)} files in {input_directory}')
for inputFile in input_files:
file_path = os.path.dirname(inputFile)
file_name, file_ext = os.path.splitext(os.path.basename(inputFile))
@@ -169,16 +151,14 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
if not os.path.basename(file_path) in output_destination:
target_file = core.os.path.join(
core.os.path.join(output_destination, os.path.basename(file_path)), full_file_name)
logger.debug('Setting outputDestination to {0} to preserve folder structure'.format
(os.path.dirname(target_file)))
logger.debug(f'Setting outputDestination to {os.path.dirname(target_file)} to preserve folder structure')
if root == 1:
if not found_file:
logger.debug('Looking for {0} in: {1}'.format(input_name, inputFile))
logger.debug(f'Looking for {input_name} in: {inputFile}')
if any([core.sanitize_name(input_name) in core.sanitize_name(inputFile),
core.sanitize_name(file_name) in core.sanitize_name(input_name)]):
found_file = True
logger.debug('Found file {0} that matches Torrent Name {1}'.format
(full_file_name, input_name))
logger.debug(f'Found file {full_file_name} that matches Torrent Name {input_name}')
else:
continue
@@ -190,8 +170,7 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
logger.debug('Looking for files with modified/created dates less than 5 minutes old.')
if (mtime_lapse < datetime.timedelta(minutes=5)) or (ctime_lapse < datetime.timedelta(minutes=5)):
found_file = True
logger.debug('Found file {0} with date modified/created less than 5 minutes ago.'.format
(full_file_name))
logger.debug(f'Found file {full_file_name} with date modified/created less than 5 minutes ago.')
else:
continue # This file has not been recently moved or created, skip it
@@ -200,12 +179,12 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
core.copy_link(inputFile, target_file, core.USE_LINK)
core.remove_read_only(target_file)
except Exception:
logger.error('Failed to link: {0} to {1}'.format(inputFile, target_file))
logger.error(f'Failed to link: {inputFile} to {target_file}')
input_name, output_destination = convert_to_ascii(input_name, output_destination)
if extract == 1:
logger.debug('Checking for archives to extract in directory: {0}'.format(input_directory))
logger.debug(f'Checking for archives to extract in directory: {input_directory}')
core.extract_files(input_directory, output_destination, keep_archive)
if input_category not in core.NOFLATTEN:
@@ -217,20 +196,20 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
num_videos = len(
core.list_media_files(output_destination, media=True, audio=False, meta=False, archives=False))
if num_videos > 0:
logger.info('Found {0} media files in {1}'.format(num_videos, output_destination))
logger.info(f'Found {num_videos} media files in {output_destination}')
status = 0
elif extract != 1:
logger.info('Found no media files in {0}. Sending to {1} to process'.format(output_destination, section_name))
logger.info(f'Found no media files in {output_destination}. Sending to {section_name} to process')
status = 0
else:
logger.warning('Found no media files in {0}'.format(output_destination))
logger.warning(f'Found no media files in {output_destination}')
# Only these sections can handle failed downloads
# so make sure everything else gets through without the check for failed
if section_name not in ['CouchPotato', 'Radarr', 'SickBeard', 'SiCKRAGE', 'NzbDrone', 'Sonarr', 'Watcher3']:
status = 0
logger.info('Calling {0}:{1} to post-process:{2}'.format(section_name, usercat, input_name))
logger.info(f'Calling {section_name}:{usercat} to post-process:{input_name}')
if core.TORRENT_CHMOD_DIRECTORY:
core.rchmod(output_destination, core.TORRENT_CHMOD_DIRECTORY)
@@ -283,10 +262,10 @@ def process_torrent(input_directory, input_name, input_category, input_hash, inp
# remove torrent
if core.USE_LINK == 'move-sym' and not core.DELETE_ORIGINAL == 1:
logger.debug('Checking for sym-links to re-direct in: {0}'.format(input_directory))
logger.debug(f'Checking for sym-links to re-direct in: {input_directory}')
for dirpath, _, files in os.walk(input_directory):
for file in files:
logger.debug('Checking symlink: {0}'.format(os.path.join(dirpath, file)))
logger.debug(f'Checking symlink: {os.path.join(dirpath, file)}')
replace_links(os.path.join(dirpath, file))
core.remove_torrent(client_agent, input_hash, input_id, input_name)
@@ -306,11 +285,11 @@ def main(args):
client_agent = core.TORRENT_CLIENT_AGENT
logger.info('#########################################################')
logger.info('## ..::[{0}]::.. ##'.format(os.path.basename(__file__)))
logger.info(f'## ..::[{os.path.basename(__file__)}]::.. ##')
logger.info('#########################################################')
# debug command line options
logger.debug('Options passed into TorrentToMedia: {0}'.format(args))
logger.debug(f'Options passed into TorrentToMedia: {args}')
# Post-Processing Result
result = ProcessResult(
@@ -337,22 +316,17 @@ def main(args):
if not core.CFG[section][subsection].isenabled():
continue
for dir_name in core.get_dirs(section, subsection, link='hard'):
logger.info('Starting manual run for {0}:{1} - Folder:{2}'.format
(section, subsection, dir_name))
logger.info(f'Starting manual run for {section}:{subsection} - Folder:{dir_name}')
logger.info('Checking database for download info for {0} ...'.format
(os.path.basename(dir_name)))
logger.info(f'Checking database for download info for {os.path.basename(dir_name)} ...')
core.DOWNLOAD_INFO = core.get_download_info(os.path.basename(dir_name), 0)
if core.DOWNLOAD_INFO:
client_agent = text_type(core.DOWNLOAD_INFO[0]['client_agent']) or 'manual'
input_hash = text_type(core.DOWNLOAD_INFO[0]['input_hash']) or ''
input_id = text_type(core.DOWNLOAD_INFO[0]['input_id']) or ''
logger.info('Found download info for {0}, '
'setting variables now ...'.format(os.path.basename(dir_name)))
logger.info(f'Found download info for {os.path.basename(dir_name)}, setting variables now ...')
else:
logger.info('Unable to locate download info for {0}, '
'continuing to try and process this release ...'.format
(os.path.basename(dir_name)))
logger.info(f'Unable to locate download info for {os.path.basename(dir_name)}, continuing to try and process this release ...')
client_agent = 'manual'
input_hash = ''
input_id = ''
@@ -365,14 +339,13 @@ def main(args):
results = process_torrent(dir_name, input_name, subsection, input_hash or None, input_id or None,
client_agent)
if results.status_code != 0:
logger.error('A problem was reported when trying to perform a manual run for {0}:{1}.'.format
(section, subsection))
logger.error(f'A problem was reported when trying to perform a manual run for {section}:{subsection}.')
result = results
if result.status_code == 0:
logger.info('The {0} script completed successfully.'.format(args[0]))
logger.info(f'The {args[0]} script completed successfully.')
else:
logger.error('A problem was reported in the {0} script.'.format(args[0]))
logger.error(f'A problem was reported in the {args[0]} script.')
del core.MYAPP
return result.status_code


@@ -1 +1 @@
theme: jekyll-theme-cayman


@@ -29,7 +29,7 @@
# Enable/Disable media file checking using ffprobe.
check_media = 1
# Required media audio language for media to be deemed valid. Leave blank to disregard media audio language check.
require_lan =
# Enable/Disable a safety check to ensure we don't process all downloads in the default_downloadDirectories by mistake.
safe_mode = 1
# Turn this on to disable additional extraction attempts for failed downloads. Default = 0 will attempt to extract and verify if media is present.


@@ -88,7 +88,7 @@ def git_clean(remove_directories=False, force=False, dry_run=False, interactive=
except AttributeError:
pass
for exclusion in exclude:
command.append('--exclude={pattern}'.format(pattern=exclusion))
command.append(f'--exclude={exclusion}')
if ignore_rules:
command.append('-x')
if clean_ignored:
@@ -116,9 +116,9 @@ def clean_bytecode():
)
print(result)
except subprocess.CalledProcessError as error:
sys.exit('Error Code: {}'.format(error.returncode))
except (IOError, OSError) as error:
sys.exit('Error: {}'.format(error))
sys.exit(f'Error Code: {error.returncode}')
except OSError as error:
sys.exit(f'Error: {error}')
else:
return result
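Narrowing `except (IOError, OSError)` to `except OSError`, as these hunks do, loses nothing: `IOError` has been an alias of `OSError` since Python 3.3.

```python
# IOError survives only as a backward-compatibility alias; it *is* OSError.
assert IOError is OSError

try:
    open('/nonexistent/path/for/demo')
except OSError as error:  # also catches anything raised as IOError
    print(f'Error: {error}')
```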
@@ -133,9 +133,9 @@ def clean_folders(*paths):
paths=paths,
)
except subprocess.CalledProcessError as error:
sys.exit('Error Code: {}'.format(error.returncode))
except (IOError, OSError) as error:
sys.exit('Error: {}'.format(error))
sys.exit(f'Error Code: {error.returncode}')
except OSError as error:
sys.exit(f'Error: {error}')
else:
return result
@@ -166,7 +166,7 @@ def clean(paths):
def _report_error(msg):
print('WARNING: Automatic cleanup could not be executed.')
print(' If errors occur, manual cleanup may be required.')
print('REASON : {}'.format(msg))
print(f'REASON : {msg}')
with WorkingDirectory(module_path()) as cwd:
if cwd.working_directory != cwd.original_directory:
@@ -181,7 +181,7 @@ def clean(paths):
print(result or 'No bytecode to clean')
if paths and os.path.exists('.git'):
print('\n-- Cleaning folders: {} --'.format(list(paths)))
print(f'\n-- Cleaning folders: {list(paths)} --')
try:
result = clean_folders(*paths)
except SystemExit as error:


@@ -295,7 +295,7 @@ def configure_locale():
try:
locale.setlocale(locale.LC_ALL, '')
SYS_ENCODING = locale.getpreferredencoding()
except (locale.Error, IOError):
except (locale.Error, OSError):
pass
# For OSes that are poorly configured I'll just randomly force UTF-8
@@ -309,7 +309,7 @@ def configure_migration():
# run migrate to convert old cfg to new style cfg plus fix any cfg missing values/options.
if not config.migrate():
logger.error('Unable to migrate config file {0}, exiting ...'.format(CONFIG_FILE))
logger.error(f'Unable to migrate config file {CONFIG_FILE}, exiting ...')
if 'NZBOP_SCRIPTDIR' in os.environ:
pass # We will try and read config from Environment.
else:
@@ -320,7 +320,7 @@ def configure_migration():
CFG = config.addnzbget()
else: # load newly migrated config
logger.info('Loading config from [{0}]'.format(CONFIG_FILE))
logger.info(f'Loading config from [{CONFIG_FILE}]')
CFG = config()
@@ -338,7 +338,7 @@ def configure_logging_part_2():
if LOG_ENV:
for item in os.environ:
logger.info('{0}: {1}'.format(item, os.environ[item]), 'ENVIRONMENT')
logger.info(f'{item}: {os.environ[item]}', 'ENVIRONMENT')
def configure_general():
@@ -441,23 +441,26 @@ def configure_niceness():
with open(os.devnull, 'w') as devnull:
try:
subprocess.Popen(['nice'], stdout=devnull, stderr=devnull).communicate()
if len(CFG['Posix']['niceness'].split(',')) > 1: #Allow passing of absolute command, not just value.
NICENESS.extend(CFG['Posix']['niceness'].split(','))
niceness = CFG['Posix']['niceness']
if len(niceness.split(',')) > 1: #Allow passing of absolute command, not just value.
NICENESS.extend(niceness.split(','))
else:
NICENESS.extend(['nice', '-n{0}'.format(int(CFG['Posix']['niceness']))])
NICENESS.extend(['nice', f'-n{int(niceness)}'])
except Exception:
pass
try:
subprocess.Popen(['ionice'], stdout=devnull, stderr=devnull).communicate()
try:
NICENESS.extend(['ionice', '-c{0}'.format(int(CFG['Posix']['ionice_class']))])
ionice = CFG['Posix']['ionice_class']
NICENESS.extend(['ionice', f'-c{int(ionice)}'])
except Exception:
pass
try:
if 'ionice' in NICENESS:
NICENESS.extend(['-n{0}'.format(int(CFG['Posix']['ionice_classdata']))])
ionice = CFG['Posix']['ionice_classdata']
NICENESS.extend([f'-n{int(ionice)}'])
else:
NICENESS.extend(['ionice', '-n{0}'.format(int(CFG['Posix']['ionice_classdata']))])
NICENESS.extend(['ionice', f'-n{int(ionice)}'])
except Exception:
pass
except Exception:
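The refactored niceness logic above can be exercised standalone; this sketch mirrors the same flow, with a hypothetical helper name and illustrative defaults standing in for the `[Posix]` options in autoProcessMedia.cfg:

```python
def build_niceness(niceness='10', ionice_class='2', ionice_classdata='4'):
    """Sketch of the command-prefix construction in configure_niceness().

    Argument defaults stand in for the [Posix] config options; they are
    illustrative values, not the repo's defaults.
    """
    prefix = []
    if len(niceness.split(',')) > 1:
        # Allow passing of an absolute command, not just a value.
        prefix.extend(niceness.split(','))
    else:
        prefix.extend(['nice', f'-n{int(niceness)}'])
    prefix.extend(['ionice', f'-c{int(ionice_class)}'])
    if 'ionice' in prefix:
        prefix.append(f'-n{int(ionice_classdata)}')
    else:
        prefix.extend(['ionice', f'-n{int(ionice_classdata)}'])
    return prefix

print(build_niceness())
```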
@@ -473,7 +476,7 @@ def configure_containers():
COMPRESSED_CONTAINER = [re.compile(r'.r\d{2}$', re.I),
re.compile(r'.part\d+.rar$', re.I),
re.compile('.rar$', re.I)]
COMPRESSED_CONTAINER += [re.compile('{0}$'.format(ext), re.I) for ext in
COMPRESSED_CONTAINER += [re.compile(f'{ext}$', re.I) for ext in
CFG['Extensions']['compressedExtensions']]
MEDIA_CONTAINER = CFG['Extensions']['mediaExtensions']
AUDIO_CONTAINER = CFG['Extensions']['audioExtensions']


@@ -81,7 +81,7 @@ def process(
'dir': remote_dir(dir_name) if remote_path else dir_name,
}
logger.debug('Opening URL: {0} with params: {1}'.format(url, params), section)
logger.debug(f'Opening URL: {url} with params: {params}', section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 300))
@@ -92,21 +92,21 @@ def process(
f'{section}'
)
logger.postprocess('{0}'.format(r.text), section)
logger.postprocess(f'{r.text}', section)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Server returned status '
f'{r.status_code}'
)
elif r.text == 'OK':
logger.postprocess('SUCCESS: ForceProcess for {0} has been started in LazyLibrarian'.format(dir_name), section)
logger.postprocess(f'SUCCESS: ForceProcess for {dir_name} has been started in LazyLibrarian', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
else:
logger.error('FAILED: ForceProcess of {0} has Failed in LazyLibrarian'.format(dir_name), section)
logger.error(f'FAILED: ForceProcess of {dir_name} has Failed in LazyLibrarian', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Returned log from {section} '
f'was not as expected.'


@@ -94,7 +94,7 @@ def process(
success = False
logger.debug('Opening URL: {0}'.format(url), section)
logger.debug(f'Opening URL: {url}', section)
try:
r = requests.post(url, params=params, stream=True, verify=False, timeout=(30, 300))
except requests.ConnectionError:
@@ -104,7 +104,7 @@ def process(
f'{section}'
)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Server returned status '
f'{r.status_code}'
@@ -115,7 +115,7 @@ def process(
result = result.split('\n')
for line in result:
if line:
logger.postprocess('{0}'.format(line), section)
logger.postprocess(line, section)
if 'Post Processing SUCCESSFUL' in line:
success = True


@@ -32,17 +32,17 @@ def command_complete(url, params, headers, section):
try:
r = requests.get(url, params=params, headers=headers, stream=True, verify=False, timeout=(30, 60))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), section)
logger.error(f'Unable to open URL: {url}', section)
return None
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return None
else:
try:
return r.json()['status']
except (ValueError, KeyError):
# ValueError catches simplejson's JSONDecodeError and json's ValueError
logger.error('{0} did not return expected json data.'.format(section), section)
logger.error(f'{section} did not return expected json data.', section)
return None
@@ -50,10 +50,10 @@ def completed_download_handling(url2, headers, section='MAIN'):
try:
r = requests.get(url2, params={}, headers=headers, stream=True, verify=False, timeout=(30, 60))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url2), section)
logger.error(f'Unable to open URL: {url2}', section)
return False
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return False
else:
try:


@@ -88,7 +88,7 @@ def process(
'status': download_status,
}
logger.debug('Opening URL: {0}'.format(url), section)
logger.debug(f'Opening URL: {url}', section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 300))
@@ -100,13 +100,13 @@ def process(
)
result = r.json()
logger.postprocess('{0}'.format(result), section)
logger.postprocess(result, section)
if library:
logger.postprocess('moving files to library: {0}'.format(library), section)
logger.postprocess(f'moving files to library: {library}', section)
try:
shutil.move(dir_name, os.path.join(library, input_name))
except Exception:
logger.error('Unable to move {0} to {1}'.format(dir_name, os.path.join(library, input_name)), section)
logger.error(f'Unable to move {dir_name} to {os.path.join(library, input_name)}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Unable to move files'
)
@@ -118,18 +118,18 @@ def process(
)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Server returned status '
f'{r.status_code}'
)
elif result['success']:
logger.postprocess('SUCCESS: Status for {0} has been set to {1} in Gamez'.format(gamez_id, download_status), section)
logger.postprocess(f'SUCCESS: Status for {gamez_id} has been set to {download_status} in Gamez', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
else:
logger.error('FAILED: Status for {0} has NOT been updated in Gamez'.format(gamez_id), section)
logger.error(f'FAILED: Status for {gamez_id} has NOT been updated in Gamez', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Returned log from {section} '
f'was not as expected.'


@@ -137,7 +137,7 @@ def process(
input_name, dir_name = convert_to_ascii(input_name, dir_name)
if not list_media_files(dir_name, media=True, audio=False, meta=False, archives=False) and list_media_files(dir_name, media=False, audio=False, meta=False, archives=True) and extract:
logger.debug('Checking for archives to extract in directory: {0}'.format(dir_name))
logger.debug(f'Checking for archives to extract in directory: {dir_name}')
core.extract_files(dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
@@ -155,7 +155,7 @@ def process(
rename_subs(dir_name)
if num_files and valid_files == num_files:
if status:
logger.info('Status shown as failed from Downloader, but {0} valid video files found. Setting as success.'.format(good_files), section)
logger.info(f'Status shown as failed from Downloader, but {good_files} valid video files found. Setting as success.', section)
status = 0
elif num_files and valid_files < num_files:
logger.info('Status shown as success from Downloader, but corrupt video files found. Setting as failed.', section)
@@ -163,19 +163,19 @@ def process(
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if good_files == num_files:
logger.debug('Video marked as failed due to missing required language: {0}'.format(core.REQUIRE_LAN), section)
logger.debug(f'Video marked as failed due to missing required language: {core.REQUIRE_LAN}', section)
else:
logger.debug('Video marked as failed due to missing playable audio or video', section)
if good_files < num_files and failure_link: # only report corrupt files
failure_link += '&corrupt=true'
elif client_agent == 'manual':
logger.warning('No media files found in directory {0} to manually process.'.format(dir_name), section)
logger.warning(f'No media files found in directory {dir_name} to manually process.', section)
return ProcessResult(
message='',
status_code=0, # Success (as far as this script is concerned)
)
else:
logger.warning('No media files found in directory {0}. Processing this as a failed download'.format(dir_name), section)
logger.warning(f'No media files found in directory {dir_name}. Processing this as a failed download', section)
status = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
@@ -184,31 +184,31 @@ def process(
if core.TRANSCODE == 1:
result, new_dir_name = transcoder.transcode_directory(dir_name)
if result == 0:
logger.debug('Transcoding succeeded for files in {0}'.format(dir_name), section)
logger.debug(f'Transcoding succeeded for files in {dir_name}', section)
dir_name = new_dir_name
logger.debug('Config setting \'chmodDirectory\' currently set to {0}'.format(oct(chmod_directory)), section)
logger.debug(f'Config setting \'chmodDirectory\' currently set to {oct(chmod_directory)}', section)
if chmod_directory:
logger.info('Attempting to set the octal permission of \'{0}\' on directory \'{1}\''.format(oct(chmod_directory), dir_name), section)
logger.info(f'Attempting to set the octal permission of \'{oct(chmod_directory)}\' on directory \'{dir_name}\'', section)
core.rchmod(dir_name, chmod_directory)
else:
logger.error('Transcoding failed for files in {0}'.format(dir_name), section)
logger.error(f'Transcoding failed for files in {dir_name}', section)
return ProcessResult(
message='{0}: Failed to post-process - Transcoding failed'.format(section),
message=f'{section}: Failed to post-process - Transcoding failed',
status_code=1,
)
for video in list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
if not release and '.cp(tt' not in video and imdbid:
video_name, video_ext = os.path.splitext(video)
video2 = '{0}.cp({1}){2}'.format(video_name, imdbid, video_ext)
video2 = f'{video_name}.cp({imdbid}){video_ext}'
if not (client_agent in [core.TORRENT_CLIENT_AGENT, 'manual'] and core.USE_LINK == 'move-sym'):
logger.debug('Renaming: {0} to: {1}'.format(video, video2))
logger.debug(f'Renaming: {video} to: {video2}')
os.rename(video, video2)
if not apikey: # If only using Transcoder functions, exit here.
logger.info('No CouchPotato or Radarr or Watcher3 apikey entered. Processing completed.')
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
message=f'{section}: Successfully post-processed {input_name}',
status_code=0,
)
@@ -227,16 +227,16 @@ def process(
else:
command = 'renamer.scan'
url = '{0}{1}'.format(base_url, command)
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params), section)
logger.postprocess('Starting {0} scan for {1}'.format(method, input_name), section)
url = f'{base_url}{command}'
logger.debug(f'Opening URL: {url} with PARAMS: {params}', section)
logger.postprocess(f'Starting {method} scan for {input_name}', section)
if section == 'Radarr':
payload = {'name': 'DownloadedMoviesScan', 'path': params['media_folder'], 'downloadClientId': download_id, 'importMode': import_mode}
if not download_id:
payload.pop('downloadClientId')
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(base_url, payload), section)
logger.postprocess('Starting DownloadedMoviesScan scan for {0}'.format(input_name), section)
logger.debug(f'Opening URL: {base_url} with PARAMS: {payload}', section)
logger.postprocess(f'Starting DownloadedMoviesScan scan for {input_name}', section)
if section == 'Watcher3':
if input_name and os.path.isfile(os.path.join(dir_name, input_name)):
@@ -244,8 +244,8 @@ def process(
payload = {'apikey': apikey, 'path': params['media_folder'], 'guid': download_id, 'mode': 'complete'}
if not download_id:
payload.pop('guid')
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(base_url, payload), section)
logger.postprocess('Starting postprocessing scan for {0}'.format(input_name), section)
logger.debug(f'Opening URL: {base_url} with PARAMS: {payload}', section)
logger.postprocess(f'Starting postprocessing scan for {input_name}', section)
try:
if section == 'CouchPotato':
@@ -263,65 +263,66 @@ def process(
result = r.json()
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult(
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
message=f'{section}: Failed to post-process - Server returned status {r.status_code}',
status_code=1,
)
elif section == 'CouchPotato' and result['success']:
logger.postprocess('SUCCESS: Finished {0} scan for folder {1}'.format(method, dir_name), section)
logger.postprocess(f'SUCCESS: Finished {method} scan for folder {dir_name}', section)
if method == 'manage':
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
message=f'{section}: Successfully post-processed {input_name}',
status_code=0,
)
elif section == 'Radarr':
try:
scan_id = int(result['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
logger.debug(f'Scan started with id: {scan_id}', section)
except Exception as e:
logger.warning('No scan id was returned due to: {0}'.format(e), section)
logger.warning(f'No scan id was returned due to: {e}', section)
scan_id = None
elif section == 'Watcher3' and result['status'] == 'finished':
logger.postprocess('Watcher3 updated status to {0}'.format(result['tasks']['update_movie_status']))
if result['tasks']['update_movie_status'] == 'Finished':
update_movie_status = result['tasks']['update_movie_status']
logger.postprocess(f'Watcher3 updated status to {update_movie_status}')
if update_movie_status == 'Finished':
return ProcessResult(
message='{0}: Successfully post-processed {1}'.format(section, input_name),
message=f'{section}: Successfully post-processed {input_name}',
status_code=status,
)
else:
return ProcessResult(
message='{0}: Failed to post-process - changed status to {1}'.format(section, result['tasks']['update_movie_status']),
message=f'{section}: Failed to post-process - changed status to {update_movie_status}',
status_code=1,
)
else:
logger.error('FAILED: {0} scan was unable to finish for folder {1}. exiting!'.format(method, dir_name),
logger.error(f'FAILED: {method} scan was unable to finish for folder {dir_name}. exiting!',
section)
return ProcessResult(
message='{0}: Failed to post-process - Server did not return success'.format(section),
message=f'{section}: Failed to post-process - Server did not return success',
status_code=1,
)
else:
core.FAILED = True
logger.postprocess('FAILED DOWNLOAD DETECTED FOR {0}'.format(input_name), section)
logger.postprocess(f'FAILED DOWNLOAD DETECTED FOR {input_name}', section)
if failure_link:
report_nzb(failure_link, client_agent)
if section == 'Radarr':
logger.postprocess('SUCCESS: Sending failed download to {0} for CDH processing'.format(section), section)
logger.postprocess(f'SUCCESS: Sending failed download to {section} for CDH processing', section)
return ProcessResult(
message='{0}: Sending failed download back to {0}'.format(section),
status_code=1, # Return as failed to flag this in the downloader.
) # Return failed flag, but log the event as successful.
elif section == 'Watcher3':
logger.postprocess('Sending failed download to {0} for CDH processing'.format(section), section)
logger.postprocess(f'Sending failed download to {section} for CDH processing', section)
path = remote_dir(dir_name) if remote_path else dir_name
if input_name and os.path.isfile(os.path.join(dir_name, input_name)):
path = os.path.join(path, input_name)
payload = {'apikey': apikey, 'path': path, 'guid': download_id, 'mode': 'failed'}
r = requests.post(base_url, data=payload, verify=False, timeout=(30, 1800))
result = r.json()
logger.postprocess('Watcher3 response: {0}'.format(result))
logger.postprocess(f'Watcher3 response: {result}')
if result['status'] == 'finished':
return ProcessResult(
message='{0}: Sending failed download back to {0}'.format(section),
@@ -329,11 +330,11 @@ def process(
) # Return failed flag, but log the event as successful.
if delete_failed and os.path.isdir(dir_name) and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
logger.postprocess(f'Deleting failed files and folder {dir_name}', section)
remove_dir(dir_name)
if not release_id and not media_id:
logger.error('Could not find a downloaded movie in the database matching {0}, exiting!'.format(input_name),
logger.error(f'Could not find a downloaded movie in the database matching {input_name}, exiting!',
section)
return ProcessResult(
message='{0}: Failed to post-process - Failed download not found in {0}'.format(section),
@@ -341,17 +342,17 @@ def process(
)
if release_id:
logger.postprocess('Setting failed release {0} to ignored ...'.format(input_name), section)
logger.postprocess(f'Setting failed release {input_name} to ignored ...', section)
url = '{url}release.ignore'.format(url=base_url)
url = f'{base_url}release.ignore'
params = {'id': release_id}
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params), section)
logger.debug(f'Opening URL: {url} with PARAMS: {params}', section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 120))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url), section)
logger.error(f'Unable to open URL {url}', section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to connect to {0}'.format(section),
status_code=1,
@@ -359,29 +360,29 @@ def process(
result = r.json()
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult(
status_code=1,
message='{0}: Failed to post-process - Server returned status {1}'.format(section, r.status_code),
message=f'{section}: Failed to post-process - Server returned status {r.status_code}',
)
elif result['success']:
logger.postprocess('SUCCESS: {0} has been set to ignored ...'.format(input_name), section)
logger.postprocess(f'SUCCESS: {input_name} has been set to ignored ...', section)
else:
logger.warning('FAILED: Unable to set {0} to ignored!'.format(input_name), section)
logger.warning(f'FAILED: Unable to set {input_name} to ignored!', section)
return ProcessResult(
message='{0}: Failed to post-process - Unable to set {1} to ignored'.format(section, input_name),
message=f'{section}: Failed to post-process - Unable to set {input_name} to ignored',
status_code=1,
)
logger.postprocess('Trying to snatch the next highest ranked release.', section)
url = '{0}movie.searcher.try_next'.format(base_url)
logger.debug('Opening URL: {0}'.format(url), section)
url = f'{base_url}movie.searcher.try_next'
logger.debug(f'Opening URL: {url}', section)
try:
r = requests.get(url, params={'media_id': media_id}, verify=False, timeout=(30, 600))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url), section)
logger.error(f'Unable to open URL {url}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Unable to connect to '
f'{section}'
@@ -389,7 +390,7 @@ def process(
result = r.json()
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Server returned status '
f'{r.status_code}'
@@ -431,25 +432,23 @@ def process(
release_status_new = release[release_id]['status']
if release_status_old is None: # we didn't have a release before, but now we do.
title = release[release_id]['title']
logger.postprocess('SUCCESS: Movie {0} has now been added to CouchPotato with release status of [{1}]'.format(
title, str(release_status_new).upper()), section)
logger.postprocess(f'SUCCESS: Movie {title} has now been added to CouchPotato with release status of [{str(release_status_new).upper()}]', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
if release_status_new != release_status_old:
logger.postprocess('SUCCESS: Release {0} has now been marked with a status of [{1}]'.format(
release_id, str(release_status_new).upper()), section)
logger.postprocess(f'SUCCESS: Release {release_id} has now been marked with a status of [{str(release_status_new).upper()}]', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
except Exception:
pass
elif scan_id:
url = '{0}/{1}'.format(base_url, scan_id)
url = f'{base_url}/{scan_id}'
command_status = command_complete(url, params, headers, section)
if command_status:
logger.debug('The Scan command return status: {0}'.format(command_status), section)
logger.debug(f'The Scan command return status: {command_status}', section)
if command_status in ['completed']:
logger.debug('The Scan command has completed successfully. Renaming was successful.', section)
return ProcessResult.success(
@@ -463,15 +462,13 @@ def process(
# )
if not os.path.isdir(dir_name):
logger.postprocess('SUCCESS: Input Directory [{0}] has been processed and removed'.format(
dir_name), section)
logger.postprocess(f'SUCCESS: Input Directory [{dir_name}] has been processed and removed', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
elif not list_media_files(dir_name, media=True, audio=False, meta=False, archives=True):
logger.postprocess('SUCCESS: Input Directory [{0}] has no remaining media files. This has been fully processed.'.format(
dir_name), section)
logger.postprocess(f'SUCCESS: Input Directory [{dir_name}] has no remaining media files. This has been fully processed.', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
@@ -481,13 +478,13 @@ def process(
# The status hasn't changed. we have waited wait_for minutes which is more than enough. uTorrent can resume seeding now.
if section == 'Radarr' and completed_download_handling(url2, headers, section=section):
logger.debug('The Scan command did not return status completed, but complete Download Handling is enabled. Passing back to {0}.'.format(section), section)
logger.debug(f'The Scan command did not return status completed, but complete Download Handling is enabled. Passing back to {section}.', section)
return ProcessResult.success(
f'{section}: Complete DownLoad Handling is enabled. Passing back '
f'to {section}'
)
logger.warning(
'{0} does not appear to have changed status after {1} minutes, Please check your logs.'.format(input_name, wait_for),
f'{input_name} does not appear to have changed status after {wait_for} minutes, Please check your logs.',
section,
)
@@ -512,13 +509,13 @@ def get_release(base_url, imdb_id=None, download_id=None, release_id=None):
logger.debug('No information available to filter CP results')
return results
url = '{0}{1}'.format(base_url, cmd)
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params))
url = f'{base_url}{cmd}'
logger.debug(f'Opening URL: {url} with PARAMS: {params}')
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 60))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url))
logger.error(f'Unable to open URL {url}')
return results
try:
@@ -527,14 +524,15 @@ def get_release(base_url, imdb_id=None, download_id=None, release_id=None):
# ValueError catches simplejson's JSONDecodeError and json's ValueError
logger.error('CouchPotato returned the following non-json data')
for line in r.iter_lines():
logger.error('{0}'.format(line))
logger.error(line)
return results
if not result['success']:
if 'error' in result:
logger.error('{0}'.format(result['error']))
logger.error(result['error'])
else:
logger.error('no media found for id {0}'.format(params['id']))
id_param = params['id']
logger.error(f'no media found for id {id_param}')
return results
# Gather release info and return it back, no need to narrow results

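A note on why the migrated code above binds temporaries like `id_param = params['id']` instead of inlining the lookup: before Python 3.12, an f-string could not reuse its own quote character inside a replacement field, so `f'... {params['id']}'` was a syntax error. A minimal sketch of the workaround (variable names illustrative, not from the commit):

```python
# Pre-3.12 f-strings reject same-quote reuse inside replacement fields,
# so subscript expressions are bound to a temporary first.
params = {'id': 42}

id_param = params['id']
message = f'no media found for id {id_param}'

# Equivalent old-style formatting, as used before this commit:
message2 = 'no media found for id {0}'.format(params['id'])
assert message == message2  # both render the same text
```

The same pattern appears later in the diff for `result['tasks']['update_movie_status']`.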
View file

@@ -94,7 +94,7 @@ def process(
input_name, dir_name = convert_to_ascii(input_name, dir_name)
if not list_media_files(dir_name, media=False, audio=True, meta=False, archives=False) and list_media_files(dir_name, media=False, audio=False, meta=False, archives=True) and extract:
logger.debug('Checking for archives to extract in directory: {0}'.format(dir_name))
logger.debug(f'Checking for archives to extract in directory: {dir_name}')
core.extract_files(dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
@@ -125,7 +125,7 @@ def process(
return res
# The status hasn't changed. uTorrent can resume seeding now.
logger.warning('The music album does not appear to have changed status after {0} minutes. Please check your Logs'.format(wait_for), section)
logger.warning(f'The music album does not appear to have changed status after {wait_for} minutes. Please check your Logs', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - No change in wanted status'
)
@@ -135,17 +135,17 @@ def process(
url = core.utils.common.create_url(scheme, host, port, route)
headers = {'X-Api-Key': apikey}
if remote_path:
logger.debug('remote_path: {0}'.format(remote_dir(dir_name)), section)
logger.debug(f'remote_path: {remote_dir(dir_name)}', section)
data = {'name': 'Rename', 'path': remote_dir(dir_name)}
else:
logger.debug('path: {0}'.format(dir_name), section)
logger.debug(f'path: {dir_name}', section)
data = {'name': 'Rename', 'path': dir_name}
data = json.dumps(data)
try:
logger.debug('Opening URL: {0} with data: {1}'.format(url, data), section)
logger.debug(f'Opening URL: {url} with data: {data}', section)
r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), section)
logger.error(f'Unable to open URL: {url}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Unable to connect to '
f'{section}'
@@ -154,16 +154,16 @@ def process(
try:
res = r.json()
scan_id = int(res['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
logger.debug(f'Scan started with id: {scan_id}', section)
except Exception as e:
logger.warning('No scan id was returned due to: {0}'.format(e), section)
logger.warning(f'No scan id was returned due to: {e}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Unable to start scan'
)
n = 0
params = {}
url = '{0}/{1}'.format(url, scan_id)
url = f'{url}/{scan_id}'
while n < 6: # set up wait_for minutes to see if command completes..
time.sleep(10 * wait_for)
command_status = command_complete(url, params, headers, section)
@@ -171,9 +171,9 @@ def process(
break
n += 1
if command_status:
logger.debug('The Scan command return status: {0}'.format(command_status), section)
logger.debug(f'The Scan command return status: {command_status}', section)
if not os.path.exists(dir_name):
logger.debug('The directory {0} has been removed. Renaming was successful.'.format(dir_name), section)
logger.debug(f'The directory {dir_name} has been removed. Renaming was successful.', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
@@ -188,7 +188,7 @@ def process(
# f'{section}: Failed to post-process {input_name}'
# )
else:
logger.debug('The Scan command did not return status completed. Passing back to {0} to attempt complete download handling.'.format(section), section)
logger.debug(f'The Scan command did not return status completed. Passing back to {section} to attempt complete download handling.', section)
return ProcessResult(
message=f'{section}: Passing back to {section} to attempt '
f'Complete Download Handling',
@@ -197,7 +197,7 @@ def process(
else:
if section == 'Lidarr':
logger.postprocess('FAILED: The download failed. Sending failed download to {0} for CDH processing'.format(section), section)
logger.postprocess(f'FAILED: The download failed. Sending failed download to {section} for CDH processing', section)
# Return as failed to flag this in the downloader.
return ProcessResult.failure(
f'{section}: Download Failed. Sending back to {section}'
@@ -205,7 +205,7 @@ def process(
else:
logger.warning('FAILED DOWNLOAD DETECTED', section)
if delete_failed and os.path.isdir(dir_name) and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
logger.postprocess(f'Deleting failed files and folder {dir_name}', section)
remove_dir(dir_name)
# Return as failed to flag this in the downloader.
return ProcessResult.failure(
@@ -215,14 +215,14 @@ def process(
def get_status(url, apikey, dir_name):
logger.debug('Attempting to get current status for release:{0}'.format(os.path.basename(dir_name)))
logger.debug(f'Attempting to get current status for release:{os.path.basename(dir_name)}')
params = {
'apikey': apikey,
'cmd': 'getHistory',
}
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params))
logger.debug(f'Opening URL: {url} with PARAMS: {params}')
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 120))
@@ -244,30 +244,30 @@ def get_status(url, apikey, dir_name):
def force_process(params, url, apikey, input_name, dir_name, section, wait_for):
release_status = get_status(url, apikey, dir_name)
if not release_status:
logger.error('Could not find a status for {0}, is it in the wanted list ?'.format(input_name), section)
logger.error(f'Could not find a status for {input_name}, is it in the wanted list ?', section)
logger.debug('Opening URL: {0} with PARAMS: {1}'.format(url, params), section)
logger.debug(f'Opening URL: {url} with PARAMS: {params}', section)
try:
r = requests.get(url, params=params, verify=False, timeout=(30, 300))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url), section)
logger.error(f'Unable to open URL {url}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Unable to connect to '
f'{section}'
)
logger.debug('Result: {0}'.format(r.text), section)
logger.debug(f'Result: {r.text}', section)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Server returned status {r.status_code}'
)
elif r.text == 'OK':
logger.postprocess('SUCCESS: Post-Processing started for {0} in folder {1} ...'.format(input_name, dir_name), section)
logger.postprocess(f'SUCCESS: Post-Processing started for {input_name} in folder {dir_name} ...', section)
else:
logger.error('FAILED: Post-Processing has NOT started for {0} in folder {1}. exiting!'.format(input_name, dir_name), section)
logger.error(f'FAILED: Post-Processing has NOT started for {input_name} in folder {dir_name}. exiting!', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Returned log from {section} '
f'was not as expected.'
@@ -278,12 +278,12 @@ def force_process(params, url, apikey, input_name, dir_name, section, wait_for):
while time.time() < timeout:
current_status = get_status(url, apikey, dir_name)
if current_status is not None and current_status != release_status: # Something has changed. CPS must have processed this movie.
logger.postprocess('SUCCESS: This release is now marked as status [{0}]'.format(current_status), section)
logger.postprocess(f'SUCCESS: This release is now marked as status [{current_status}]', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
if not os.path.isdir(dir_name):
logger.postprocess('SUCCESS: The input directory {0} has been removed Processing must have finished.'.format(dir_name), section)
logger.postprocess(f'SUCCESS: The input directory {dir_name} has been removed Processing must have finished.', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)

View file

@@ -136,7 +136,7 @@ def process(
# Now check if tv files exist in destination.
if not list_media_files(dir_name, media=True, audio=False, meta=False, archives=False):
if list_media_files(dir_name, media=False, audio=False, meta=False, archives=True) and extract:
logger.debug('Checking for archives to extract in directory: {0}'.format(dir_name))
logger.debug(f'Checking for archives to extract in directory: {dir_name}')
core.extract_files(dir_name)
input_name, dir_name = convert_to_ascii(input_name, dir_name)
@@ -167,13 +167,13 @@ def process(
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
print('[NZB] MARK=BAD')
if good_files == num_files:
logger.debug('Video marked as failed due to missing required language: {0}'.format(core.REQUIRE_LAN), section)
logger.debug(f'Video marked as failed due to missing required language: {core.REQUIRE_LAN}', section)
else:
logger.debug('Video marked as failed due to missing playable audio or video', section)
if good_files < num_files and failure_link: # only report corrupt files
failure_link += '&corrupt=true'
elif client_agent == 'manual':
logger.warning('No media files found in directory {0} to manually process.'.format(dir_name), section)
logger.warning(f'No media files found in directory {dir_name} to manually process.', section)
# Success (as far as this script is concerned)
return ProcessResult.success()
elif nzb_extraction_by == 'Destination':
@@ -187,7 +187,7 @@ def process(
status = 1
failed = 1
else:
logger.warning('No media files found in directory {0}. Processing this as a failed download'.format(dir_name), section)
logger.warning(f'No media files found in directory {dir_name}. Processing this as a failed download', section)
status = 1
failed = 1
if 'NZBOP_VERSION' in os.environ and os.environ['NZBOP_VERSION'][0:5] >= '14.0':
@@ -196,15 +196,15 @@ def process(
if status == 0 and core.TRANSCODE == 1: # only transcode successful downloads
result, new_dir_name = transcoder.transcode_directory(dir_name)
if result == 0:
logger.debug('SUCCESS: Transcoding succeeded for files in {0}'.format(dir_name), section)
logger.debug(f'SUCCESS: Transcoding succeeded for files in {dir_name}', section)
dir_name = new_dir_name
logger.debug('Config setting \'chmodDirectory\' currently set to {0}'.format(oct(chmod_directory)), section)
logger.debug(f'Config setting \'chmodDirectory\' currently set to {oct(chmod_directory)}', section)
if chmod_directory:
logger.info('Attempting to set the octal permission of \'{0}\' on directory \'{1}\''.format(oct(chmod_directory), dir_name), section)
logger.info(f'Attempting to set the octal permission of \'{oct(chmod_directory)}\' on directory \'{dir_name}\'', section)
core.rchmod(dir_name, chmod_directory)
else:
logger.error('FAILED: Transcoding failed for files in {0}'.format(dir_name), section)
logger.error(f'FAILED: Transcoding failed for files in {dir_name}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Transcoding failed'
)
@@ -290,17 +290,17 @@ def process(
if failure_link:
report_nzb(failure_link, client_agent)
if 'failed' in fork_params:
logger.postprocess('FAILED: The download failed. Sending \'failed\' process request to {0} branch'.format(fork), section)
logger.postprocess(f'FAILED: The download failed. Sending \'failed\' process request to {fork} branch', section)
elif section == 'NzbDrone':
logger.postprocess('FAILED: The download failed. Sending failed download to {0} for CDH processing'.format(fork), section)
logger.postprocess(f'FAILED: The download failed. Sending failed download to {fork} for CDH processing', section)
# Return as failed to flag this in the downloader.
return ProcessResult.failure(
f'{section}: Download Failed. Sending back to {section}'
)
else:
logger.postprocess('FAILED: The download failed. {0} branch does not handle failed downloads. Nothing to process'.format(fork), section)
logger.postprocess(f'FAILED: The download failed. {fork} branch does not handle failed downloads. Nothing to process', section)
if delete_failed and os.path.isdir(dir_name) and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
logger.postprocess(f'Deleting failed files and folder {dir_name}', section)
remove_dir(dir_name)
# Return as failed to flag this in the downloader.
return ProcessResult.failure(
@@ -330,10 +330,10 @@ def process(
headers = {'X-Api-Key': apikey}
# params = {'sortKey': 'series.title', 'page': 1, 'pageSize': 1, 'sortDir': 'asc'}
if remote_path:
logger.debug('remote_path: {0}'.format(remote_dir(dir_name)), section)
logger.debug(f'remote_path: {remote_dir(dir_name)}', section)
data = {'name': 'DownloadedEpisodesScan', 'path': remote_dir(dir_name), 'downloadClientId': download_id, 'importMode': import_mode}
else:
logger.debug('path: {0}'.format(dir_name), section)
logger.debug(f'path: {dir_name}', section)
data = {'name': 'DownloadedEpisodesScan', 'path': dir_name, 'downloadClientId': download_id, 'importMode': import_mode}
if not download_id:
data.pop('downloadClientId')
@@ -346,7 +346,7 @@ def process(
else:
s = requests.Session()
logger.debug('Opening URL: {0} with params: {1}'.format(url, fork_params), section)
logger.debug(f'Opening URL: {url} with params: {fork_params}', section)
if not apikey and username and password:
login = f'{web_root}/login'
login_params = {'username': username, 'password': password}
@@ -381,17 +381,17 @@ def process(
r = s.get(url, params=params, stream=True, verify=False, timeout=(30, 1800))
elif section == 'NzbDrone':
logger.debug('Opening URL: {0} with data: {1}'.format(url, data), section)
logger.debug(f'Opening URL: {url} with data: {data}', section)
r = requests.post(url, data=data, headers=headers, stream=True, verify=False, timeout=(30, 1800))
except requests.ConnectionError:
logger.error('Unable to open URL: {0}'.format(url), section)
logger.error(f'Unable to open URL: {url}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Unable to connect to '
f'{section}'
)
if r.status_code not in [requests.codes.ok, requests.codes.created, requests.codes.accepted]:
logger.error('Server returned status {0}'.format(r.status_code), section)
logger.error(f'Server returned status {r.status_code}', section)
return ProcessResult.failure(
f'{section}: Failed to post-process - Server returned status '
f'{r.status_code}'
@@ -408,7 +408,7 @@ def process(
for line in r.iter_lines():
if line:
line = line.decode('utf-8')
logger.postprocess('{0}'.format(line), section)
logger.postprocess(line, section)
if 'Moving file from' in line:
input_name = os.path.split(line)[1]
if 'added to the queue' in line:
@@ -428,15 +428,15 @@ def process(
try:
res = r.json()
scan_id = int(res['id'])
logger.debug('Scan started with id: {0}'.format(scan_id), section)
logger.debug(f'Scan started with id: {scan_id}', section)
started = True
except Exception as e:
logger.warning('No scan id was returned due to: {0}'.format(e), section)
logger.warning(f'No scan id was returned due to: {e}', section)
scan_id = None
started = False
if status != 0 and delete_failed and not os.path.dirname(dir_name) == dir_name:
logger.postprocess('Deleting failed files and folder {0}'.format(dir_name), section)
logger.postprocess(f'Deleting failed files and folder {dir_name}', section)
remove_dir(dir_name)
if success:
@@ -446,7 +446,7 @@ def process(
elif section == 'NzbDrone' and started:
n = 0
params = {}
url = '{0}/{1}'.format(url, scan_id)
url = f'{url}/{scan_id}'
while n < 6: # set up wait_for minutes to see if command completes..
time.sleep(10 * wait_for)
command_status = command_complete(url, params, headers, section)
@@ -454,9 +454,9 @@ def process(
break
n += 1
if command_status:
logger.debug('The Scan command return status: {0}'.format(command_status), section)
logger.debug(f'The Scan command return status: {command_status}', section)
if not os.path.exists(dir_name):
logger.debug('The directory {0} has been removed. Renaming was successful.'.format(dir_name), section)
logger.debug(f'The directory {dir_name} has been removed. Renaming was successful.', section)
return ProcessResult.success(
f'{section}: Successfully post-processed {input_name}'
)
@@ -473,7 +473,7 @@ def process(
url2 = core.utils.common.create_url(scheme, host, port, route)
if completed_download_handling(url2, headers, section=section):
logger.debug('The Scan command did not return status completed, but complete Download Handling is enabled. Passing back to {0}.'.format(section),
logger.debug(f'The Scan command did not return status completed, but complete Download Handling is enabled. Passing back to {section}.',
section)
return ProcessResult(
message=f'{section}: Complete DownLoad Handling is enabled. '

View file

@@ -9,7 +9,7 @@ import core
from core import logger
class Section(configobj.Section, object):
class Section(configobj.Section):
def isenabled(self):
# checks if subsection enabled, returns true/false if subsection specified otherwise returns true/false in {}
if not self.sections:
@@ -96,14 +96,12 @@ class ConfigObj(configobj.ConfigObj, Section):
def find_key(node, kv):
if isinstance(node, list):
for i in node:
for x in ConfigObj.find_key(i, kv):
yield x
yield from ConfigObj.find_key(i, kv)
elif isinstance(node, dict):
if kv in node:
yield node[kv]
for j in node.values():
for x in ConfigObj.find_key(j, kv):
yield x
yield from ConfigObj.find_key(j, kv)
@staticmethod
def migrate():
@@ -117,7 +115,7 @@ class ConfigObj(configobj.ConfigObj, Section):
shutil.copyfile(core.CONFIG_SPEC_FILE, core.CONFIG_FILE)
CFG_OLD = config(core.CONFIG_FILE)
except Exception as error:
logger.error('Error {msg} when copying to .cfg'.format(msg=error))
logger.error(f'Error {error} when copying to .cfg')
try:
# check for autoProcessMedia.cfg.spec and create if it does not exist
@ -125,7 +123,7 @@ class ConfigObj(configobj.ConfigObj, Section):
shutil.copyfile(core.CONFIG_FILE, core.CONFIG_SPEC_FILE)
CFG_NEW = config(core.CONFIG_SPEC_FILE)
except Exception as error:
logger.error('Error {msg} when copying to .spec'.format(msg=error))
logger.error(f'Error {error} when copying to .spec')
# check for autoProcessMedia.cfg and autoProcessMedia.cfg.spec and if they don't exist return and fail
if CFG_NEW is None or CFG_OLD is None:
@ -264,7 +262,7 @@ class ConfigObj(configobj.ConfigObj, Section):
CFG_NEW['SickBeard']['tv']['fork'] = 'auto'
# create a backup of our old config
CFG_OLD.filename = '{config}.old'.format(config=core.CONFIG_FILE)
CFG_OLD.filename = f'{core.CONFIG_FILE}.old'
CFG_OLD.write()
# write our new config to autoProcessMedia.cfg
@ -315,7 +313,7 @@ class ConfigObj(configobj.ConfigObj, Section):
env_keys = ['AUTO_UPDATE', 'CHECK_MEDIA', 'REQUIRE_LAN', 'SAFE_MODE', 'NO_EXTRACT_FAILED']
cfg_keys = ['auto_update', 'check_media', 'require_lan', 'safe_mode', 'no_extract_failed']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
key = f'NZBPO_{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -325,7 +323,7 @@ class ConfigObj(configobj.ConfigObj, Section):
env_keys = ['MOUNTPOINTS']
cfg_keys = ['mount_points']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
key = f'NZBPO_{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -339,7 +337,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'wait_for', 'watch_dir', 'omdbapikey']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_CPS{index}'.format(index=env_keys[index])
key = f'NZBPO_CPS{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
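Each of these blocks walks two parallel lists, mapping `NZBPO_*` environment variables onto config options with the same f-string key construction. The pattern can be sketched as follows (the `zip` form is a suggested tightening of the index loop, and the variable names are hypothetical):

```python
import os

env_keys = ['ENABLED', 'HOST', 'PORT']
cfg_keys = ['enabled', 'host', 'port']
cfg = {}

# Simulate an NZBGet-style environment override
os.environ['NZBPO_CPSHOST'] = 'localhost'

for env_key, option in zip(env_keys, cfg_keys):
    key = f'NZBPO_CPS{env_key}'       # same f-string construction as above
    if key in os.environ:
        cfg[option] = os.environ[key]

assert cfg == {'host': 'localhost'}
```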
@ -360,7 +358,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'wait_for', 'watch_dir', 'omdbapikey']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_W3{index}'.format(index=env_keys[index])
key = f'NZBPO_W3{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -381,7 +379,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'nzbExtractionBy', 'remote_path', 'process_method']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_SB{index}'.format(index=env_keys[index])
key = f'NZBPO_SB{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -402,7 +400,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'delete_failed', 'Torrent_NoLink', 'nzbExtractionBy', 'remote_path', 'process_method']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_SR{index}'.format(index=env_keys[index])
key = f'NZBPO_SR{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -421,7 +419,7 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'wait_for', 'watch_dir', 'remote_path', 'delete_failed']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_HP{index}'.format(index=env_keys[index])
key = f'NZBPO_HP{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -440,7 +438,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_MY{index}'.format(index=env_keys[index])
key = f'NZBPO_MY{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -455,7 +453,7 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'watch_dir', 'library', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_GZ{index}'.format(index=env_keys[index])
key = f'NZBPO_GZ{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -470,7 +468,7 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_keys = ['enabled', 'apikey', 'host', 'port', 'ssl', 'web_root', 'watch_dir', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_LL{index}'.format(index=env_keys[index])
key = f'NZBPO_LL{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -488,7 +486,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'delete_failed', 'remote_path', 'importMode']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_ND{index}'.format(index=env_keys[index])
key = f'NZBPO_ND{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -510,7 +508,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'delete_failed', 'remote_path', 'omdbapikey', 'importMode']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_RA{index}'.format(index=env_keys[index])
key = f'NZBPO_RA{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -531,7 +529,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'Torrent_NoLink', 'nzbExtractionBy', 'wait_for', 'delete_failed', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_LI{index}'.format(index=env_keys[index])
key = f'NZBPO_LI{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -546,7 +544,7 @@ class ConfigObj(configobj.ConfigObj, Section):
env_keys = ['COMPRESSEDEXTENSIONS', 'MEDIAEXTENSIONS', 'METAEXTENSIONS']
cfg_keys = ['compressedExtensions', 'mediaExtensions', 'metaExtensions']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
key = f'NZBPO_{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -556,7 +554,7 @@ class ConfigObj(configobj.ConfigObj, Section):
env_keys = ['NICENESS', 'IONICE_CLASS', 'IONICE_CLASSDATA']
cfg_keys = ['niceness', 'ionice_class', 'ionice_classdata']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
key = f'NZBPO_{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -584,7 +582,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'outputSubtitleCodec', 'outputAudioChannels', 'outputAudioTrack2Channels',
'outputAudioOtherChannels', 'outputVideoResolution']
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
key = f'NZBPO_{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -594,7 +592,7 @@ class ConfigObj(configobj.ConfigObj, Section):
env_keys = ['WAKE', 'HOST', 'PORT', 'MAC']
cfg_keys = ['wake', 'host', 'port', 'mac']
for index in range(len(env_keys)):
key = 'NZBPO_WOL{index}'.format(index=env_keys[index])
key = f'NZBPO_WOL{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -608,7 +606,7 @@ class ConfigObj(configobj.ConfigObj, Section):
'user_script_successCodes', 'user_script_clean', 'delay', 'remote_path']
if env_cat_key in os.environ:
for index in range(len(env_keys)):
key = 'NZBPO_{index}'.format(index=env_keys[index])
key = f'NZBPO_{env_keys[index]}'
if key in os.environ:
option = cfg_keys[index]
value = os.environ[key]
@ -618,14 +616,14 @@ class ConfigObj(configobj.ConfigObj, Section):
cfg_new[section][os.environ[env_cat_key]]['enabled'] = 1
except Exception as error:
logger.debug('Error {msg} when applying NZBGet config'.format(msg=error))
logger.debug(f'Error {error} when applying NZBGet config')
try:
# write our new config to autoProcessMedia.cfg
cfg_new.filename = core.CONFIG_FILE
cfg_new.write()
except Exception as error:
logger.debug('Error {msg} when writing changes to .cfg'.format(msg=error))
logger.debug(f'Error {error} when writing changes to .cfg')
return cfg_new


@ -73,7 +73,7 @@ def extract(file_path, output_destination):
if ext[1] in ('.gz', '.bz2', '.lzma'):
# Check if this is a tar
if os.path.splitext(ext[0])[1] == '.tar':
cmd = extract_commands['.tar{ext}'.format(ext=ext[1])]
cmd = extract_commands[f'.tar{ext[1]}']
else: # Try gunzip
cmd = extract_commands[ext[1]]
elif ext[1] in ('.1', '.01', '.001') and os.path.splitext(ext[0])[1] in ('.rar', '.zip', '.7z'):
@ -137,7 +137,7 @@ def extract(file_path, output_destination):
continue
cmd2 = cmd
# append password here.
passcmd = '-p{pwd}'.format(pwd=password)
passcmd = f'-p{password}'
cmd2.append(passcmd)
p = Popen(cmd2, stdout=devnull, stderr=devnull, startupinfo=info) # should extract files fine.
res = p.wait()


@ -45,6 +45,6 @@ class GitHub:
"""
return self._access_api(
['repos', self.github_repo_user, self.github_repo, 'compare',
'{base}...{head}'.format(base=base, head=head)],
f'{base}...{head}'],
params={'per_page': per_page},
)


@ -133,7 +133,7 @@ class NTMRotatingLogHandler:
i: Log number to use
"""
return self.log_file_path + ('.{0}'.format(i) if i else '')
return self.log_file_path + (f'.{i}' if i else '')
def _num_logs(self):
"""
@ -189,9 +189,9 @@ class NTMRotatingLogHandler:
self.writes_since_check += 1
try:
message = '{0}: {1}'.format(section.upper(), to_log)
message = f'{section.upper()}: {to_log}'
except UnicodeError:
message = '{0}: Message contains non-utf-8 string'.format(section.upper())
message = f'{section.upper()}: Message contains non-utf-8 string'
out_line = message


@ -17,7 +17,7 @@ def db_filename(filename='nzbtomedia.db', suffix=None):
@return: the correct location of the database file.
"""
if suffix:
filename = '{0}.{1}'.format(filename, suffix)
filename = f'{filename}.{suffix}'
return core.os.path.join(core.APP_ROOT, filename)
@ -51,7 +51,7 @@ class DBConnection:
while attempt < 5:
try:
if args is None:
logger.log('{name}: {query}'.format(name=self.filename, query=query), logger.DB)
logger.log(f'{self.filename}: {query}', logger.DB)
cursor = self.connection.cursor()
cursor.execute(query)
sql_result = cursor.fetchone()[0]
@ -66,14 +66,14 @@ class DBConnection:
break
except sqlite3.OperationalError as error:
if 'unable to open database file' in error.args[0] or 'database is locked' in error.args[0]:
logger.log('DB error: {msg}'.format(msg=error), logger.WARNING)
logger.log(f'DB error: {error}', logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log('DB error: {msg}'.format(msg=error), logger.ERROR)
logger.log(f'DB error: {error}', logger.ERROR)
raise
except sqlite3.DatabaseError as error:
logger.log('Fatal error executing query: {msg}'.format(msg=error), logger.ERROR)
logger.log(f'Fatal error executing query: {error}', logger.ERROR)
raise
return sql_result
@ -94,26 +94,26 @@ class DBConnection:
sql_result.append(self.connection.execute(qu[0]))
elif len(qu) > 1:
if log_transaction:
logger.log('{query} with args {args}'.format(query=qu[0], args=qu[1]), logger.DEBUG)
logger.log(f'{qu[0]} with args {qu[1]}', logger.DEBUG)
sql_result.append(self.connection.execute(qu[0], qu[1]))
self.connection.commit()
logger.log('Transaction with {x} query\'s executed'.format(x=len(querylist)), logger.DEBUG)
logger.log(f'Transaction with {len(querylist)} query\'s executed', logger.DEBUG)
return sql_result
except sqlite3.OperationalError as error:
sql_result = []
if self.connection:
self.connection.rollback()
if 'unable to open database file' in error.args[0] or 'database is locked' in error.args[0]:
logger.log('DB error: {msg}'.format(msg=error), logger.WARNING)
logger.log(f'DB error: {error}', logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log('DB error: {msg}'.format(msg=error), logger.ERROR)
logger.log(f'DB error: {error}', logger.ERROR)
raise
except sqlite3.DatabaseError as error:
if self.connection:
self.connection.rollback()
logger.log('Fatal error executing query: {msg}'.format(msg=error), logger.ERROR)
logger.log(f'Fatal error executing query: {error}', logger.ERROR)
raise
return sql_result
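The retry loops above treat two sqlite3 errors as transient ("unable to open database file", "database is locked") and everything else as fatal. A condensed, runnable sketch of that policy (the delay is shortened here; the real code sleeps one second between attempts):

```python
import sqlite3
import time

def run_with_retry(connection, query, attempts=5, delay=0):
    """Retry transient sqlite3 errors, mirroring the DBConnection loop above."""
    for attempt in range(attempts):
        try:
            return connection.execute(query).fetchall()
        except sqlite3.OperationalError as error:
            transient = ('unable to open database file' in error.args[0]
                         or 'database is locked' in error.args[0])
            if not transient or attempt == attempts - 1:
                raise               # fatal, or out of retries
            time.sleep(delay)

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (x)')
conn.execute('INSERT INTO t VALUES (1)')
assert run_with_retry(conn, 'SELECT x FROM t') == [(1,)]
```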
@ -128,7 +128,7 @@ class DBConnection:
while attempt < 5:
try:
if args is None:
logger.log('{name}: {query}'.format(name=self.filename, query=query), logger.DB)
logger.log(f'{self.filename}: {query}', logger.DB)
sql_result = self.connection.execute(query)
else:
logger.log('{name}: {query} with args {args}'.format
@ -139,14 +139,14 @@ class DBConnection:
break
except sqlite3.OperationalError as error:
if 'unable to open database file' in error.args[0] or 'database is locked' in error.args[0]:
logger.log('DB error: {msg}'.format(msg=error), logger.WARNING)
logger.log(f'DB error: {error}', logger.WARNING)
attempt += 1
time.sleep(1)
else:
logger.log('DB error: {msg}'.format(msg=error), logger.ERROR)
logger.log(f'DB error: {error}', logger.ERROR)
raise
except sqlite3.DatabaseError as error:
logger.log('Fatal error executing query: {msg}'.format(msg=error), logger.ERROR)
logger.log(f'Fatal error executing query: {error}', logger.ERROR)
raise
return sql_result
@ -164,7 +164,7 @@ class DBConnection:
def gen_params(my_dict):
return [
'{key} = ?'.format(key=k)
f'{k} = ?'
for k in my_dict.keys()
]
@ -194,7 +194,7 @@ class DBConnection:
def table_info(self, table_name):
# FIXME ? binding is not supported here, but I cannot find a way to escape a string manually
cursor = self.connection.execute('PRAGMA table_info({0})'.format(table_name))
cursor = self.connection.execute(f'PRAGMA table_info({table_name})')
return {
column['name']: {'type': column['type']}
for column in cursor
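The FIXME above notes that `PRAGMA` statements cannot take bound parameters, so the table name is interpolated directly into the f-string. One common mitigation, not present in the original code, is to whitelist the identifier before formatting:

```python
import re
import sqlite3

def table_info(connection, table_name):
    # PRAGMA does not support '?' placeholders, so validate the identifier
    # instead of binding it (hypothetical guard, not in the original code).
    if not re.fullmatch(r'[A-Za-z_][A-Za-z0-9_]*', table_name):
        raise ValueError(f'invalid table name: {table_name!r}')
    cursor = connection.execute(f'PRAGMA table_info({table_name})')
    # Each row is (cid, name, type, notnull, dflt_value, pk)
    return {row[1]: {'type': row[2]} for row in cursor}

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE media (name TEXT, size NUMERIC)')
assert table_info(conn, 'media') == {'name': {'type': 'TEXT'},
                                     'size': {'type': 'NUMERIC'}}
```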
@ -261,8 +261,8 @@ class SchemaUpgrade:
return column in self.connection.table_info(table_name)
def add_column(self, table, column, data_type='NUMERIC', default=0):
self.connection.action('ALTER TABLE {0} ADD {1} {2}'.format(table, column, data_type))
self.connection.action('UPDATE {0} SET {1} = ?'.format(table, column), (default,))
self.connection.action(f'ALTER TABLE {table} ADD {column} {data_type}')
self.connection.action(f'UPDATE {table} SET {column} = ?', (default,))
def check_db_version(self):
result = self.connection.select('SELECT db_version FROM db_version')


@ -11,9 +11,9 @@ def get_nzoid(input_name):
slots = []
logger.debug('Searching for nzoid from SAbnzbd ...')
if 'http' in core.SABNZBD_HOST:
base_url = '{0}:{1}/api'.format(core.SABNZBD_HOST, core.SABNZBD_PORT)
base_url = f'{core.SABNZBD_HOST}:{core.SABNZBD_PORT}/api'
else:
base_url = 'http://{0}:{1}/api'.format(core.SABNZBD_HOST, core.SABNZBD_PORT)
base_url = f'http://{core.SABNZBD_HOST}:{core.SABNZBD_PORT}/api'
url = base_url
params = {
'apikey': core.SABNZBD_APIKEY,
@ -47,7 +47,7 @@ def get_nzoid(input_name):
for nzo_id, name in slots:
if name in [input_name, clean_name]:
nzoid = nzo_id
logger.debug('Found nzoid: {0}'.format(nzoid))
logger.debug(f'Found nzoid: {nzoid}')
break
except Exception:
logger.warning('Data from SABnzbd could not be parsed')
@ -66,5 +66,5 @@ def report_nzb(failure_link, client_agent):
try:
requests.post(failure_link, headers=headers, timeout=(30, 300))
except Exception as e:
logger.error('Unable to open URL {0} due to {1}'.format(failure_link, e))
logger.error(f'Unable to open URL {failure_link} due to {e}')
return
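The branch above only prepends `http://` when the configured host carries no scheme. That logic can be sketched as a small helper (illustrative; the real code builds the URL inline from `core.SABNZBD_HOST` and `core.SABNZBD_PORT`):

```python
def sab_base_url(host, port):
    """Build the SABnzbd API base URL as the branch above does:
    prepend http:// only when no scheme is present."""
    if 'http' in host:
        return f'{host}:{port}/api'
    return f'http://{host}:{port}/api'

assert sab_base_url('localhost', 8080) == 'http://localhost:8080/api'
assert sab_base_url('https://nas.local', 9090) == 'https://nas.local:9090/api'
```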


@ -11,7 +11,7 @@ def configure_client():
user = core.DELUGE_USER
password = core.DELUGE_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
logger.debug(f'Connecting to {agent}: http://{host}:{port}')
client = DelugeRPCClient(host, port, user, password)
try:
client.connect()


@ -12,9 +12,9 @@ def configure_client():
password = core.QBITTORRENT_PASSWORD
logger.debug(
'Connecting to {0}: http://{1}:{2}'.format(agent, host, port),
f'Connecting to {agent}: http://{host}:{port}',
)
client = qBittorrentClient('http://{0}:{1}/'.format(host, port))
client = qBittorrentClient(f'http://{host}:{port}/')
try:
client.login(user, password)
except Exception:


@ -11,7 +11,7 @@ def configure_client():
user = core.SYNO_USER
password = core.SYNO_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
logger.debug(f'Connecting to {agent}: http://{host}:{port}')
try:
client = DownloadStation(host, port, user, password)
except Exception:


@ -11,7 +11,7 @@ def configure_client():
user = core.TRANSMISSION_USER
password = core.TRANSMISSION_PASSWORD
logger.debug('Connecting to {0}: http://{1}:{2}'.format(agent, host, port))
logger.debug(f'Connecting to {agent}: http://{host}:{port}')
try:
client = TransmissionClient(host, port, user, password)
except Exception:


@ -28,7 +28,7 @@ def create_torrent_class(client_agent):
def pause_torrent(client_agent, input_hash, input_id, input_name):
logger.debug('Stopping torrent {0} in {1} while processing'.format(input_name, client_agent))
logger.debug(f'Stopping torrent {input_name} in {client_agent} while processing')
try:
if client_agent == 'utorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.stop(input_hash)
@ -42,13 +42,13 @@ def pause_torrent(client_agent, input_hash, input_id, input_name):
core.TORRENT_CLASS.pause(input_hash)
time.sleep(5)
except Exception:
logger.warning('Failed to stop torrent {0} in {1}'.format(input_name, client_agent))
logger.warning(f'Failed to stop torrent {input_name} in {client_agent}')
def resume_torrent(client_agent, input_hash, input_id, input_name):
if not core.TORRENT_RESUME == 1:
return
logger.debug('Starting torrent {0} in {1}'.format(input_name, client_agent))
logger.debug(f'Starting torrent {input_name} in {client_agent}')
try:
if client_agent == 'utorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.start(input_hash)
@ -62,12 +62,12 @@ def resume_torrent(client_agent, input_hash, input_id, input_name):
core.TORRENT_CLASS.resume(input_hash)
time.sleep(5)
except Exception:
logger.warning('Failed to start torrent {0} in {1}'.format(input_name, client_agent))
logger.warning(f'Failed to start torrent {input_name} in {client_agent}')
def remove_torrent(client_agent, input_hash, input_id, input_name):
if core.DELETE_ORIGINAL == 1 or core.USE_LINK == 'move':
logger.debug('Deleting torrent {0} from {1}'.format(input_name, client_agent))
logger.debug(f'Deleting torrent {input_name} from {client_agent}')
try:
if client_agent == 'utorrent' and core.TORRENT_CLASS != '':
core.TORRENT_CLASS.removedata(input_hash)
@ -82,6 +82,6 @@ def remove_torrent(client_agent, input_hash, input_id, input_name):
core.TORRENT_CLASS.delete_permanently(input_hash)
time.sleep(5)
except Exception:
logger.warning('Failed to delete torrent {0} in {1}'.format(input_name, client_agent))
logger.warning(f'Failed to delete torrent {input_name} in {client_agent}')
else:
resume_torrent(client_agent, input_hash, input_id, input_name)


@ -10,7 +10,7 @@ def configure_client():
user = core.UTORRENT_USER
password = core.UTORRENT_PASSWORD
logger.debug('Connecting to {0}: {1}'.format(agent, web_ui))
logger.debug(f'Connecting to {agent}: {web_ui}')
try:
client = UTorrentClient(web_ui, user, password)
except Exception:


@ -33,13 +33,13 @@ def plex_update(category):
section = None
if not core.PLEX_SECTION:
return
logger.debug('Attempting to update Plex Library for category {0}.'.format(category), 'PLEX')
logger.debug(f'Attempting to update Plex Library for category {category}.', 'PLEX')
for item in core.PLEX_SECTION:
if item[0] == category:
section = item[1]
if section:
url = '{url}{section}/refresh?X-Plex-Token={token}'.format(url=url, section=section, token=core.PLEX_TOKEN)
url = f'{url}{section}/refresh?X-Plex-Token={core.PLEX_TOKEN}'
requests.get(url, timeout=(60, 120), verify=False)
logger.debug('Plex Library has been refreshed.', 'PLEX')
else:


@ -29,17 +29,17 @@ def import_subs(filename):
if not languages:
return
logger.info('Attempting to download subtitles for {0}'.format(filename), 'SUBTITLES')
logger.info(f'Attempting to download subtitles for {filename}', 'SUBTITLES')
try:
video = subliminal.scan_video(filename)
subtitles = subliminal.download_best_subtitles({video}, languages)
subliminal.save_subtitles(video, subtitles[video])
for subtitle in subtitles[video]:
subtitle_path = subliminal.subtitle.get_subtitle_path(video.name, subtitle.language)
os.chmod(subtitle_path, 0o644)
except Exception as e:
logger.error('Failed to download subtitles for {0} due to: {1}'.format(filename, e), 'SUBTITLES')
logger.error(f'Failed to download subtitles for {filename} due to: {e}', 'SUBTITLES')
def rename_subs(path):
filepaths = []
@ -78,17 +78,17 @@ def rename_subs(path):
# could call ffprobe to parse the sub information and get language if lan unknown here.
new_sub_name = name
else:
new_sub_name = '{name}.{lan}'.format(name=name, lan=str(lan))
new_sub_name = f'{name}.{str(lan)}'
new_sub = os.path.join(directory, new_sub_name) # full path and name less ext
if '{new_sub}{ext}'.format(new_sub=new_sub, ext=ext) in renamed: # If duplicate names, add unique number before ext.
if f'{new_sub}{ext}' in renamed: # If duplicate names, add unique number before ext.
for i in range(1,len(renamed)+1):
if '{new_sub}.{i}{ext}'.format(new_sub=new_sub, i=i, ext=ext) in renamed:
if f'{new_sub}.{i}{ext}' in renamed:
continue
new_sub = '{new_sub}.{i}'.format(new_sub=new_sub, i=i)
new_sub = f'{new_sub}.{i}'
break
new_sub = '{new_sub}{ext}'.format(new_sub=new_sub, ext=ext) # add extension now
new_sub = f'{new_sub}{ext}' # add extension now
if os.path.isfile(new_sub): # Don't copy over existing - final check.
logger.debug('Unable to rename sub file {old} as destination {new} already exists'.format(old=sub, new=new_sub))
logger.debug(f'Unable to rename sub file {sub} as destination {new_sub} already exists')
continue
logger.debug('Renaming sub file from {old} to {new}'.format
(old=sub, new=new_sub))
@ -96,5 +96,5 @@ def rename_subs(path):
try:
os.rename(sub, new_sub)
except Exception as error:
logger.error('Unable to rename sub file due to: {error}'.format(error=error))
logger.error(f'Unable to rename sub file due to: {error}')
return
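The duplicate-handling loop above tries `name.ext`, then `name.1.ext`, `name.2.ext`, and so on until it finds a name not already taken. A while-loop variant of that bounded for-loop, as a self-contained sketch:

```python
def unique_name(new_sub, ext, renamed):
    """Pick a non-colliding subtitle name, following the rename_subs loop:
    try name.ext, then name.1.ext, name.2.ext, ... (illustrative)."""
    if f'{new_sub}{ext}' not in renamed:
        return f'{new_sub}{ext}'
    i = 1
    while f'{new_sub}.{i}{ext}' in renamed:
        i += 1
    return f'{new_sub}.{i}{ext}'

renamed = {'movie.en.srt', 'movie.en.1.srt'}
assert unique_name('movie.en', '.srt', renamed) == 'movie.en.2.srt'
assert unique_name('movie.fr', '.srt', renamed) == 'movie.fr.srt'
```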


@ -27,24 +27,19 @@ def process():
continue
for dir_name in get_dirs(section, subsection, link='move'):
logger.info(
'Starting manual run for {0}:{1} - Folder: {2}'.format(
section, subsection, dir_name))
f'Starting manual run for {section}:{subsection} - Folder: {dir_name}')
logger.info(
'Checking database for download info for {0} ...'.format(
os.path.basename(dir_name)))
f'Checking database for download info for {os.path.basename(dir_name)} ...'
)
core.DOWNLOAD_INFO = get_download_info(
os.path.basename(dir_name), 0)
if core.DOWNLOAD_INFO:
logger.info('Found download info for {0}, '
'setting variables now ...'.format
(os.path.basename(dir_name)))
logger.info(f'Found download info for {os.path.basename(dir_name)}, setting variables now ...')
client_agent = core.DOWNLOAD_INFO[0]['client_agent'] or 'manual'
download_id = core.DOWNLOAD_INFO[0]['input_id'] or ''
else:
logger.info('Unable to locate download info for {0}, '
'continuing to try and process this release ...'.format
(os.path.basename(dir_name)))
logger.info(f'Unable to locate download info for {os.path.basename(dir_name)}, continuing to try and process this release ...')
client_agent = 'manual'
download_id = ''
@ -59,7 +54,6 @@ def process():
input_category=subsection)
if results.status_code != 0:
logger.error(
'A problem was reported when trying to perform a manual run for {0}:{1}.'.format
(section, subsection))
f'A problem was reported when trying to perform a manual run for {section}:{subsection}.')
result = results
return result


@ -19,8 +19,7 @@ from core.utils import (
def process(input_directory, input_name=None, status=0, client_agent='manual', download_id=None, input_category=None, failure_link=None):
if core.SAFE_MODE and input_directory == core.NZB_DEFAULT_DIRECTORY:
logger.error(
'The input directory:[{0}] is the Default Download Directory. Please configure category directories to prevent processing of other media.'.format(
input_directory))
f'The input directory:[{input_directory}] is the Default Download Directory. Please configure category directories to prevent processing of other media.')
return ProcessResult(
message='',
status_code=-1,
@ -30,7 +29,7 @@ def process(input_directory, input_name=None, status=0, client_agent='manual', d
download_id = get_nzoid(input_name)
if client_agent != 'manual' and not core.DOWNLOAD_INFO:
logger.debug('Adding NZB download info for directory {0} to database'.format(input_directory))
logger.debug(f'Adding NZB download info for directory {input_directory} to database')
my_db = main_db.DBConnection()
@ -63,8 +62,7 @@ def process(input_directory, input_name=None, status=0, client_agent='manual', d
section = core.CFG.findsection('ALL').isenabled()
if section is None:
logger.error(
'Category:[{0}] is not defined or is not enabled. Please rename it or ensure it is enabled for the appropriate section in your autoProcessMedia.cfg and try again.'.format(
input_category))
f'Category:[{input_category}] is not defined or is not enabled. Please rename it or ensure it is enabled for the appropriate section in your autoProcessMedia.cfg and try again.')
return ProcessResult(
message='',
status_code=-1,
@ -74,8 +72,7 @@ def process(input_directory, input_name=None, status=0, client_agent='manual', d
if len(section) > 1:
logger.error(
'Category:[{0}] is not unique, {1} are using it. Please rename it or disable all other sections using the same category name in your autoProcessMedia.cfg and try again.'.format(
input_category, section.keys()))
f'Category:[{input_category}] is not unique, {section.keys()} are using it. Please rename it or disable all other sections using the same category name in your autoProcessMedia.cfg and try again.')
return ProcessResult(
message='',
status_code=-1,
@ -83,10 +80,9 @@ def process(input_directory, input_name=None, status=0, client_agent='manual', d
if section:
section_name = section.keys()[0]
logger.info('Auto-detected SECTION:{0}'.format(section_name))
logger.info(f'Auto-detected SECTION:{section_name}')
else:
logger.error('Unable to locate a section with subsection:{0} enabled in your autoProcessMedia.cfg, exiting!'.format(
input_category))
logger.error(f'Unable to locate a section with subsection:{input_category} enabled in your autoProcessMedia.cfg, exiting!')
return ProcessResult(
status_code=-1,
message='',
@ -98,23 +94,22 @@ def process(input_directory, input_name=None, status=0, client_agent='manual', d
try:
if int(cfg.get('remote_path')) and not core.REMOTE_PATHS:
logger.error('Remote Path is enabled for {0}:{1} but no Network mount points are defined. Please check your autoProcessMedia.cfg, exiting!'.format(
section_name, input_category))
logger.error(f'Remote Path is enabled for {section_name}:{input_category} but no Network mount points are defined. Please check your autoProcessMedia.cfg, exiting!')
return ProcessResult(
status_code=-1,
message='',
)
except Exception:
logger.error('Remote Path {0} is not valid for {1}:{2} Please set this to either 0 to disable or 1 to enable!'.format(
cfg.get('remote_path'), section_name, input_category))
remote_path = cfg.get('remote_path')
logger.error(f'Remote Path {remote_path} is not valid for {section_name}:{input_category} Please set this to either 0 to disable or 1 to enable!')
input_name, input_directory = convert_to_ascii(input_name, input_directory)
if extract == 1 and not (status > 0 and core.NOEXTRACTFAILED):
logger.debug('Checking for archives to extract in directory: {0}'.format(input_directory))
logger.debug(f'Checking for archives to extract in directory: {input_directory}')
extract_files(input_directory)
logger.info('Calling {0}:{1} to post-process:{2}'.format(section_name, input_category, input_name))
logger.info(f'Calling {section_name}:{input_category} to post-process:{input_name}')
if section_name == 'UserScript':
result = external_script(input_directory, input_name, input_category, section[usercat])


@ -33,7 +33,7 @@ def _parse_total_status():
status_summary = os.environ['NZBPP_TOTALSTATUS']
if status_summary != 'SUCCESS':
status = os.environ['NZBPP_STATUS']
logger.info('Download failed with status {0}.'.format(status))
logger.info(f'Download failed with status {status}.')
return 1
return 0
@ -87,9 +87,9 @@ def check_version():
version = os.environ['NZBOP_VERSION']
# Check if the script is called from nzbget 11.0 or later
if version[0:5] < '11.0':
logger.error('NZBGet Version {0} is not supported. Please update NZBGet.'.format(version))
logger.error(f'NZBGet Version {version} is not supported. Please update NZBGet.')
sys.exit(core.NZBGET_POSTPROCESS_ERROR)
logger.info('Script triggered from NZBGet Version {0}.'.format(version))
logger.info(f'Script triggered from NZBGet Version {version}.')
def process():


@ -9,7 +9,7 @@ MINIMUM_ARGUMENTS = 8
def process_script():
version = os.environ['SAB_VERSION']
logger.info('Script triggered from SABnzbd {0}.'.format(version))
logger.info(f'Script triggered from SABnzbd {version}.')
return nzb.process(
input_directory=os.environ['SAB_COMPLETE_DIR'],
input_name=os.environ['SAB_FINAL_NAME'],
@ -38,7 +38,7 @@ def process(args):
8. Failure URL
"""
version = '0.7.17+' if len(args) > MINIMUM_ARGUMENTS else ''
logger.info('Script triggered from SABnzbd {}'.format(version))
logger.info(f'Script triggered from SABnzbd {version}')
return nzb.process(
input_directory=args[1],
input_name=args[2],


@ -71,20 +71,20 @@ def rename_file(filename, newfile_path):
try:
os.rename(filename, newfile_path)
except Exception as error:
logger.error('Unable to rename file due to: {error}'.format(error=error), 'EXCEPTION')
logger.error(f'Unable to rename file due to: {error}', 'EXCEPTION')
def replace_filename(filename, dirname, name):
head, file_extension = os.path.splitext(os.path.basename(filename))
if media_pattern.search(os.path.basename(dirname).replace(' ', '.')) is not None:
newname = os.path.basename(dirname).replace(' ', '.')
logger.debug('Replacing file name {old} with directory name {new}'.format(old=head, new=newname), 'EXCEPTION')
logger.debug(f'Replacing file name {head} with directory name {newname}', 'EXCEPTION')
elif media_pattern.search(name.replace(' ', '.').lower()) is not None:
newname = name.replace(' ', '.')
logger.debug('Replacing file name {old} with download name {new}'.format
(old=head, new=newname), 'EXCEPTION')
else:
logger.warning('No name replacement determined for {name}'.format(name=head), 'EXCEPTION')
logger.warning(f'No name replacement determined for {head}', 'EXCEPTION')
newname = name
newfile = newname + file_extension
newfile_path = os.path.join(dirname, newfile)
@ -142,7 +142,7 @@ def rename_script(dirname):
try:
os.rename(orig, dest)
except Exception as error:
logger.error('Unable to rename file due to: {error}'.format(error=error), 'EXCEPTION')
logger.error(f'Unable to rename file due to: {error}', 'EXCEPTION')
def par2(dirname):
@ -164,18 +164,18 @@ def par2(dirname):
bitbucket = open('NUL')
else:
bitbucket = open('/dev/null')
logger.info('Running par2 on file {0}.'.format(parfile), 'PAR2')
logger.info(f'Running par2 on file {parfile}.', 'PAR2')
command = [core.PAR2CMD, 'r', parfile, '*']
cmd = ''
for item in command:
cmd = '{cmd} {item}'.format(cmd=cmd, item=item)
logger.debug('calling command:{0}'.format(cmd), 'PAR2')
cmd = f'{cmd} {item}'
logger.debug(f'calling command:{cmd}', 'PAR2')
try:
proc = subprocess.Popen(command, stdout=bitbucket, stderr=bitbucket)
proc.communicate()
result = proc.returncode
except Exception:
logger.error('par2 file processing for {0} has failed'.format(parfile), 'PAR2')
logger.error(f'par2 file processing for {parfile} has failed', 'PAR2')
if result == 0:
logger.info('par2 file processing succeeded', 'PAR2')
os.chdir(pwd)
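The loop above joins the command list into a string purely for the debug log (the list itself is passed to `Popen`). The accumulation leaves a leading space; `' '.join` is the idiomatic one-liner. A sketch with an illustrative command:

```python
command = ['par2', 'r', 'movie.par2', '*']

# Manual accumulation, as in the code above (note the leading space)
cmd = ''
for item in command:
    cmd = f'{cmd} {item}'
assert cmd == ' par2 r movie.par2 *'

# Equivalent idiomatic join, without the leading space
assert ' '.join(command) == 'par2 r movie.par2 *'
```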


@ -41,14 +41,15 @@ def is_video_good(videofile, status, require_lan=None):
else:
return True
logger.info('Checking [{0}] for corruption, please stand by ...'.format(file_name_ext), 'TRANSCODER')
logger.info(f'Checking [{file_name_ext}] for corruption, please stand by ...', 'TRANSCODER')
video_details, result = get_video_details(videofile)
if result != 0:
logger.error('FAILED: [{0}] is corrupted!'.format(file_name_ext), 'TRANSCODER')
logger.error(f'FAILED: [{file_name_ext}] is corrupted!', 'TRANSCODER')
return False
if video_details.get('error'):
logger.info('FAILED: [{0}] returned error [{1}].'.format(file_name_ext, video_details.get('error')), 'TRANSCODER')
error_details = video_details.get('error')
logger.info(f'FAILED: [{file_name_ext}] returned error [{error_details}].', 'TRANSCODER')
return False
if video_details.get('streams'):
video_streams = [item for item in video_details['streams'] if item['codec_type'] == 'video']
@@ -58,12 +59,10 @@ def is_video_good(videofile, status, require_lan=None):
else:
valid_audio = audio_streams
if len(video_streams) > 0 and len(valid_audio) > 0:
logger.info('SUCCESS: [{0}] has no corruption.'.format(file_name_ext), 'TRANSCODER')
logger.info(f'SUCCESS: [{file_name_ext}] has no corruption.', 'TRANSCODER')
return True
else:
logger.info('FAILED: [{0}] has {1} video streams and {2} audio streams. '
'Assume corruption.'.format
(file_name_ext, len(video_streams), len(audio_streams)), 'TRANSCODER')
logger.info(f'FAILED: [{file_name_ext}] has {len(video_streams)} video streams and {len(audio_streams)} audio streams. Assume corruption.', 'TRANSCODER')
return False
@@ -76,7 +75,7 @@ def zip_out(file, img, bitbucket):
try:
procin = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
except Exception:
logger.error('Extracting [{0}] has failed'.format(file), 'TRANSCODER')
logger.error(f'Extracting [{file}] has failed', 'TRANSCODER')
return procin
@@ -116,7 +115,7 @@ def get_video_details(videofile, img=None, bitbucket=None):
result = proc.returncode
video_details = json.loads(out.decode())
except Exception:
logger.error('Checking [{0}] has failed'.format(file), 'TRANSCODER')
logger.error(f'Checking [{file}] has failed', 'TRANSCODER')
return video_details, result
@@ -147,11 +146,11 @@ def build_commands(file, new_dir, movie_name, bitbucket):
if check and core.CONCAT:
name = movie_name
elif check:
name = ('{0}.cd{1}'.format(movie_name, check.groups()[0]))
name = (f'{movie_name}.cd{check.groups()[0]}')
elif core.CONCAT and re.match('(.+)[cC][dD][0-9]', name):
name = re.sub('([ ._=:-]+[cC][dD][0-9])', '', name)
if ext == core.VEXTENSION and new_dir == directory: # we need to change the name to prevent overwriting itself.
core.VEXTENSION = '-transcoded{ext}'.format(ext=core.VEXTENSION) # adds '-transcoded.ext'
core.VEXTENSION = f'-transcoded{core.VEXTENSION}' # adds '-transcoded.ext'
new_file = file
else:
img, data = next(file.items())
@@ -196,7 +195,7 @@ def build_commands(file, new_dir, movie_name, bitbucket):
if core.VBITRATE:
video_cmd.extend(['-b:v', str(core.VBITRATE)])
if core.VRESOLUTION:
video_cmd.extend(['-vf', 'scale={vres}'.format(vres=core.VRESOLUTION)])
video_cmd.extend(['-vf', f'scale={core.VRESOLUTION}'])
if core.VPRESET:
video_cmd.extend(['-preset', core.VPRESET])
if core.VCRF:
@@ -258,14 +257,14 @@ def build_commands(file, new_dir, movie_name, bitbucket):
height=int((height / w_scale) / 2) * 2,
)
if w_scale > 1:
video_cmd.extend(['-vf', 'scale={width}'.format(width=scale)])
video_cmd.extend(['-vf', f'scale={scale}'])
else: # lower or matching ratio, scale by height only.
scale = '{width}:{height}'.format(
width=int((width / h_scale) / 2) * 2,
height=scale.split(':')[1],
)
if h_scale > 1:
video_cmd.extend(['-vf', 'scale={height}'.format(height=scale)])
video_cmd.extend(['-vf', f'scale={scale}'])
if core.VBITRATE:
video_cmd.extend(['-b:v', str(core.VBITRATE)])
if core.VPRESET:
@@ -315,36 +314,36 @@ def build_commands(file, new_dir, movie_name, bitbucket):
a_mapped.extend([audio2[0]['index']])
bitrate = int(float(audio2[0].get('bit_rate', 0))) / 1000
channels = int(float(audio2[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd.extend([f'-c:a:{used_audio}', 'copy'])
elif audio1: # right (or only) language, wrong codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio1[0]['index'])])
a_mapped.extend([audio1[0]['index']])
bitrate = int(float(audio1[0].get('bit_rate', 0))) / 1000
channels = int(float(audio1[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), core.ACODEC if core.ACODEC else 'copy'])
audio_cmd.extend([f'-c:a:{used_audio}', core.ACODEC if core.ACODEC else 'copy'])
elif audio4: # wrong language, right codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio4[0]['index'])])
a_mapped.extend([audio4[0]['index']])
bitrate = int(float(audio4[0].get('bit_rate', 0))) / 1000
channels = int(float(audio4[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd.extend([f'-c:a:{used_audio}', 'copy'])
elif audio3: # wrong language, wrong codec. just pick the default audio track
map_cmd.extend(['-map', '0:{index}'.format(index=audio3[0]['index'])])
a_mapped.extend([audio3[0]['index']])
bitrate = int(float(audio3[0].get('bit_rate', 0))) / 1000
channels = int(float(audio3[0].get('channels', 0)))
audio_cmd.extend(['-c:a:{0}'.format(used_audio), core.ACODEC if core.ACODEC else 'copy'])
audio_cmd.extend([f'-c:a:{used_audio}', core.ACODEC if core.ACODEC else 'copy'])
if core.ACHANNELS and channels and channels > core.ACHANNELS:
audio_cmd.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS)])
audio_cmd.extend([f'-ac:a:{used_audio}', str(core.ACHANNELS)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if core.ABITRATE and not (core.ABITRATE * 0.9 < bitrate < core.ABITRATE * 1.1):
audio_cmd.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE)])
audio_cmd.extend([f'-b:a:{used_audio}', str(core.ABITRATE)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if core.OUTPUTQUALITYPERCENT:
audio_cmd.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
audio_cmd.extend([f'-q:a:{used_audio}', str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd[1] == 'copy':
audio_cmd[1] = core.ACODEC
if audio_cmd[1] in ['aac', 'dts']:
@@ -365,42 +364,42 @@ def build_commands(file, new_dir, movie_name, bitbucket):
a_mapped.extend([audio5[0]['index']])
bitrate = int(float(audio5[0].get('bit_rate', 0))) / 1000
channels = int(float(audio5[0].get('channels', 0)))
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd2.extend([f'-c:a:{used_audio}', 'copy'])
elif audio1: # right language wrong codec.
map_cmd.extend(['-map', '0:{index}'.format(index=audio1[0]['index'])])
a_mapped.extend([audio1[0]['index']])
bitrate = int(float(audio1[0].get('bit_rate', 0))) / 1000
channels = int(float(audio1[0].get('channels', 0)))
if core.ACODEC2:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), core.ACODEC2])
audio_cmd2.extend([f'-c:a:{used_audio}', core.ACODEC2])
else:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd2.extend([f'-c:a:{used_audio}', 'copy'])
elif audio6: # wrong language, right codec
map_cmd.extend(['-map', '0:{index}'.format(index=audio6[0]['index'])])
a_mapped.extend([audio6[0]['index']])
bitrate = int(float(audio6[0].get('bit_rate', 0))) / 1000
channels = int(float(audio6[0].get('channels', 0)))
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd2.extend([f'-c:a:{used_audio}', 'copy'])
elif audio3: # wrong language, wrong codec just pick the default audio track
map_cmd.extend(['-map', '0:{index}'.format(index=audio3[0]['index'])])
a_mapped.extend([audio3[0]['index']])
bitrate = int(float(audio3[0].get('bit_rate', 0))) / 1000
channels = int(float(audio3[0].get('channels', 0)))
if core.ACODEC2:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), core.ACODEC2])
audio_cmd2.extend([f'-c:a:{used_audio}', core.ACODEC2])
else:
audio_cmd2.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd2.extend([f'-c:a:{used_audio}', 'copy'])
if core.ACHANNELS2 and channels and channels > core.ACHANNELS2:
audio_cmd2.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS2)])
audio_cmd2.extend([f'-ac:a:{used_audio}', str(core.ACHANNELS2)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if core.ABITRATE2 and not (core.ABITRATE2 * 0.9 < bitrate < core.ABITRATE2 * 1.1):
audio_cmd2.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE2)])
audio_cmd2.extend([f'-b:a:{used_audio}', str(core.ABITRATE2)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if core.OUTPUTQUALITYPERCENT:
audio_cmd2.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
audio_cmd2.extend([f'-q:a:{used_audio}', str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd2[1] == 'copy':
audio_cmd2[1] = core.ACODEC2
if audio_cmd2[1] in ['aac', 'dts']:
@@ -422,23 +421,23 @@ def build_commands(file, new_dir, movie_name, bitbucket):
bitrate = int(float(audio.get('bit_rate', 0))) / 1000
channels = int(float(audio.get('channels', 0)))
if audio['codec_name'] in core.ACODEC3_ALLOW:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd3.extend([f'-c:a:{used_audio}', 'copy'])
else:
if core.ACODEC3:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), core.ACODEC3])
audio_cmd3.extend([f'-c:a:{used_audio}', core.ACODEC3])
else:
audio_cmd3.extend(['-c:a:{0}'.format(used_audio), 'copy'])
audio_cmd3.extend([f'-c:a:{used_audio}', 'copy'])
if core.ACHANNELS3 and channels and channels > core.ACHANNELS3:
audio_cmd3.extend(['-ac:a:{0}'.format(used_audio), str(core.ACHANNELS3)])
audio_cmd3.extend([f'-ac:a:{used_audio}', str(core.ACHANNELS3)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if core.ABITRATE3 and not (core.ABITRATE3 * 0.9 < bitrate < core.ABITRATE3 * 1.1):
audio_cmd3.extend(['-b:a:{0}'.format(used_audio), str(core.ABITRATE3)])
audio_cmd3.extend([f'-b:a:{used_audio}', str(core.ABITRATE3)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if core.OUTPUTQUALITYPERCENT > 0:
audio_cmd3.extend(['-q:a:{0}'.format(used_audio), str(core.OUTPUTQUALITYPERCENT)])
audio_cmd3.extend([f'-q:a:{used_audio}', str(core.OUTPUTQUALITYPERCENT)])
if audio_cmd3[1] == 'copy':
audio_cmd3[1] = core.ACODEC3
if audio_cmd3[1] in ['aac', 'dts']:
@@ -456,7 +455,7 @@ def build_commands(file, new_dir, movie_name, bitbucket):
if core.BURN and not subs1 and not burnt and os.path.isfile(file):
for subfile in get_subs(file):
if lan in os.path.split(subfile)[1]:
video_cmd.extend(['-vf', 'subtitles={subs}'.format(subs=subfile)])
video_cmd.extend(['-vf', f'subtitles={subfile}'])
burnt = 1
for sub in subs1:
if core.BURN and not burnt and os.path.isfile(input_file):
@@ -465,7 +464,7 @@ def build_commands(file, new_dir, movie_name, bitbucket):
if sub_streams[index]['index'] == sub['index']:
subloc = index
break
video_cmd.extend(['-vf', 'subtitles={sub}:si={loc}'.format(sub=input_file, loc=subloc)])
video_cmd.extend(['-vf', f'subtitles={input_file}:si={subloc}'])
burnt = 1
if not core.ALLOWSUBS:
break
@@ -519,10 +518,10 @@ def build_commands(file, new_dir, movie_name, bitbucket):
except Exception:
pass
if metlan:
meta_cmd.extend(['-metadata:s:s:{x}'.format(x=len(s_mapped) + n),
'language={lang}'.format(lang=metlan.alpha3)])
meta_cmd.extend([f'-metadata:s:s:{len(s_mapped) + n}',
f'language={metlan.alpha3}'])
n += 1
map_cmd.extend(['-map', '{x}:0'.format(x=n)])
map_cmd.extend(['-map', f'{n}:0'])
if not core.ALLOWSUBS or (not s_mapped and not n):
sub_cmd.extend(['-sn'])
@@ -582,20 +581,20 @@ def extract_subs(file, newfile_path, bitbucket):
lan = sub.get('tags', {}).get('language', 'unk')
if num == 1:
output_file = os.path.join(subdir, '{0}.srt'.format(name))
output_file = os.path.join(subdir, f'{name}.srt')
if os.path.isfile(output_file):
output_file = os.path.join(subdir, '{0}.{1}.srt'.format(name, n))
output_file = os.path.join(subdir, f'{name}.{n}.srt')
else:
output_file = os.path.join(subdir, '{0}.{1}.srt'.format(name, lan))
output_file = os.path.join(subdir, f'{name}.{lan}.srt')
if os.path.isfile(output_file):
output_file = os.path.join(subdir, '{0}.{1}.{2}.srt'.format(name, lan, n))
output_file = os.path.join(subdir, f'{name}.{lan}.{n}.srt')
command = [core.FFMPEG, '-loglevel', 'warning', '-i', file, '-vn', '-an',
'-codec:{index}'.format(index=idx), 'srt', output_file]
f'-codec:{idx}', 'srt', output_file]
if platform.system() != 'Windows':
command = core.NICENESS + command
logger.info('Extracting {0} subtitle from: {1}'.format(lan, file))
logger.info(f'Extracting {lan} subtitle from: {file}')
print_cmd(command)
result = 1 # set result to failed in case call fails.
try:
@@ -610,7 +609,7 @@ def extract_subs(file, newfile_path, bitbucket):
shutil.copymode(file, output_file)
except Exception:
pass
logger.info('Extracting {0} subtitle from {1} has succeeded'.format(lan, file))
logger.info(f'Extracting {lan} subtitle from {file} has succeeded')
else:
logger.error('Extracting subtitles has failed')
@@ -625,11 +624,11 @@ def process_list(it, new_dir, bitbucket):
for item in it:
ext = os.path.splitext(item)[1].lower()
if ext in ['.iso', '.bin', '.img'] and ext not in core.IGNOREEXTENSIONS:
logger.debug('Attempting to rip disk image: {0}'.format(item), 'TRANSCODER')
logger.debug(f'Attempting to rip disk image: {item}', 'TRANSCODER')
new_list.extend(rip_iso(item, new_dir, bitbucket))
rem_list.append(item)
elif re.match('.+VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]', item) and '.vob' not in core.IGNOREEXTENSIONS:
logger.debug('Found VIDEO_TS image file: {0}'.format(item), 'TRANSCODER')
logger.debug(f'Found VIDEO_TS image file: {item}', 'TRANSCODER')
if not vts_path:
try:
vts_path = re.match('(.+VIDEO_TS)', item).groups()[0]
@@ -637,7 +636,7 @@ def process_list(it, new_dir, bitbucket):
vts_path = os.path.split(item)[0]
rem_list.append(item)
elif re.match('.+BDMV[/\\]SOURCE[/\\][0-9]+[0-9].[Mm][Tt][Ss]', item) and '.mts' not in core.IGNOREEXTENSIONS:
logger.debug('Found MTS image file: {0}'.format(item), 'TRANSCODER')
logger.debug(f'Found MTS image file: {item}', 'TRANSCODER')
if not mts_path:
try:
mts_path = re.match('(.+BDMV[/\\]SOURCE)', item).groups()[0]
@@ -665,7 +664,7 @@ def process_list(it, new_dir, bitbucket):
it.extend(new_list)
for item in rem_list:
it.remove(item)
logger.debug('Successfully extracted .vob file {0} from disk image'.format(new_list[0]), 'TRANSCODER')
logger.debug(f'Successfully extracted .vob file {new_list[0]} from disk image', 'TRANSCODER')
elif new_list and not success:
new_list = []
rem_list = []
@@ -675,7 +674,7 @@
def mount_iso(item, new_dir, bitbucket): #Currently only supports Linux Mount when permissions allow.
if platform.system() == 'Windows':
logger.error('No mounting options available under Windows for image file {0}'.format(item), 'TRANSCODER')
logger.error(f'No mounting options available under Windows for image file {item}', 'TRANSCODER')
return []
mount_point = os.path.join(os.path.dirname(os.path.abspath(item)),'temp')
make_dir(mount_point)
@@ -688,20 +687,20 @@ def mount_iso(item, new_dir, bitbucket): #Currently only supports Linux Mount wh
for file in files:
full_path = os.path.join(root, file)
if re.match('.+VTS_[0-9][0-9]_[0-9].[Vv][Oo][Bb]', full_path) and '.vob' not in core.IGNOREEXTENSIONS:
logger.debug('Found VIDEO_TS image file: {0}'.format(full_path), 'TRANSCODER')
logger.debug(f'Found VIDEO_TS image file: {full_path}', 'TRANSCODER')
try:
vts_path = re.match('(.+VIDEO_TS)', full_path).groups()[0]
except Exception:
vts_path = os.path.split(full_path)[0]
return combine_vts(vts_path)
elif re.match('.+BDMV[/\\]STREAM[/\\][0-9]+[0-9].[Mm]', full_path) and '.mts' not in core.IGNOREEXTENSIONS:
logger.debug('Found MTS image file: {0}'.format(full_path), 'TRANSCODER')
logger.debug(f'Found MTS image file: {full_path}', 'TRANSCODER')
try:
mts_path = re.match('(.+BDMV[/\\]STREAM)', full_path).groups()[0]
except Exception:
mts_path = os.path.split(full_path)[0]
return combine_mts(mts_path)
logger.error('No VIDEO_TS or BDMV/SOURCE folder found in image file {0}'.format(mount_point), 'TRANSCODER')
logger.error(f'No VIDEO_TS or BDMV/SOURCE folder found in image file {mount_point}', 'TRANSCODER')
return ['failure'] # If we got here, nothing matched our criteria
@@ -710,16 +709,16 @@ def rip_iso(item, new_dir, bitbucket):
failure_dir = 'failure'
# Mount the ISO in your OS and call combineVTS.
if not core.SEVENZIP:
logger.debug('No 7zip installed. Attempting to mount image file {0}'.format(item), 'TRANSCODER')
logger.debug(f'No 7zip installed. Attempting to mount image file {item}', 'TRANSCODER')
try:
new_files = mount_iso(item, new_dir, bitbucket) # Currently only works for Linux.
except Exception:
logger.error('Failed to mount and extract from image file {0}'.format(item), 'TRANSCODER')
logger.error(f'Failed to mount and extract from image file {item}', 'TRANSCODER')
new_files = [failure_dir]
return new_files
cmd = [core.SEVENZIP, 'l', item]
try:
logger.debug('Attempting to extract .vob or .mts from image file {0}'.format(item), 'TRANSCODER')
logger.debug(f'Attempting to extract .vob or .mts from image file {item}', 'TRANSCODER')
print_cmd(cmd)
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=bitbucket)
out, err = proc.communicate()
@@ -738,7 +737,7 @@ def rip_iso(item, new_dir, bitbucket):
concat = []
m = 1
while True:
vts_name = 'VIDEO_TS{0}VTS_{1:02d}_{2:d}.VOB'.format(os.sep, n + 1, m)
vts_name = f'VIDEO_TS{os.sep}VTS_{n + 1:02d}_{m:d}.VOB'
if vts_name in file_list:
concat.append(vts_name)
m += 1
@@ -783,10 +782,10 @@ def rip_iso(item, new_dir, bitbucket):
name = os.path.splitext(os.path.split(item)[1])[0]
new_files.append({item: {'name': name, 'files': combined}})
if not new_files:
logger.error('No VIDEO_TS or BDMV/SOURCE folder found in image file. Attempting to mount and scan {0}'.format(item), 'TRANSCODER')
logger.error(f'No VIDEO_TS or BDMV/SOURCE folder found in image file. Attempting to mount and scan {item}', 'TRANSCODER')
new_files = mount_iso(item, new_dir, bitbucket)
except Exception:
logger.error('Failed to extract from image file {0}'.format(item), 'TRANSCODER')
logger.error(f'Failed to extract from image file {item}', 'TRANSCODER')
new_files = [failure_dir]
return new_files
@@ -803,7 +802,7 @@ def combine_vts(vts_path):
concat = []
m = 1
while True:
vts_name = 'VTS_{0:02d}_{1:d}.VOB'.format(n + 1, m)
vts_name = f'VTS_{n + 1:02d}_{m:d}.VOB'
if os.path.isfile(os.path.join(vts_path, vts_name)):
concat.append(os.path.join(vts_path, vts_name))
m += 1
@@ -861,19 +860,19 @@ def combine_cd(combine):
files = [file for file in combine if
n + 1 == int(re.match('.+[cC][dD]([0-9]+).', file).groups()[0]) and item in file]
if files:
concat += '{file}|'.format(file=files[0])
concat += f'{files[0]}|'
else:
break
if concat:
new_files.append('concat:{0}'.format(concat[:-1]))
new_files.append(f'concat:{concat[:-1]}')
return new_files
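The trailing-pipe trimming above (`concat[:-1]`) can be avoided by collecting the parts in a list and joining once; a hedged sketch of building the same ffmpeg concat-protocol string (file names are illustrative):

```python
# ffmpeg's concat protocol takes inputs as 'concat:part1|part2|...';
# joining a list avoids appending and then stripping a trailing '|'.
parts = ['movie.cd1.vob', 'movie.cd2.vob', 'movie.cd3.vob']
concat_input = 'concat:' + '|'.join(parts)
```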
def print_cmd(command):
cmd = ''
for item in command:
cmd = '{cmd} {item}'.format(cmd=cmd, item=item)
logger.debug('calling command:{0}'.format(cmd))
cmd = f'{cmd} {item}'
logger.debug(f'calling command:{cmd}')
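`print_cmd` above rebuilds the command string one token at a time, which leaves a leading space and no quoting; since Python 3.8, `shlex.join` produces a correctly quoted, copy-pasteable command line in one call. A sketch under that assumption:

```python
import shlex

# shlex.join quotes any token containing spaces or shell metacharacters,
# so the logged command can be pasted back into a shell verbatim.
command = ['ffmpeg', '-i', 'my movie.mkv', '-c:v', 'copy', 'out.mkv']
cmd = shlex.join(command)
```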
def transcode_directory(dir_name):
@@ -914,11 +913,11 @@ def transcode_directory(dir_name):
os.remove(newfile_path)
except OSError as e:
if e.errno != errno.ENOENT: # Ignore the error if it's just telling us that the file doesn't exist
logger.debug('Error when removing transcoding target: {0}'.format(e))
logger.debug(f'Error when removing transcoding target: {e}')
except Exception as e:
logger.debug('Error when removing transcoding target: {0}'.format(e))
logger.debug(f'Error when removing transcoding target: {e}')
logger.info('Transcoding video: {0}'.format(newfile_path))
logger.info(f'Transcoding video: {newfile_path}')
print_cmd(command)
result = 1 # set result to failed in case call fails.
try:
@@ -930,15 +929,15 @@ def transcode_directory(dir_name):
for vob in data['files']:
procin = zip_out(vob, img, bitbucket)
if procin:
logger.debug('Feeding in file: {0} to Transcoder'.format(vob))
logger.debug(f'Feeding in file: {vob} to Transcoder')
shutil.copyfileobj(procin.stdout, proc.stdin)
procin.stdout.close()
out, err = proc.communicate()
if err:
logger.error('Transcoder returned:{0} has failed'.format(err))
logger.error(f'Transcoder returned:{err} has failed')
result = proc.returncode
except Exception:
logger.error('Transcoding of video {0} has failed'.format(newfile_path))
logger.error(f'Transcoding of video {newfile_path} has failed')
if core.SUBSDIR and result == 0 and isinstance(file, str):
for sub in get_subs(file):
@@ -954,14 +953,14 @@ def transcode_directory(dir_name):
shutil.copymode(file, newfile_path)
except Exception:
pass
logger.info('Transcoding of video to {0} succeeded'.format(newfile_path))
logger.info(f'Transcoding of video to {newfile_path} succeeded')
if os.path.isfile(newfile_path) and (file in new_list or not core.DUPLICATE):
try:
os.unlink(file)
except Exception:
pass
else:
logger.error('Transcoding of video to {0} failed with result {1}'.format(newfile_path, result))
logger.error(f'Transcoding of video to {newfile_path} failed with result {result}')
# this will be 0 (successful) if all are successful, else will return a positive integer for failure.
final_result = final_result + result
if core.MOUNTED: # In case we mounted an .iso file, unmount here.


@@ -54,7 +54,7 @@ def external_script(output_destination, torrent_name, torrent_label, settings):
if transcoder.is_video_good(video, 0):
import_subs(video)
else:
logger.info('Corrupt video file found {0}. Deleting.'.format(video), 'USERSCRIPT')
logger.info(f'Corrupt video file found {video}. Deleting.', 'USERSCRIPT')
os.unlink(video)
for dirpath, _, filenames in os.walk(output_destination):
@@ -62,7 +62,7 @@ def external_script(output_destination, torrent_name, torrent_label, settings):
file_path = core.os.path.join(dirpath, file)
file_name, file_extension = os.path.splitext(file)
logger.debug('Checking file {0} to see if this should be processed.'.format(file), 'USERSCRIPT')
logger.debug(f'Checking file {file} to see if this should be processed.', 'USERSCRIPT')
if file_extension in core.USER_SCRIPT_MEDIAEXTENSIONS or 'all' in core.USER_SCRIPT_MEDIAEXTENSIONS:
num_files += 1
@@ -71,44 +71,42 @@ def external_script(output_destination, torrent_name, torrent_label, settings):
command = [core.USER_SCRIPT]
for param in core.USER_SCRIPT_PARAM:
if param == 'FN':
command.append('{0}'.format(file))
command.append(f'{file}')
continue
elif param == 'FP':
command.append('{0}'.format(file_path))
command.append(f'{file_path}')
continue
elif param == 'TN':
command.append('{0}'.format(torrent_name))
command.append(f'{torrent_name}')
continue
elif param == 'TL':
command.append('{0}'.format(torrent_label))
command.append(f'{torrent_label}')
continue
elif param == 'DN':
if core.USER_SCRIPT_RUNONCE == 1:
command.append('{0}'.format(output_destination))
command.append(f'{output_destination}')
else:
command.append('{0}'.format(dirpath))
command.append(f'{dirpath}')
continue
else:
command.append(param)
continue
cmd = ''
for item in command:
cmd = '{cmd} {item}'.format(cmd=cmd, item=item)
logger.info('Running script {cmd} on file {path}.'.format(cmd=cmd, path=file_path), 'USERSCRIPT')
cmd = f'{cmd} {item}'
logger.info(f'Running script {cmd} on file {file_path}.', 'USERSCRIPT')
try:
p = Popen(command)
res = p.wait()
if str(res) in core.USER_SCRIPT_SUCCESSCODES: # Linux returns 0 for successful.
logger.info('UserScript {0} was successfull'.format(command[0]))
logger.info(f'UserScript {command[0]} was successful')
result = 0
else:
logger.error('UserScript {0} has failed with return code: {1}'.format(command[0], res), 'USERSCRIPT')
logger.info(
'If the UserScript completed successfully you should add {0} to the user_script_successCodes'.format(
res), 'USERSCRIPT')
logger.error(f'UserScript {command[0]} has failed with return code: {res}', 'USERSCRIPT')
logger.info(f'If the UserScript completed successfully you should add {res} to the user_script_successCodes', 'USERSCRIPT')
result = int(1)
except Exception:
logger.error('UserScript {0} has failed'.format(command[0]), 'USERSCRIPT')
logger.error(f'UserScript {command[0]} has failed', 'USERSCRIPT')
result = int(1)
final_result += result
@@ -121,11 +119,10 @@ def external_script(output_destination, torrent_name, torrent_label, settings):
num_files_new += 1
if core.USER_SCRIPT_CLEAN == int(1) and num_files_new == 0 and final_result == 0:
logger.info('All files have been processed. Cleaning outputDirectory {0}'.format(output_destination))
logger.info(f'All files have been processed. Cleaning outputDirectory {output_destination}')
remove_dir(output_destination)
elif core.USER_SCRIPT_CLEAN == int(1) and num_files_new != 0:
logger.info('{0} files were processed, but {1} still remain. outputDirectory will not be cleaned.'.format(
num_files, num_files_new))
logger.info(f'{num_files} files were processed, but {num_files_new} still remain. outputDirectory will not be cleaned.')
return ProcessResult(
status_code=final_result,
message='User Script Completed',


@@ -26,7 +26,7 @@ def clean_dir(path, section, subsection):
def process_dir(path, link):
folders = []
logger.info('Searching {0} for mediafiles to post-process ...'.format(path))
logger.info(f'Searching {path} for mediafiles to post-process ...')
dir_contents = os.listdir(path)
# search for single files and move them into their own folder for post-processing
@@ -56,7 +56,7 @@ def process_dir(path, link):
try:
move_file(mediafile, path, link)
except Exception as e:
logger.error('Failed to move {0} to its own directory: {1}'.format(os.path.split(mediafile)[1], e))
logger.error(f'Failed to move {os.path.split(mediafile)[1]} to its own directory: {e}')
# removeEmptyFolders(path, removeRoot=False)
@@ -97,7 +97,7 @@ def get_dirs(section, subsection, link='hard'):
try:
to_return.extend(process_dir(directory, link))
except Exception as e:
logger.error('Failed to add directories from {0} for post-processing: {1}'.format(watch_directory, e))
logger.error(f'Failed to add directories from {watch_directory} for post-processing: {e}')
if core.USE_LINK == 'move':
try:
@@ -105,10 +105,10 @@ def get_dirs(section, subsection, link='hard'):
if os.path.exists(output_directory):
to_return.extend(process_dir(output_directory, link))
except Exception as e:
logger.error('Failed to add directories from {0} for post-processing: {1}'.format(core.OUTPUT_DIRECTORY, e))
logger.error(f'Failed to add directories from {core.OUTPUT_DIRECTORY} for post-processing: {e}')
if not to_return:
logger.debug('No directories identified in {0}:{1} for post-processing'.format(section, subsection))
logger.debug(f'No directories identified in {section}:{subsection} for post-processing')
return list(set(to_return))


@@ -1,5 +1,4 @@
import os
from builtins import bytes
import core
from core import logger
@@ -66,23 +65,23 @@ def convert_to_ascii(input_name, dir_name):
encoded, base2 = char_replace(base)
if encoded:
dir_name = os.path.join(directory, base2)
logger.info('Renaming directory to: {0}.'.format(base2), 'ENCODER')
logger.info(f'Renaming directory to: {base2}.', 'ENCODER')
os.rename(os.path.join(directory, base), dir_name)
if 'NZBOP_SCRIPTDIR' in os.environ:
print('[NZB] DIRECTORY={0}'.format(dir_name))
print(f'[NZB] DIRECTORY={dir_name}')
for dirname, dirnames, _ in os.walk(dir_name, topdown=False):
for subdirname in dirnames:
encoded, subdirname2 = char_replace(subdirname)
if encoded:
logger.info('Renaming directory to: {0}.'.format(subdirname2), 'ENCODER')
logger.info(f'Renaming directory to: {subdirname2}.', 'ENCODER')
os.rename(os.path.join(dirname, subdirname), os.path.join(dirname, subdirname2))
for dirname, _, filenames in os.walk(dir_name):
for filename in filenames:
encoded, filename2 = char_replace(filename)
if encoded:
logger.info('Renaming file to: {0}.'.format(filename2), 'ENCODER')
logger.info(f'Renaming file to: {filename2}.', 'ENCODER')
os.rename(os.path.join(dirname, filename), os.path.join(dirname, filename2))
return input_name, dir_name


@@ -15,7 +15,7 @@ from core.utils.paths import get_dir_size, make_dir
def move_file(mediafile, path, link):
logger.debug('Found file {0} in root directory {1}.'.format(os.path.split(mediafile)[1], path))
logger.debug(f'Found file {os.path.split(mediafile)[1]} in root directory {path}.')
new_path = None
file_ext = os.path.splitext(mediafile)[1]
try:
@@ -27,7 +27,7 @@ def move_file(mediafile, path, link):
album = f.album
# create new path
new_path = os.path.join(path, '{0} - {1}'.format(sanitize_name(artist), sanitize_name(album)))
new_path = os.path.join(path, f'{sanitize_name(artist)} - {sanitize_name(album)}')
elif file_ext in core.MEDIA_CONTAINER:
f = guessit.guessit(mediafile)
@@ -39,7 +39,7 @@ def move_file(mediafile, path, link):
new_path = os.path.join(path, sanitize_name(title))
except Exception as e:
logger.error('Exception parsing name for media file: {0}: {1}'.format(os.path.split(mediafile)[1], e))
logger.error(f'Exception parsing name for media file: {os.path.split(mediafile)[1]}: {e}')
if not new_path:
title = os.path.splitext(os.path.basename(mediafile))[0]
@@ -79,7 +79,7 @@ def is_min_size(input_name, min_size):
try:
input_size = get_dir_size(os.path.dirname(input_name))
except Exception:
logger.error('Failed to get file size for {0}'.format(input_name), 'MINSIZE')
logger.error(f'Failed to get file size for {input_name}', 'MINSIZE')
return True
# Ignore files under a certain size
@@ -131,8 +131,7 @@ def list_media_files(path, min_size=0, delete_ignored=0, media=True, audio=True,
if delete_ignored == 1:
try:
os.unlink(path)
logger.debug('Ignored file {0} has been removed ...'.format
(cur_file))
logger.debug(f'Ignored file {cur_file} has been removed ...')
except Exception:
pass
else:
@@ -153,8 +152,7 @@ def list_media_files(path, min_size=0, delete_ignored=0, media=True, audio=True,
if delete_ignored == 1:
try:
os.unlink(full_cur_file)
logger.debug('Ignored file {0} has been removed ...'.format
(cur_file))
logger.debug(f'Ignored file {cur_file} has been removed ...')
except Exception:
pass
continue
@@ -182,7 +180,7 @@ def extract_files(src, dst=None, keep_archive=None):
extracted_folder.append(dir_path)
extracted_archive.append(archive_name)
except Exception:
logger.error('Extraction failed for: {0}'.format(full_file_name))
logger.error(f'Extraction failed for: {full_file_name}')
for folder in extracted_folder:
for inputFile in list_media_files(folder, media=False, audio=False, meta=False, archives=True):
@@ -191,24 +189,24 @@ def extract_files(src, dst=None, keep_archive=None):
archive_name = re.sub(r'part[0-9]+', '', archive_name)
if archive_name not in extracted_archive or keep_archive:
continue # don't remove if we haven't extracted this archive, or if we want to preserve them.
logger.info('Removing extracted archive {0} from folder {1} ...'.format(full_file_name, folder))
logger.info(f'Removing extracted archive {full_file_name} from folder {folder} ...')
try:
if not os.access(inputFile, os.W_OK):
os.chmod(inputFile, stat.S_IWUSR)
os.remove(inputFile)
time.sleep(1)
except Exception as e:
logger.error('Unable to remove file {0} due to: {1}'.format(inputFile, e))
logger.error(f'Unable to remove file {inputFile} due to: {e}')
def backup_versioned_file(old_file, version):
num_tries = 0
new_file = '{old}.v{version}'.format(old=old_file, version=version)
new_file = f'{old_file}.v{version}'
while not os.path.isfile(new_file):
if not os.path.isfile(old_file):
logger.log('Not creating backup, {file} doesn\'t exist'.format(file=old_file), logger.DEBUG)
logger.log(f'Not creating backup, {old_file} doesn\'t exist', logger.DEBUG)
break
try:
@@ -224,7 +222,7 @@ def backup_versioned_file(old_file, version):
logger.log('Trying again.', logger.DEBUG)
if num_tries >= 10:
logger.log('Unable to back up {old} to {new} please do it manually.'.format(old=old_file, new=new_file), logger.ERROR)
logger.log(f'Unable to back up {old_file} to {new_file} please do it manually.', logger.ERROR)
return False
return True


@@ -11,21 +11,21 @@ from core.utils.naming import sanitize_name
def find_imdbid(dir_name, input_name, omdb_api_key):
imdbid = None
logger.info('Attemping imdbID lookup for {0}'.format(input_name))
logger.info(f'Attempting imdbID lookup for {input_name}')
# find imdbid in dirName
logger.info('Searching folder and file names for imdbID ...')
m = re.search(r'\b(tt\d{7,8})\b', dir_name + input_name)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}]'.format(imdbid))
logger.info(f'Found imdbID [{imdbid}]')
return imdbid
if os.path.isdir(dir_name):
for file in os.listdir(dir_name):
m = re.search(r'\b(tt\d{7,8})\b', file)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}] via file name'.format(imdbid))
logger.info(f'Found imdbID [{imdbid}] via file name')
return imdbid
if 'NZBPR__DNZB_MOREINFO' in os.environ:
dnzb_more_info = os.environ.get('NZBPR__DNZB_MOREINFO', '')
@ -34,7 +34,7 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
m = regex.match(dnzb_more_info)
if m:
imdbid = m.group(1)
logger.info('Found imdbID [{0}] from DNZB-MoreInfo'.format(imdbid))
logger.info(f'Found imdbID [{imdbid}] from DNZB-MoreInfo')
return imdbid
logger.info('Searching IMDB for imdbID ...')
try:
@ -58,13 +58,13 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
logger.info('Unable to determine imdbID: No api key provided for omdbapi.com.')
return
logger.debug('Opening URL: {0}'.format(url))
logger.debug(f'Opening URL: {url}')
try:
r = requests.get(url, params={'apikey': omdb_api_key, 'y': year, 't': title},
verify=False, timeout=(60, 300))
except requests.ConnectionError:
logger.error('Unable to open URL {0}'.format(url))
logger.error(f'Unable to open URL {url}')
return
try:
@ -78,10 +78,10 @@ def find_imdbid(dir_name, input_name, omdb_api_key):
logger.error('No imdbID returned from omdbapi.com')
if imdbid:
logger.info('Found imdbID [{0}]'.format(imdbid))
logger.info(f'Found imdbID [{imdbid}]')
return imdbid
logger.warning('Unable to find a imdbID for {0}'.format(input_name))
logger.warning(f'Unable to find an imdbID for {input_name}')
return imdbid
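The lookup above relies on `re.search` with the pattern `r'\b(tt\d{7,8})\b'`, which matches a 7- or 8-digit IMDb id in folder or file names. A minimal sketch of that matching step (the helper name is hypothetical):

```python
import re

def find_tt_id(text):
    """Extract an IMDb id (tt followed by 7-8 digits) from arbitrary
    text, mirroring the pattern used in find_imdbid above."""
    m = re.search(r'\b(tt\d{7,8})\b', text)
    return m.group(1) if m else None

assert find_tt_id('Movie.2019.tt1234567.mkv') == 'tt1234567'
assert find_tt_id('no id here') is None
```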
@ -94,13 +94,13 @@ def category_search(input_directory, input_name, input_category, root, categorie
pathlist = os.path.normpath(input_directory).split(os.sep)
if input_category and input_category in pathlist:
logger.debug('SEARCH: Found the Category: {0} in directory structure'.format(input_category))
logger.debug(f'SEARCH: Found the Category: {input_category} in directory structure')
elif input_category:
logger.debug('SEARCH: Could not find the category: {0} in the directory structure'.format(input_category))
logger.debug(f'SEARCH: Could not find the category: {input_category} in the directory structure')
else:
try:
input_category = list(set(pathlist) & set(categories))[-1] # assume last match is most relevant category.
logger.debug('SEARCH: Found Category: {0} in directory structure'.format(input_category))
logger.debug(f'SEARCH: Found Category: {input_category} in directory structure')
except IndexError:
input_category = ''
logger.debug('SEARCH: Could not find a category in the directory structure')
@ -111,37 +111,37 @@ def category_search(input_directory, input_name, input_category, root, categorie
if input_category and os.path.isdir(os.path.join(input_directory, input_category)):
logger.info(
'SEARCH: Found category directory {0} in input directory directory {1}'.format(input_category, input_directory))
f'SEARCH: Found category directory {input_category} in input directory {input_directory}')
input_directory = os.path.join(input_directory, input_category)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
logger.info(f'SEARCH: Setting input_directory to {input_directory}')
if input_name and os.path.isdir(os.path.join(input_directory, input_name)):
logger.info('SEARCH: Found torrent directory {0} in input directory directory {1}'.format(input_name, input_directory))
logger.info(f'SEARCH: Found torrent directory {input_name} in input directory {input_directory}')
input_directory = os.path.join(input_directory, input_name)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
logger.info(f'SEARCH: Setting input_directory to {input_directory}')
tordir = True
elif input_name and os.path.isdir(os.path.join(input_directory, sanitize_name(input_name))):
logger.info('SEARCH: Found torrent directory {0} in input directory directory {1}'.format(
logger.info(f'SEARCH: Found torrent directory {sanitize_name(input_name)} in input directory {input_directory}')
input_directory = os.path.join(input_directory, sanitize_name(input_name))
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
logger.info(f'SEARCH: Setting input_directory to {input_directory}')
tordir = True
elif input_name and os.path.isfile(os.path.join(input_directory, input_name)):
logger.info('SEARCH: Found torrent file {0} in input directory directory {1}'.format(input_name, input_directory))
logger.info(f'SEARCH: Found torrent file {input_name} in input directory {input_directory}')
input_directory = os.path.join(input_directory, input_name)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
logger.info(f'SEARCH: Setting input_directory to {input_directory}')
tordir = True
elif input_name and os.path.isfile(os.path.join(input_directory, sanitize_name(input_name))):
logger.info('SEARCH: Found torrent file {0} in input directory directory {1}'.format(
logger.info(f'SEARCH: Found torrent file {sanitize_name(input_name)} in input directory {input_directory}')
input_directory = os.path.join(input_directory, sanitize_name(input_name))
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
logger.info(f'SEARCH: Setting input_directory to {input_directory}')
tordir = True
elif input_name and os.path.isdir(input_directory):
for file in os.listdir(input_directory):
if os.path.splitext(file)[0] in [input_name, sanitize_name(input_name)]:
logger.info('SEARCH: Found torrent file {0} in input directory directory {1}'.format(file, input_directory))
logger.info(f'SEARCH: Found torrent file {file} in input directory {input_directory}')
input_directory = os.path.join(input_directory, file)
logger.info('SEARCH: Setting input_directory to {0}'.format(input_directory))
logger.info(f'SEARCH: Setting input_directory to {input_directory}')
input_name = file
tordir = True
break
@ -156,8 +156,7 @@ def category_search(input_directory, input_name, input_category, root, categorie
index = pathlist.index(input_category)
if index + 1 < len(pathlist):
tordir = True
logger.info('SEARCH: Found a unique directory {0} in the category directory'.format
(pathlist[index + 1]))
logger.info(f'SEARCH: Found a unique directory {pathlist[index + 1]} in the category directory')
if not input_name:
input_name = pathlist[index + 1]
except ValueError:
@ -165,7 +164,7 @@ def category_search(input_directory, input_name, input_category, root, categorie
if input_name and not tordir:
if input_name in pathlist or sanitize_name(input_name) in pathlist:
logger.info('SEARCH: Found torrent directory {0} in the directory structure'.format(input_name))
logger.info(f'SEARCH: Found torrent directory {input_name} in the directory structure')
tordir = True
else:
root = 1


@ -17,9 +17,9 @@ except ImportError:
def copy_link(src, target_link, use_link):
logger.info('MEDIAFILE: [{0}]'.format(os.path.basename(target_link)), 'COPYLINK')
logger.info('SOURCE FOLDER: [{0}]'.format(os.path.dirname(src)), 'COPYLINK')
logger.info('TARGET FOLDER: [{0}]'.format(os.path.dirname(target_link)), 'COPYLINK')
logger.info(f'MEDIAFILE: [{os.path.basename(target_link)}]', 'COPYLINK')
logger.info(f'SOURCE FOLDER: [{os.path.dirname(src)}]', 'COPYLINK')
logger.info(f'TARGET FOLDER: [{os.path.dirname(target_link)}]', 'COPYLINK')
if src != target_link and os.path.exists(target_link):
logger.info('MEDIAFILE already exists in the TARGET folder, skipping ...', 'COPYLINK')
@ -59,7 +59,7 @@ def copy_link(src, target_link, use_link):
shutil.move(src, target_link)
return True
except Exception as e:
logger.warning('Error: {0}, copying instead ... '.format(e), 'COPYLINK')
logger.warning(f'Error: {e}, copying instead ... ', 'COPYLINK')
logger.info('Copying SOURCE MEDIAFILE -> TARGET FOLDER', 'COPYLINK')
shutil.copy(src, target_link)
@ -78,10 +78,10 @@ def replace_links(link, max_depth=10):
link_depth = attempt
if not link_depth:
logger.debug('{0} is not a link'.format(link))
logger.debug(f'{link} is not a link')
elif link_depth > max_depth or (link_depth == max_depth and islink(target)):
logger.warning('Exceeded maximum depth {0} while following link {1}'.format(max_depth, link))
logger.warning(f'Exceeded maximum depth {max_depth} while following link {link}')
else:
logger.info('Changing sym-link: {0} to point directly to file: {1}'.format(link, target), 'COPYLINK')
logger.info(f'Changing sym-link: {link} to point directly to file: {target}', 'COPYLINK')
os.unlink(link)
linktastic.symlink(target, link)


@ -30,7 +30,7 @@ def wake_on_lan(ethernet_address):
connection.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
connection.sendto(magic_packet, ('<broadcast>', 9))
logger.info('WakeOnLan sent for mac: {0}'.format(ethernet_address))
logger.info(f'WakeOnLan sent for mac: {ethernet_address}')
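The `magic_packet` sent above follows the standard Wake-on-LAN layout: six `0xFF` bytes followed by the target MAC repeated sixteen times, broadcast to UDP port 9. A sketch of constructing such a packet (the helper name is illustrative; the repository's own construction is outside this hunk):

```python
def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 x 0xFF followed by the
    6-byte MAC address repeated 16 times (102 bytes total)."""
    raw = bytes.fromhex(mac.replace(':', '').replace('-', ''))
    if len(raw) != 6:
        raise ValueError(f'invalid mac: {mac}')
    return b'\xff' * 6 + raw * 16

pkt = magic_packet('00:11:22:33:44:55')
assert len(pkt) == 102
assert pkt[:6] == b'\xff' * 6
```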
def test_connection(host, port):
@ -38,7 +38,7 @@ def test_connection(host, port):
address = host, port
try:
socket.create_connection(address)
except socket.error:
except OSError:
return 'Down'
else:
return 'Up'
@ -54,9 +54,9 @@ def wake_up():
logger.info('Trying to wake On lan.')
for attempt in range(0, max_attempts):
logger.info('Attempt {0} of {1}'.format(attempt + 1, max_attempts, mac))
logger.info(f'Attempt {attempt + 1} of {max_attempts}')
if test_connection(host, port) == 'Up':
logger.info('System with mac: {0} has been woken.'.format(mac))
logger.info(f'System with mac: {mac} has been woken.')
break
wake_on_lan(mac)
time.sleep(20)
@ -69,19 +69,19 @@ def wake_up():
def server_responding(base_url):
logger.debug('Attempting to connect to server at {0}'.format(base_url), 'SERVER')
logger.debug(f'Attempting to connect to server at {base_url}', 'SERVER')
try:
requests.get(base_url, timeout=(60, 120), verify=False)
except (requests.ConnectionError, requests.exceptions.Timeout):
logger.error('Server failed to respond at {0}'.format(base_url), 'SERVER')
logger.error(f'Server failed to respond at {base_url}', 'SERVER')
return False
else:
logger.debug('Server responded at {0}'.format(base_url), 'SERVER')
logger.debug(f'Server responded at {base_url}', 'SERVER')
return True
def find_download(client_agent, download_id):
logger.debug('Searching for Download on {0} ...'.format(client_agent))
logger.debug(f'Searching for Download on {client_agent} ...')
if client_agent == 'utorrent':
torrents = core.TORRENT_CLASS.list()[1]['torrents']
for torrent in torrents:
@ -102,9 +102,9 @@ def find_download(client_agent, download_id):
return True
if client_agent == 'sabnzbd':
if 'http' in core.SABNZBD_HOST:
base_url = '{0}:{1}/api'.format(core.SABNZBD_HOST, core.SABNZBD_PORT)
base_url = f'{core.SABNZBD_HOST}:{core.SABNZBD_PORT}/api'
else:
base_url = 'http://{0}:{1}/api'.format(core.SABNZBD_HOST, core.SABNZBD_PORT)
base_url = f'http://{core.SABNZBD_HOST}:{core.SABNZBD_PORT}/api'
url = base_url
params = {
'apikey': core.SABNZBD_APIKEY,
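The SABnzbd branch above only prepends `http://` when the configured host does not already carry a scheme. That guard can be sketched as (the helper name is hypothetical):

```python
def sabnzbd_base_url(host, port):
    """Build the SABnzbd API base url, prepending http:// only when
    the configured host lacks a scheme, as find_download does."""
    if 'http' in host:
        return f'{host}:{port}/api'
    return f'http://{host}:{port}/api'

assert sabnzbd_base_url('localhost', 8080) == 'http://localhost:8080/api'
assert sabnzbd_base_url('https://nas.local', 9090) == 'https://nas.local:9090/api'
```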


@ -84,10 +84,11 @@ def parse_synods(args):
input_hash = os.getenv('TR_TORRENT_HASH')
if not input_name: # No info passed. Assume manual download.
return input_directory, input_name, input_category, input_hash, input_id
input_id = 'dbid_{0}'.format(os.getenv('TR_TORRENT_ID'))
torrent_id = os.getenv('TR_TORRENT_ID')
input_id = f'dbid_{torrent_id}'
#res = core.TORRENT_CLASS.tasks_list(additional_param='detail')
res = core.TORRENT_CLASS.tasks_info(input_id, additional_param='detail')
logger.debug('result from syno {0}'.format(res))
logger.debug(f'result from syno {res}')
if res['success']:
try:
tasks = res['data']['tasks']


@ -28,11 +28,11 @@ def onerror(func, path, exc_info):
def remove_dir(dir_name):
logger.info('Deleting {0}'.format(dir_name))
logger.info(f'Deleting {dir_name}')
try:
shutil.rmtree(dir_name, onerror=onerror)
except Exception:
logger.error('Unable to delete folder {0}'.format(dir_name))
logger.error(f'Unable to delete folder {dir_name}')
def make_dir(path):
@ -76,7 +76,7 @@ def remove_empty_folders(path, remove_root=True):
return
# remove empty subfolders
logger.debug('Checking for empty folders in:{0}'.format(path))
logger.debug(f'Checking for empty folders in:{path}')
files = os.listdir(path)
if len(files):
for f in files:
@ -87,7 +87,7 @@ def remove_empty_folders(path, remove_root=True):
# if folder empty, delete it
files = os.listdir(path)
if len(files) == 0 and remove_root:
logger.debug('Removing empty folder:{}'.format(path))
logger.debug(f'Removing empty folder:{path}')
os.rmdir(path)
@ -102,11 +102,11 @@ def remove_read_only(filename):
try:
os.chmod(filename, stat.S_IWRITE)
except Exception:
logger.warning('Cannot change permissions of {file}'.format(file=filename), logger.WARNING)
logger.warning(f'Cannot change permissions of {filename}', logger.WARNING)
def flatten_dir(destination, files):
logger.info('FLATTEN: Flattening directory: {0}'.format(destination))
logger.info(f'FLATTEN: Flattening directory: {destination}')
for outputFile in files:
dir_path = os.path.dirname(outputFile)
file_name = os.path.basename(outputFile)
@ -119,37 +119,37 @@ def flatten_dir(destination, files):
try:
shutil.move(outputFile, target)
except Exception:
logger.error('Could not flatten {0}'.format(outputFile), 'FLATTEN')
logger.error(f'Could not flatten {outputFile}', 'FLATTEN')
remove_empty_folders(destination) # Cleanup empty directories
def clean_directory(path, files):
if not os.path.exists(path):
logger.info('Directory {0} has been processed and removed ...'.format(path), 'CLEANDIR')
logger.info(f'Directory {path} has been processed and removed ...', 'CLEANDIR')
return
if core.FORCE_CLEAN and not core.FAILED:
logger.info('Doing Forceful Clean of {0}'.format(path), 'CLEANDIR')
logger.info(f'Doing Forceful Clean of {path}', 'CLEANDIR')
remove_dir(path)
return
if files:
logger.info(
'Directory {0} still contains {1} unprocessed file(s), skipping ...'.format(path, len(files)),
f'Directory {path} still contains {len(files)} unprocessed file(s), skipping ...',
'CLEANDIRS',
)
return
logger.info('Directory {0} has been processed, removing ...'.format(path), 'CLEANDIRS')
logger.info(f'Directory {path} has been processed, removing ...', 'CLEANDIRS')
try:
shutil.rmtree(path, onerror=onerror)
except Exception:
logger.error('Unable to delete directory {0}'.format(path))
logger.error(f'Unable to delete directory {path}')
def rchmod(path, mod):
logger.log('Changing file mode of {0} to {1}'.format(path, oct(mod)))
logger.log(f'Changing file mode of {path} to {oct(mod)}')
os.chmod(path, mod)
if not os.path.isdir(path):
return # Skip files


@ -1,4 +1,3 @@
import os
import socket
import subprocess
@ -44,10 +43,10 @@ class PosixProcess:
def alreadyrunning(self):
try:
self.lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
self.lock_socket.bind('\0{path}'.format(path=self.pidpath))
self.lock_socket.bind(f'\0{self.pidpath}')
self.lasterror = False
return self.lasterror
except socket.error as e:
except OSError as e:
if 'Address already in use' in str(e):
self.lasterror = True
return self.lasterror
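The `bind(f'\0{self.pidpath}')` call above uses the Linux-only abstract socket namespace (a name prefixed with a NUL byte) as a single-instance lock: the kernel releases it automatically when the process exits, so no stale lock file is left behind. A minimal Linux-only sketch of that trick:

```python
import socket

def try_acquire_lock(name):
    """Attempt a single-instance lock via a Linux abstract-namespace
    socket, as PosixProcess.alreadyrunning does. Returns the bound
    socket on success (keep a reference!), or None if already held."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        sock.bind(f'\0{name}')  # leading NUL = abstract namespace
    except OSError:  # EADDRINUSE: another instance holds the lock
        sock.close()
        return None
    return sock
```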
@ -56,7 +55,7 @@ class PosixProcess:
if os.path.exists(self.pidpath):
# Make sure it is not a 'stale' pidFile
try:
pid = int(open(self.pidpath, 'r').read().strip())
pid = int(open(self.pidpath).read().strip())
except Exception:
pid = None
# Check list of running pids, if not running it is stale so overwrite
@ -107,7 +106,7 @@ def restart():
if popen_list:
popen_list += SYS_ARGV
logger.log('Restarting nzbToMedia with {args}'.format(args=popen_list))
logger.log(f'Restarting nzbToMedia with {popen_list}')
logger.close()
p = subprocess.Popen(popen_list, cwd=os.getcwd())
p.wait()


@ -64,7 +64,7 @@ class CheckVersion:
logger.log('Version checking is disabled, not checking for the newest version')
return False
logger.log('Checking if {install} needs an update'.format(install=self.install_type))
logger.log(f'Checking if {self.install_type} needs an update')
if not self.updater.need_update():
core.NEWEST_VERSION_STRING = None
logger.log('No update needed')
@ -111,7 +111,7 @@ class GitUpdateManager(UpdateManager):
test_cmd = 'version'
if core.GIT_PATH:
main_git = '"{git}"'.format(git=core.GIT_PATH)
main_git = f'"{core.GIT_PATH}"'
else:
main_git = 'git'
@ -120,10 +120,10 @@ class GitUpdateManager(UpdateManager):
output, err, exit_status = self._run_git(main_git, test_cmd)
if exit_status == 0:
logger.log('Using: {git}'.format(git=main_git), logger.DEBUG)
logger.log(f'Using: {main_git}', logger.DEBUG)
return main_git
else:
logger.log('Not using: {git}'.format(git=main_git), logger.DEBUG)
logger.log(f'Not using: {main_git}', logger.DEBUG)
# trying alternatives
@ -146,10 +146,10 @@ class GitUpdateManager(UpdateManager):
output, err, exit_status = self._run_git(cur_git, test_cmd)
if exit_status == 0:
logger.log('Using: {git}'.format(git=cur_git), logger.DEBUG)
logger.log(f'Using: {cur_git}', logger.DEBUG)
return cur_git
else:
logger.log('Not using: {git}'.format(git=cur_git), logger.DEBUG)
logger.log(f'Not using: {cur_git}', logger.DEBUG)
# Still haven't found a working git
logger.debug('Unable to find your git executable - '
@ -168,7 +168,7 @@ class GitUpdateManager(UpdateManager):
exit_status = 1
return output, err, exit_status
cmd = '{git} {args}'.format(git=git_path, args=args)
cmd = f'{git_path} {args}'
try:
logger.log('Executing {cmd} with your shell in {directory}'.format
@ -183,15 +183,15 @@ class GitUpdateManager(UpdateManager):
if output:
output = output.strip()
if core.LOG_GIT:
logger.log('git output: {output}'.format(output=output), logger.DEBUG)
logger.log(f'git output: {output}', logger.DEBUG)
except OSError:
logger.log('Command {cmd} didn\'t work'.format(cmd=cmd))
logger.log(f'Command {cmd} didn\'t work')
exit_status = 1
exit_status = 128 if ('fatal:' in output) or err else exit_status
if exit_status == 0:
logger.log('{cmd} : returned successful'.format(cmd=cmd), logger.DEBUG)
logger.log(f'{cmd} : returned successful', logger.DEBUG)
exit_status = 0
elif core.LOG_GIT and exit_status in (1, 128):
logger.log('{cmd} returned : {output}'.format
@ -310,7 +310,7 @@ class GitUpdateManager(UpdateManager):
try:
self._check_github_for_update()
except Exception as error:
logger.log('Unable to contact github, can\'t check for update: {msg!r}'.format(msg=error), logger.ERROR)
logger.log(f'Unable to contact github, can\'t check for update: {error!r}', logger.ERROR)
return False
if self._num_commits_behind > 0:
@ -325,7 +325,7 @@ class GitUpdateManager(UpdateManager):
Calls git pull origin <branch> in order to update Sick Beard.
Returns a bool depending on the call's success.
"""
output, err, exit_status = self._run_git(self._git_path, 'pull origin {branch}'.format(branch=self.branch)) # @UnusedVariable
output, err, exit_status = self._run_git(self._git_path, f'pull origin {self.branch}') # @UnusedVariable
if exit_status == 0:
return True
@ -352,10 +352,10 @@ class SourceUpdateManager(UpdateManager):
return
try:
with open(version_file, 'r') as fp:
with open(version_file) as fp:
self._cur_commit_hash = fp.read().strip(' \n\r')
except EnvironmentError as error:
logger.log('Unable to open \'version.txt\': {msg}'.format(msg=error), logger.DEBUG)
except OSError as error:
logger.log(f'Unable to open \'version.txt\': {error}', logger.DEBUG)
if not self._cur_commit_hash:
self._cur_commit_hash = None
@ -369,7 +369,7 @@ class SourceUpdateManager(UpdateManager):
try:
self._check_github_for_update()
except Exception as error:
logger.log('Unable to contact github, can\'t check for update: {msg!r}'.format(msg=error), logger.ERROR)
logger.log(f'Unable to contact github, can\'t check for update: {error!r}', logger.ERROR)
return False
if not self._cur_commit_hash or self._num_commits_behind > 0:
@ -444,14 +444,14 @@ class SourceUpdateManager(UpdateManager):
sb_update_dir = os.path.join(core.APP_ROOT, 'sb-update')
if os.path.isdir(sb_update_dir):
logger.log('Clearing out update folder {dir} before extracting'.format(dir=sb_update_dir))
logger.log(f'Clearing out update folder {sb_update_dir} before extracting')
shutil.rmtree(sb_update_dir)
logger.log('Creating update folder {dir} before extracting'.format(dir=sb_update_dir))
logger.log(f'Creating update folder {sb_update_dir} before extracting')
os.makedirs(sb_update_dir)
# retrieve file
logger.log('Downloading update from {url!r}'.format(url=tar_download_url))
logger.log(f'Downloading update from {tar_download_url!r}')
tar_download_path = os.path.join(sb_update_dir, 'nzbtomedia-update.tar')
urlretrieve(tar_download_url, tar_download_path)
@ -466,20 +466,20 @@ class SourceUpdateManager(UpdateManager):
return False
# extract to sb-update dir
logger.log('Extracting file {path}'.format(path=tar_download_path))
logger.log(f'Extracting file {tar_download_path}')
tar = tarfile.open(tar_download_path)
tar.extractall(sb_update_dir)
tar.close()
# delete .tar.gz
logger.log('Deleting file {path}'.format(path=tar_download_path))
logger.log(f'Deleting file {tar_download_path}')
os.remove(tar_download_path)
# find update dir name
update_dir_contents = [x for x in os.listdir(sb_update_dir) if
os.path.isdir(os.path.join(sb_update_dir, x))]
if len(update_dir_contents) != 1:
logger.log('Invalid update data, update failed: {0}'.format(update_dir_contents), logger.ERROR)
logger.log(f'Invalid update data, update failed: {update_dir_contents}', logger.ERROR)
return False
content_dir = os.path.join(sb_update_dir, update_dir_contents[0])
@ -514,7 +514,7 @@ class SourceUpdateManager(UpdateManager):
try:
with open(version_path, 'w') as ver_file:
ver_file.write(self._newest_commit_hash)
except EnvironmentError as error:
except OSError as error:
logger.log('Unable to write version file, update not complete: {msg}'.format
(msg=error), logger.ERROR)
return False
@ -522,7 +522,7 @@ class SourceUpdateManager(UpdateManager):
except Exception as error:
logger.log('Error while trying to update: {msg}'.format
(msg=error), logger.ERROR)
logger.log('Traceback: {error}'.format(error=traceback.format_exc()), logger.DEBUG)
logger.log(f'Traceback: {traceback.format_exc()}', logger.DEBUG)
return False
return True


@ -1,11 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import site
import sys


@ -1,11 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import shutil
import os
import time
@ -23,5 +15,5 @@ if __name__ == '__main__':
else:
print('Removed', directory)
time.sleep(10)
requirements = 'requirements-{name}.txt'.format(name=lib)
requirements = f'requirements-{lib}.txt'
libs.util.install_requirements(requirements, file=True, path=directory)


@ -1,11 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import libs
__all__ = ['completed']


@ -1,11 +1,3 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import subprocess
import sys
import os


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -117,8 +116,8 @@
# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness=nice,4' this will be passed as 'nice 4' (Safer).
#niceness=nice,-n0
@ -262,12 +261,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -68,8 +67,8 @@
# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness=nice,4' this will be passed as 'nice 4' (Safer).
#niceness=nice,-n0
@ -102,12 +101,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -82,8 +81,8 @@
# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness=nice,4' this will be passed as 'nice 4' (Safer).
#niceness=nice,-n0
@ -124,12 +123,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -76,8 +75,8 @@
# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness=nice,4' this will be passed as 'nice 4' (Safer).
#niceness=nice,-n0
@ -110,12 +109,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -97,8 +96,8 @@
# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness=nice,4' this will be passed as 'nice 4' (Safer).
#niceness=nice,-n0
@ -239,12 +238,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -715,12 +714,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import os
import sys
@ -743,11 +736,11 @@ def main(args, section=None):
core.initialize(section)
logger.info('#########################################################')
logger.info('## ..::[{0}]::.. ##'.format(os.path.basename(__file__)))
logger.info(f'## ..::[{os.path.basename(__file__)}]::.. ##')
logger.info('#########################################################')
# debug command line options
logger.debug('Options passed into nzbToMedia: {0}'.format(args))
logger.debug(f'Options passed into nzbToMedia: {args}')
# Post-Processing Result
result = ProcessResult(
@ -774,14 +767,14 @@ def main(args, section=None):
manual.process()
if result.status_code == 0:
logger.info('The {0} script completed successfully.'.format(args[0]))
logger.info(f'The {args[0]} script completed successfully.')
if result.message:
print(result.message + '!')
if 'NZBOP_SCRIPTDIR' in os.environ: # return code for nzbget v11
del core.MYAPP
return core.NZBGET_POSTPROCESS_SUCCESS
else:
logger.error('A problem was reported in the {0} script.'.format(args[0]))
logger.error(f'A problem was reported in the {args[0]} script.')
if result.message:
print(result.message + '!')
if 'NZBOP_SCRIPTDIR' in os.environ: # return code for nzbget v11


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -73,8 +72,8 @@
# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer e.g 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma separated list e.g. 'niceness=nice,4' this will be passed as 'nice 4' (Safer).
#niceness=nice,-n0
@ -115,12 +114,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys


@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@ -252,12 +251,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys

@@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@@ -257,12 +256,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys

@@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@@ -266,12 +265,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys

@@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@@ -263,12 +262,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys

@@ -1,5 +1,4 @@
#!/usr/bin/env python
# coding=utf-8
#
##############################################################################
### NZBGET POST-PROCESSING SCRIPT ###
@@ -112,8 +111,8 @@
# Niceness for external tasks Extractor and Transcoder.
#
# Set the Niceness value for the nice command. These range from -20 (most favorable to the process) to 19 (least favorable to the process).
# If entering an integer, e.g. 'niceness=4', this is added to the nice command and passed as 'nice -n4' (Default).
# If entering a comma-separated list, e.g. 'niceness=nice,4', this will be passed as 'nice 4' (Safer).
#niceness=nice,-n0
@@ -257,12 +256,6 @@
### NZBGET POST-PROCESSING SCRIPT ###
##############################################################################
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import sys

@@ -1,11 +1,4 @@
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import io
import os.path
@@ -14,7 +7,7 @@ from setuptools import setup
def read(*names, **kwargs):
with io.open(
with open(
os.path.join(os.path.dirname(__file__), *names),
encoding=kwargs.get('encoding', 'utf8'),
) as fh:
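The `io.open` → `open` change in the hunk above is a pure simplification: on Python 3 the builtin `open` is an alias of `io.open` and accepts the same `encoding` keyword. A small demonstration (the temp-file round trip is purely illustrative):

```python
import io
import os
import tempfile

# The builtin open is the same object as io.open on Python 3.
assert open is io.open

# It accepts the encoding keyword that io.open provided:
with tempfile.NamedTemporaryFile('w', encoding='utf8', suffix='.txt',
                                 delete=False) as tmp:
    tmp.write('héllo')
    path = tmp.name

with open(path, encoding='utf8') as fh:
    assert fh.read() == 'héllo'

os.remove(path)  # clean up the temp file
```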

@@ -1,8 +1 @@
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
__author__ = 'Justin'

@@ -1,10 +1,4 @@
#! /usr/bin/env python
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import core

@@ -1,10 +1,4 @@
#! /usr/bin/env python
from __future__ import (
absolute_import,
division,
print_function,
unicode_literals,
)
import core
from core import transcoder