Fix issue with RTX GPUs and TensorFlow

An issue affecting at least the RTX 2070 and 2080 (and possibly other RTX cards) requires TensorFlow's memory-growth option (allow_growth) to be enabled before TensorFlow will work.

I don't know enough about the impact of this change to say whether it ought to be made optional, but for RTX owners this simple change fixes the TensorFlow errors thrown when generating models.
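For context, the allow_growth flag passed to nnlib's DeviceConfig corresponds to TensorFlow 1.x's per-session GPU option of the same name, which makes the runtime allocate GPU memory on demand instead of reserving the whole card up front. Below is a minimal sketch of that underlying setting; the session wiring shown is illustrative only, an assumption for this example rather than DeepFaceLab's actual code:

```python
import tensorflow as tf  # TensorFlow 1.x

# Allocate GPU memory incrementally as the model needs it,
# rather than reserving the entire device at session creation.
# On some RTX cards the up-front allocation fails, so growth
# must be enabled for TensorFlow to initialize at all.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    # ... build and run the model under this session ...
    pass
```

The trade-off is that on-demand allocation can fragment GPU memory over a long run, which is presumably why the flag was configurable in the first place.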
Josh Johnson 2019-07-31 22:44:50 +01:00 committed by GitHub
commit c059544292

@@ -45,7 +45,7 @@ class ModelBase(object):
             device_args['force_gpu_idx'] = io.input_int("Which GPU idx to choose? ( skip: best GPU ) : ", -1, [ x[0] for x in idxs_names_list] )
         self.device_args = device_args
-        self.device_config = nnlib.DeviceConfig(allow_growth=False, **self.device_args)
+        self.device_config = nnlib.DeviceConfig(allow_growth=True, **self.device_args)
         io.log_info ("Loading model...")