Fix issue with RTX GPU and TensorFlow (#322)

An issue affecting at least the RTX 2070 and 2080 cards (and possibly other RTX cards) requires GPU memory growth (`allow_growth`) to be enabled for TensorFlow to work.

I don't know enough about the impact of this change to say whether it ought to be made optional, but for RTX owners this simple change fixes TensorFlow errors when generating models.
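For context, a minimal sketch of what an `allow_growth=True` flag typically maps to in the TensorFlow 1.x session API (the exact plumbing inside `nnlib.DeviceConfig` is not shown here, so this is an assumption about its effect, not the project's code):

```python
import tensorflow as tf

# By default, TensorFlow 1.x pre-allocates most of the GPU's memory up front.
# On some RTX cards this pre-allocation fails at model creation time, so
# enabling memory growth makes TF allocate GPU memory incrementally instead.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow allocation on demand
session = tf.Session(config=config)
```

Because memory is then claimed on demand rather than reserved up front, the main trade-off is potential memory fragmentation, which is why making the flag optional could be worth considering.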
Josh Johnson 2019-08-02 13:40:41 +01:00 committed by iperov
parent 582c974851
commit e2bc65d5f0


@@ -45,7 +45,7 @@ class ModelBase(object):
             device_args['force_gpu_idx'] = io.input_int("Which GPU idx to choose? ( skip: best GPU ) : ", -1, [ x[0] for x in idxs_names_list] )
         self.device_args = device_args
-        self.device_config = nnlib.DeviceConfig(allow_growth=False, **self.device_args)
+        self.device_config = nnlib.DeviceConfig(allow_growth=True, **self.device_args)
         io.log_info ("Loading model...")