Fix segfault issue

I'd been using DLPack to copy Triton tensors to torch tensors because it
was advertised to perform zero-copy transfers. It turns out that only
worked on my laptop and segfaulted on other machines; I don't know why.
For now, I'm copying the tensors via triton <-> numpy <-> torch instead.
That works on the VM where the earlier code was segfaulting.
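
Roughly, the difference between the two conversion paths looks like the
sketch below (not the exact code in this commit; the helper name
triton_to_torch and the use of get_input_tensor_by_name are assumptions
for illustration):

    import torch
    import triton_python_backend_utils as pb_utils

    def triton_to_torch(request, name):
        # Pull the named input tensor off the Triton request.
        in_tensor = pb_utils.get_input_tensor_by_name(request, name)

        # Old path: zero-copy hand-off via DLPack. Worked on my laptop,
        # segfaulted on other machines:
        # return torch.utils.dlpack.from_dlpack(in_tensor.to_dlpack())

        # New path: go through numpy. This copies the data
        # (triton <-> numpy <-> torch) but works on the VM as well.
        return torch.from_numpy(in_tensor.as_numpy())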

Signed-off-by: Parth <thakkarparth007@gmail.com>

@@ -161,4 +161,3 @@ def test_python_backend(n_gpus: int):
# killing docker-compose process doesn't bring down the containers.
# explicitly stop the containers:
subprocess.run(["docker-compose", "-f", compose_file, "down"], cwd=curdir, check=True, env=load_test_env())