Fix segfault issue

I'd been using dlpack to copy Triton tensors into torch tensors, since it was
advertised as a zero-copy transfer. Turns out that only worked on my laptop and
segfaulted on other machines; I don't know why yet. For now, I'm just copying
the tensors explicitly via triton<->numpy<->torch.
That works on the VM where the earlier code was segfaulting.
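As a minimal illustration of the tradeoff described above — using numpy's own DLPack support rather than the actual proxy code, so the names here are purely illustrative — a zero-copy view shares memory with its producer, while an explicit copy does not:

```python
import numpy as np

# A small int32 buffer standing in for a Triton output tensor.
src = np.arange(6, dtype=np.int32).reshape(2, 3)

# Zero-copy: np.from_dlpack shares memory with the producer, so the
# consumer is only valid while the producer's buffer stays alive --
# the kind of coupling that can segfault when it breaks.
view = np.from_dlpack(src)

# Explicit copy: independent memory, safe to hand across library
# boundaries (e.g. triton -> numpy -> torch, where torch.tensor(arr)
# also copies).
copied = np.array(src, copy=True)

print(np.shares_memory(src, view))    # the zero-copy view shares memory
print(np.shares_memory(src, copied))  # the explicit copy does not
```

The explicit-copy path costs an extra memcpy per tensor, but decouples the lifetimes of the two buffers, which is the point of this fix.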

Signed-off-by: Parth <thakkarparth007@gmail.com>
Parth 2022-11-19 18:32:50 +00:00
commit 7ea388fe19
4 changed files with 13 additions and 9 deletions


@@ -122,12 +122,12 @@ output [
 {
 name: "output_ids"
 data_type: TYPE_INT32
-dims: [ -1, -1 ]
+dims: [ -1, -1, -1 ]
 },
 {
 name: "sequence_length"
 data_type: TYPE_INT32
-dims: [ -1 ]
+dims: [ -1, -1 ]
 } #,
 # Following is currently unsupported, but should be supported in the future
 # {
@@ -177,4 +177,4 @@ parameters {
value: {
string_value: "${use_auto_device_map}" # e.g. "0" or "1"
}
}
}