Introduction
In 2022, the PyTorch team and Apple's Metal engineering team announced support for GPU-accelerated PyTorch operations on Mac. Before that, PyTorch operations on Mac ran only on the CPU. With the PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training as well. Here we will run a simple experiment to compare tensor operations on the CPU against the GPU-backed MPS (Metal Performance Shaders) device on an M1 Mac.
Initial Setup
To run the experiment, we need to install the libraries below.
!pip install torch torchvision torchaudio
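Once PyTorch is installed, it is worth confirming that the MPS backend is actually available before sending anything to the GPU. A minimal check using the standard torch.backends.mps API:

import torch

print(torch.backends.mps.is_built())      # True if this PyTorch build includes MPS support
print(torch.backends.mps.is_available())  # True if an MPS-capable GPU is present

If both print True, the mps device used later in this experiment will work.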
Experiment
Once the libraries are installed, we can start the experiment. We will create simple PyTorch tensors, move them to the cpu and mps devices one at a time, and measure the time taken to run an element-wise multiplication.
Let's start by writing a small helper that creates some tensors and moves them to a given device, then use it to place the tensors on the cpu device.

import torch

def create_torch_tensors(device):
    x = torch.rand((10000, 10000), dtype=torch.float32)
    y = torch.rand((10000, 10000), dtype=torch.float32)
    x = x.to(device)
    y = y.to(device)
    return x, y

device = torch.device("cpu")
x, y = create_torch_tensors(device)
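As a quick sanity check, we can confirm where the tensors ended up by inspecting their device attribute:

print(x.device, y.device)  # expect "cpu cpu" here, and "mps:0 mps:0" after the move to mps below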
Multiplying the tensors on the cpu device:

%%timeit
x * y
23.2 ms ± 325 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
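The %%timeit magic above only works inside IPython/Jupyter. If you are following along in a plain Python script, a rough equivalent is sketched below using the standard timeit module, assuming the x and y tensors from above are in scope:

import timeit

# average over 100 element-wise multiplications
mean_s = timeit.timeit(lambda: x * y, number=100) / 100
print(f"{mean_s * 1000:.2f} ms per loop")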
Now let's run the same operation using the Mac's GPU via the mps device and see how much it improves. Note that the tensors were created with dtype=torch.float32; the MPS backend does not support float64.

device = torch.device("mps")
x, y = create_torch_tensors(device)
Multiplying the tensors on the mps device:

%%timeit
x * y
6.78 ms ± 60.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
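One caveat worth flagging: work on the MPS backend is dispatched to the GPU asynchronously, so a hand-rolled benchmark should wait for the GPU to finish before stopping the clock. Newer PyTorch releases expose torch.mps.synchronize() for this; below is a sketch assuming that API is available in your version:

import time
import torch

def timed_mps_mul(x, y, iters=100):
    torch.mps.synchronize()  # flush any pending GPU work before starting the clock
    start = time.perf_counter()
    for _ in range(iters):
        _ = x * y
    torch.mps.synchronize()  # wait for all queued MPS kernels to finish
    return (time.perf_counter() - start) / iters * 1000  # mean ms per loop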
We can see a significant speedup when running the tensor operation on the GPU: roughly 23.2 ms per loop on the CPU versus 6.78 ms on MPS, about a 3.4x improvement.