100 Numpy Exercises solved in PyTorch¶
PyTorch solutions for the exercises in numpy-100.
Symbols:
- 🔧 Refactor: Rewrite the problem to ensure compatibility and clarity.
- 🚫 Exclude: Omit the problem if it is not suitable for PyTorch implementation.
- 😢 Skip: Skip due to high internal complexity or impracticality.
1. Import the torch package (★☆☆)¶
In [1]:
import torch
2. Print the torch version and the configuration (★☆☆)¶
In [2]:
print(torch.__version__)
print(f"CUDA Availability: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA Version: {torch.version.cuda}")
    print(f"CUDA Device Count: {torch.cuda.device_count()}")
    for i in range(torch.cuda.device_count()):
        print(f"Device {i}: {torch.cuda.get_device_name(i)}")
2.5.1+cu121 CUDA Availability: True CUDA Version: 12.1 CUDA Device Count: 1 Device 0: NVIDIA GeForce RTX 4060 Ti
3. Create a null vector of size 10 (★☆☆)¶
In [3]:
z = torch.zeros(10)
z
Out[3]:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
In [4]:
z = torch.zeros((10,))
z
Out[4]:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
4. How to find the memory size of any array (★☆☆)¶
In [5]:
z = torch.zeros(10)
z
Out[5]:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
In [6]:
z.dtype, z.element_size()
Out[6]:
(torch.float32, 4)
In [7]:
print("%d bytes" % (z.numel() * z.element_size()))
40 bytes
5. How to get the documentation of the torch add function from the command line? (★☆☆)¶
In [8]:
!python -c "import torch; help(torch.add)"
Help on built-in function add in module torch: add(...) add(input, other, *, alpha=1, out=None) -> Tensor Adds :attr:`other`, scaled by :attr:`alpha`, to :attr:`input`. .. math:: \text{{out}}_i = \text{{input}}_i + \text{{alpha}} \times \text{{other}}_i Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`, :ref:`type promotion <type-promotion-doc>`, and integer, float, and complex inputs. Args: input (Tensor): the input tensor. other (Tensor or Number): the tensor or number to add to :attr:`input`. Keyword arguments: alpha (Number): the multiplier for :attr:`other`. out (Tensor, optional): the output tensor. Examples:: >>> a = torch.randn(4) >>> a tensor([ 0.0202, 1.0985, 1.3506, -0.6056]) >>> torch.add(a, 20) tensor([ 20.0202, 21.0985, 21.3506, 19.3944]) >>> b = torch.randn(4) >>> b tensor([-0.9732, -0.3497, 0.6245, 0.4022]) >>> c = torch.randn(4, 1) >>> c tensor([[ 0.3743], [-1.7724], [-0.5811], [-0.8017]]) >>> torch.add(b, c, alpha=10) tensor([[ 2.7695, 3.3930, 4.3672, 4.1450], [-18.6971, -18.0736, -17.0994, -17.3216], [ -6.7845, -6.1610, -5.1868, -5.4090], [ -8.9902, -8.3667, -7.3925, -7.6147]])
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)¶
In [9]:
z = torch.zeros(10)
z[4] = 1
z
Out[9]:
tensor([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.])
7. Create a vector with values ranging from 10 to 49 (★☆☆)¶
In [10]:
z = torch.arange(10, 50)
z
Out[10]:
tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49])
8. Reverse a vector (first element becomes last) (★☆☆)¶
In [11]:
z = torch.arange(50)
z.flip(dims=[0])
Out[11]:
tensor([49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0])
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)¶
In [12]:
z = torch.arange(0, 9).reshape(3, 3)
z
Out[12]:
tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
10. Find indices of non-zero elements from [1,2,0,0,4,0] (★☆☆)¶
In [13]:
z = torch.tensor([1, 2, 0, 0, 4, 0])
torch.nonzero(z)
Out[13]:
tensor([[0], [1], [4]])
11. Create a 3x3 identity matrix (★☆☆)¶
In [14]:
z = torch.eye(3)
z
Out[14]:
tensor([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
12. Create a 3x3x3 array with random values (★☆☆)¶
In [15]:
z = torch.rand((3, 3, 3))
z
Out[15]:
tensor([[[0.0544, 0.7409, 0.3586], [0.4821, 0.6628, 0.1824], [0.6343, 0.1023, 0.3586]], [[0.4451, 0.0445, 0.0448], [0.3173, 0.4945, 0.6955], [0.3680, 0.5672, 0.6336]], [[0.8069, 0.8924, 0.4566], [0.3470, 0.9187, 0.1749], [0.1230, 0.6413, 0.3714]]])
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)¶
In [16]:
z = torch.rand((10, 10))
z
Out[16]:
tensor([[0.1149, 0.4582, 0.3877, 0.7770, 0.6502, 0.4463, 0.8894, 0.2684, 0.9787, 0.0042], [0.3663, 0.0221, 0.3212, 0.7482, 0.1575, 0.6710, 0.1775, 0.3190, 0.5801, 0.2634], [0.1509, 0.7520, 0.8496, 0.3584, 0.1530, 0.2575, 0.0639, 0.5072, 0.9011, 0.5436], [0.2638, 0.1881, 0.6395, 0.7895, 0.6149, 0.9446, 0.6417, 0.4836, 0.0602, 0.5661], [0.9850, 0.0575, 0.6128, 0.2509, 0.8271, 0.7064, 0.9278, 0.3506, 0.7337, 0.9946], [0.1780, 0.8824, 0.3741, 0.4165, 0.9171, 0.4368, 0.5185, 0.4635, 0.0759, 0.6047], [0.7319, 0.3339, 0.0714, 0.2986, 0.1479, 0.4290, 0.9089, 0.0661, 0.9228, 0.0198], [0.6623, 0.4880, 0.3415, 0.8989, 0.9928, 0.4645, 0.3125, 0.0810, 0.7916, 0.3466], [0.7760, 0.6280, 0.9847, 0.9007, 0.9535, 0.9762, 0.2596, 0.0592, 0.5514, 0.2857], [0.2687, 0.7569, 0.6609, 0.1533, 0.2881, 0.8472, 0.5495, 0.6888, 0.6515, 0.4354]])
In [17]:
z.min(), z.max()
Out[17]:
(tensor(0.0042), tensor(0.9946))
14. Create a random vector of size 30 and find the mean value (★☆☆)¶
In [18]:
z = torch.rand(30)
z.mean()
Out[18]:
tensor(0.5269)
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)¶
In [19]:
z = torch.ones(10, 10)
z[1:-1, 1:-1] = 0
z
Out[19]:
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
16. How to add a border (filled with 0's) around an existing array? (★☆☆)¶
In [20]:
z = torch.ones(5, 5)
z
Out[20]:
tensor([[1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.]])
In [21]:
border_width = 1
torch.nn.functional.pad(z,
    pad=(border_width, border_width, border_width, border_width),
    mode='constant',
    value=0,
)
Out[21]:
tensor([[0., 0., 0., 0., 0., 0., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 0., 0., 0., 0., 0., 0.]])
17. What is the result of the following expression? (★☆☆)¶
0 * torch.nan
torch.nan == torch.nan
torch.inf > torch.nan
torch.nan - torch.nan
torch.nan in set([torch.nan])
0.3 == 3 * 0.1
In [22]:
0 * torch.nan
Out[22]:
nan
In [23]:
torch.nan == torch.nan
Out[23]:
False
In [24]:
torch.inf > torch.nan
Out[24]:
False
In [25]:
torch.nan - torch.nan
Out[25]:
nan
In [26]:
torch.nan in set([torch.nan])
Out[26]:
True
In [27]:
0.3 == 3 * 0.1
Out[27]:
False
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)¶
In [28]:
z = torch.tensor([1, 2, 3, 4])
torch.diag(z, diagonal=-1)
Out[28]:
tensor([[0, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0], [0, 0, 0, 4, 0]])
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)¶
In [29]:
z = torch.zeros((8, 8))
z[1::2, ::2] = 1  # even columns of odd rows (0-indexed)
z[::2, 1::2] = 1  # odd columns of even rows (0-indexed)
z
Out[29]:
tensor([[0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.]])
20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element? (★☆☆)¶
In [30]:
torch.unravel_index(torch.tensor(99), (6, 7, 8))
Out[30]:
(tensor(1), tensor(5), tensor(3))
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)¶
In [31]:
z = torch.tile(torch.tensor([[0, 1], [1, 0]]), (4, 4))
z
Out[31]:
tensor([[0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0]])
22. Normalize a 5x5 random matrix (★☆☆)¶
In [32]:
z = torch.rand((5, 5))
z = (z - torch.mean(z)) / torch.std(z)
z
Out[32]:
tensor([[ 0.1877, 0.3092, 0.6385, -1.4535, 0.2774], [ 0.8624, 0.0979, 0.9551, -2.0317, 0.0086], [-1.8338, -0.1372, -0.6845, -1.4898, 0.7762], [ 1.4238, -0.4690, 0.3789, 0.7241, -1.0796], [ 1.2606, -0.0356, 1.2659, 0.8993, -0.8509]])
23. 🚫Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)¶
We cannot do this in PyTorch.
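PyTorch has no structured or record dtypes, so a custom RGBA dtype cannot be defined. A common workaround (an assumption, not a torch API) is a `uint8` tensor with a trailing channel dimension of 4:

```python
import torch

# Emulate an RGBA "dtype" as four unsigned bytes along the last dimension.
color = torch.tensor([255, 128, 0, 255], dtype=torch.uint8)
r, g, b, a = color  # unpack the four channels
print(color.dtype, r.item(), a.item())
```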
24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)¶
In [33]:
z = torch.matmul(torch.ones((5, 3)), torch.ones((3, 2)))
z
Out[33]:
tensor([[3., 3.], [3., 3.], [3., 3.], [3., 3.], [3., 3.]])
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)¶
In [34]:
z = torch.arange(11)
z[(3 < z) & (z < 8)] *= -1
z
Out[34]:
tensor([ 0, 1, 2, 3, -4, -5, -6, -7, 8, 9, 10])
26. 🚫What is the output of the following script? (★☆☆)¶
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
We cannot do this in PyTorch.
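The puzzle is really about Python name shadowing rather than arrays: `from numpy import *` replaces the built-in `sum`, whose second argument is a start value, with `numpy.sum`, whose second argument is an axis. A sketch of the two behaviors without the wildcard import:

```python
# Built-in sum(iterable, start): -1 is a start value, so 0+1+2+3+4-1 = 9.
print(sum(range(5), -1))

import numpy as np
# numpy.sum(a, axis): -1 means the last axis, so the full sum 10 is returned.
print(np.sum(range(5), -1))
```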
27. Consider an integer vector z, which of these expressions are legal? (★☆☆)¶
z**z
2 << z >> 2
z <- z
1j*z
z/1/1
z<z>z
In [35]:
z = torch.arange(10)
z
Out[35]:
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [36]:
# 1
z ** z
Out[36]:
tensor([ 1, 1, 4, 27, 256, 3125, 46656, 823543, 16777216, 387420489])
In [37]:
# 2
print(2 << z >> 2)
print((2 << z) >> 2)
tensor([ 0, 1, 2, 4, 8, 16, 32, 64, 128, 256]) tensor([ 0, 1, 2, 4, 8, 16, 32, 64, 128, 256])
In [38]:
# 3
print(z <- z)
print(z < (-z))
tensor([False, False, False, False, False, False, False, False, False, False]) tensor([False, False, False, False, False, False, False, False, False, False])
In [39]:
# 3
import dis
dis.dis('z <- z')
0 RESUME 0 1 LOAD_NAME 0 (z) LOAD_NAME 0 (z) UNARY_NEGATIVE COMPARE_OP 2 (<) RETURN_VALUE
In [40]:
# 4
1j*z
Out[40]:
tensor([0.+0.j, 0.+1.j, 0.+2.j, 0.+3.j, 0.+4.j, 0.+5.j, 0.+6.j, 0.+7.j, 0.+8.j, 0.+9.j])
In [41]:
# 5
print(z/1/1)
print((z/1)/1)
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
In [42]:
# 6
try:
    print(z < z > z)
except Exception as e:
    print(e)
Boolean value of Tensor with more than one value is ambiguous
28. What are the result of the following expressions? (★☆☆)¶
torch.tensor(0) / torch.tensor(0)
torch.tensor(0) // torch.tensor(0)
torch.tensor([torch.nan]).to(torch.int).to(torch.float)
In [43]:
torch.tensor(0) / torch.tensor(0)
Out[43]:
tensor(nan)
In [44]:
try:
    torch.tensor(0) // torch.tensor(0)
except Exception as e:
    print(e)
ZeroDivisionError
In [45]:
torch.tensor([torch.nan]).to(torch.int).to(torch.float)
Out[45]:
tensor([-2.1475e+09])
29. How to round away from zero a float array ? (★☆☆)¶
In [46]:
z = torch.randn((10))
z
Out[46]:
tensor([-0.3417, 1.2910, 0.2666, -2.0166, 1.6535, -0.4432, 1.0893, 0.7234, -0.0371, 1.5461])
In [47]:
torch.copysign(torch.ceil(torch.abs(z)), z)
Out[47]:
tensor([-1., 2., 1., -3., 2., -1., 2., 1., -1., 2.])
In [48]:
torch.where(z > 0, torch.ceil(z), torch.floor(z))
Out[48]:
tensor([-1., 2., 1., -3., 2., -1., 2., 1., -1., 2.])
30. How to find common values between two arrays? (★☆☆)¶
In [49]:
z1 = torch.randint(0, 10, (10,))
z2 = torch.randint(0, 10, (10,))
print(f"{z1 = }\n{z2 = }")
z1 = tensor([3, 3, 6, 5, 8, 9, 5, 5, 9, 8]) z2 = tensor([4, 3, 4, 2, 3, 3, 4, 1, 7, 0])
In [50]:
set(z1.tolist()) & set(z2.tolist())
Out[50]:
{3}
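A torch-native alternative avoids the round-trip through Python sets. This is a sketch assuming a PyTorch version (1.10+) in which `torch.isin` is available:

```python
import torch

z1 = torch.tensor([3, 3, 6, 5, 8, 9, 5, 5, 9, 8])
z2 = torch.tensor([4, 3, 4, 2, 3, 3, 4, 1, 7, 0])
# Keep the elements of z1 that also occur in z2, then deduplicate.
common = torch.unique(z1[torch.isin(z1, z2)])
print(common)
```

Unlike the set-based version, this stays on the tensor's device, which matters for large CUDA tensors.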
31. How to ignore all torch warnings (not recommended)? (★☆☆)¶
In [51]:
torch.autograd.detect_anomaly()
/tmp/ipykernel_85296/675420015.py:1: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging. torch.autograd.detect_anomaly()
Out[51]:
<torch.autograd.anomaly_mode.detect_anomaly at 0x7fe2fa8941a0>
In [52]:
import warnings

class IgnoreWarnings:
    def __enter__(self):
        warnings.filterwarnings("ignore")
    def __exit__(self, exc_type, exc_val, exc_tb):
        warnings.resetwarnings()

with IgnoreWarnings():
    torch.autograd.detect_anomaly()
In [53]:
torch.autograd.detect_anomaly()
/tmp/ipykernel_85296/675420015.py:1: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging. torch.autograd.detect_anomaly()
Out[53]:
<torch.autograd.anomaly_mode.detect_anomaly at 0x7fe2f980b9d0>
32. 🔧How to get the square root of a complex value in torch (★☆☆)¶
In [54]:
real = torch.tensor(-1, dtype=torch.float32)
imag = torch.tensor(0, dtype=torch.float32)
x = torch.complex(real, imag)
x
Out[54]:
tensor(-1.+0.j)
In [55]:
torch.sqrt(x)
Out[55]:
tensor(0.+1.j)
33. 🚫How to get the dates of yesterday, today and tomorrow? (★☆☆)¶
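Tensors have no date dtype, so this exercise falls outside PyTorch; the task itself is plain Python. A minimal sketch with the standard `datetime` module:

```python
from datetime import date, timedelta

# Yesterday, today, and tomorrow as date objects.
today = date.today()
yesterday = today - timedelta(days=1)
tomorrow = today + timedelta(days=1)
print(yesterday, today, tomorrow)
```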
34. 🚫How to get all the dates corresponding to the month of July 2016? (★★☆)¶
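Again outside PyTorch's scope, but straightforward in plain Python; a sketch enumerating the 31 days of July 2016:

```python
from datetime import date, timedelta

# Every date in July 2016, built by offsetting from July 1st.
july_2016 = [date(2016, 7, 1) + timedelta(days=i) for i in range(31)]
print(july_2016[0], july_2016[-1])
```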
35. How to compute ((A+B)*(-A/2)) in place (without copy)? (★★☆)¶
In [56]:
A = torch.ones(3) * 1
B = torch.ones(3) * 2
A, B
Out[56]:
(tensor([1., 1., 1.]), tensor([2., 2., 2.]))
In [57]:
torch.add(A, B, out=B)
B
Out[57]:
tensor([3., 3., 3.])
In [58]:
torch.divide(A, 2, out=A)
torch.neg(A, out=A)
A
Out[58]:
tensor([-0.5000, -0.5000, -0.5000])
In [59]:
torch.multiply(B, A, out=A)
Out[59]:
tensor([-1.5000, -1.5000, -1.5000])
36. Extract the integer part of a random array of positive numbers using 4 different methods (★★☆)¶
In [60]:
z = 10 * torch.rand(10)
z
Out[60]:
tensor([0.5108, 7.3979, 8.2650, 5.4840, 8.8788, 0.9747, 4.1578, 7.0270, 7.9722, 0.5862])
In [61]:
# solution1
z - z % 1
Out[61]:
tensor([0., 7., 8., 5., 8., 0., 4., 7., 7., 0.])
In [62]:
# solution2
z // 1
Out[62]:
tensor([0., 7., 8., 5., 8., 0., 4., 7., 7., 0.])
In [63]:
# solution3
z.int()
Out[63]:
tensor([0, 7, 8, 5, 8, 0, 4, 7, 7, 0], dtype=torch.int32)
In [64]:
# solution4
torch.trunc(z)
Out[64]:
tensor([0., 7., 8., 5., 8., 0., 4., 7., 7., 0.])
37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)¶
In [65]:
# solution1
torch.zeros(5, 5) + torch.arange(5)
Out[65]:
tensor([[0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.]])
In [66]:
# solution2
torch.tile(torch.arange(5), dims=(5, 1))
Out[66]:
tensor([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]])
38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)¶
https://stackoverflow.com/questions/55307368/creating-a-torch-tensor-from-a-generator
In [67]:
import numpy as np

def generate():
    for x in range(10):
        yield x

z = torch.from_numpy(np.fromiter(generate(), dtype=float, count=-1))
z
Out[67]:
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], dtype=torch.float64)
39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)¶
In [68]:
torch.linspace(start=0, end=1, steps=12)[1:-1]
Out[68]:
tensor([0.0909, 0.1818, 0.2727, 0.3636, 0.4545, 0.5455, 0.6364, 0.7273, 0.8182, 0.9091])
40. Create a random vector of size 10 and sort it (★★☆)¶
In [69]:
z = torch.rand(10)
z
Out[69]:
tensor([0.3381, 0.1853, 0.7896, 0.6117, 0.1559, 0.6938, 0.4223, 0.4506, 0.4406, 0.0759])
In [70]:
z = z.sort()
z
Out[70]:
torch.return_types.sort( values=tensor([0.0759, 0.1559, 0.1853, 0.3381, 0.4223, 0.4406, 0.4506, 0.6117, 0.6938, 0.7896]), indices=tensor([9, 4, 1, 0, 6, 8, 7, 3, 5, 2]))
🔧41. How to sum a small array faster? (★★☆)¶
Candidates to compare:

- np.sum
- torch.sum
- np.add.reduce
- sum in Python
- for loop and + in Python
In [71]:
import numpy as np

z_torch = torch.arange(10)
z_numpy = np.arange(10)
z_python = list(range(10))
z_torch, z_numpy, z_python
Out[71]:
(tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [72]:
# method 1
%timeit np.sum(z_numpy)
1.62 μs ± 14.3 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [73]:
# method 2
%timeit torch.sum(z_torch)
1.35 μs ± 52.4 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [74]:
# method 3
%timeit np.add.reduce(z_numpy)
851 ns ± 4.84 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [75]:
# method 4
%timeit sum(z_python)
70 ns ± 1.2 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
In [76]:
# method 5
def sum_with_for_loop():
    n = 0
    for i in z_python:
        n += i
    return n

# Note the (): timing the bare name would only measure a name lookup.
%timeit sum_with_for_loop()
42. Consider two random array/tensor A and B, check if they are equal (★★☆)¶
In [77]:
t1 = torch.randint(0, 2, (5,))
t2 = torch.randint(0, 2, (5,))
t1, t2
Out[77]:
(tensor([0, 1, 1, 0, 0]), tensor([0, 0, 0, 0, 0]))
In [78]:
# The behaviour of this function is analogous to `numpy.allclose`
torch.allclose(t1, t2)
Out[78]:
False
In [79]:
# Computes element-wise equality
torch.eq(t1, t2)
Out[79]:
tensor([ True, False, False, True, True])
43. 🚫Make an array/tensor immutable (read-only) (★★☆)¶
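PyTorch has no read-only flag on tensors, which is why this exercise is excluded. For contrast, a sketch of the NumPy feature the original exercise targets:

```python
import numpy as np

# NumPy arrays can be frozen via the writeable flag; torch tensors cannot.
a = np.zeros(5)
a.flags.writeable = False
try:
    a[0] = 1
except ValueError as e:
    print(e)  # writing to a read-only array raises ValueError
```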
44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)¶
In [80]:
z = torch.rand((10, 2))
x, y = z[:, 0], z[:, 1]
r = torch.sqrt(x**2 + y**2)
t = torch.arctan2(y, x)
r, t
Out[80]:
(tensor([0.3682, 0.8140, 0.7976, 0.9053, 0.5343, 0.6765, 0.7569, 0.5681, 0.9009, 0.2592]), tensor([0.5282, 1.3840, 0.2411, 1.0299, 0.2051, 1.0906, 0.8905, 1.1787, 0.3060, 0.2000]))
45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)¶
In [81]:
z = torch.rand(10)
print(f"Before: {z}")
z[z.argmax()] = 0
print(f"After: {z}")
Before: tensor([0.8554, 0.7286, 0.8169, 0.6331, 0.1021, 0.6234, 0.4370, 0.5352, 0.8075, 0.0101]) After: tensor([0.0000, 0.7286, 0.8169, 0.6331, 0.1021, 0.6234, 0.4370, 0.5352, 0.8075, 0.0101])
In [82]:
z = torch.rand(10)
print(f"Before: {z}")
z[z == z.max()] = 0
print(f"After: {z}")
Before: tensor([0.1990, 0.0827, 0.4351, 0.8479, 0.8309, 0.8883, 0.1792, 0.2564, 0.6933, 0.5706]) After: tensor([0.1990, 0.0827, 0.4351, 0.8479, 0.8309, 0.0000, 0.1792, 0.2564, 0.6933, 0.5706])
🚫46. Create a structured array with x and y coordinates covering the [0,1]x[0,1] area (★★☆)¶
47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj)) (★★☆)¶
$$ C_{ij} = \frac{1}{x_i - y_j} $$
In [83]:
x = torch.arange(8)
y = x + 0.5
x, y
Out[83]:
(tensor([0, 1, 2, 3, 4, 5, 6, 7]), tensor([0.5000, 1.5000, 2.5000, 3.5000, 4.5000, 5.5000, 6.5000, 7.5000]))
In [84]:
# c = 1 / (x.unsqueeze(1) - y.unsqueeze(0))
c = 1 / (x.reshape(8, 1) - y.reshape(1, 8))
c
Out[84]:
tensor([[-2.0000, -0.6667, -0.4000, -0.2857, -0.2222, -0.1818, -0.1538, -0.1333], [ 2.0000, -2.0000, -0.6667, -0.4000, -0.2857, -0.2222, -0.1818, -0.1538], [ 0.6667, 2.0000, -2.0000, -0.6667, -0.4000, -0.2857, -0.2222, -0.1818], [ 0.4000, 0.6667, 2.0000, -2.0000, -0.6667, -0.4000, -0.2857, -0.2222], [ 0.2857, 0.4000, 0.6667, 2.0000, -2.0000, -0.6667, -0.4000, -0.2857], [ 0.2222, 0.2857, 0.4000, 0.6667, 2.0000, -2.0000, -0.6667, -0.4000], [ 0.1818, 0.2222, 0.2857, 0.4000, 0.6667, 2.0000, -2.0000, -0.6667], [ 0.1538, 0.1818, 0.2222, 0.2857, 0.4000, 0.6667, 2.0000, -2.0000]])
In [85]:
np.linalg.det(c)
Out[85]:
np.float32(3638.1638)
48. Print the minimum and maximum representable value for each torch scalar type (★★☆)¶
In [86]:
for dtype in [torch.int8, torch.int16, torch.int32, torch.int64]:
    print(f"{dtype}.min: {torch.iinfo(dtype).min}")
    print(f"{dtype}.max: {torch.iinfo(dtype).max}")
    print("="*42)

for dtype in [torch.float32, torch.float64]:
    print(f"{dtype}.min: {torch.finfo(dtype).min}")
    print(f"{dtype}.max: {torch.finfo(dtype).max}")
    print(f"{dtype}.eps: {torch.finfo(dtype).eps}")
    print("="*42)
torch.int8.min: -128 torch.int8.max: 127 ========================================== torch.int16.min: -32768 torch.int16.max: 32767 ========================================== torch.int32.min: -2147483648 torch.int32.max: 2147483647 ========================================== torch.int64.min: -9223372036854775808 torch.int64.max: 9223372036854775807 ========================================== torch.float32.min: -3.4028234663852886e+38 torch.float32.max: 3.4028234663852886e+38 torch.float32.eps: 1.1920928955078125e-07 ========================================== torch.float64.min: -1.7976931348623157e+308 torch.float64.max: 1.7976931348623157e+308 torch.float64.eps: 2.220446049250313e-16 ==========================================
49. How to print all the values (without ellipses: `...`) of an array/tensor? (★★☆)¶
In [87]:
z = torch.ones((40, 40))
print(z)
tensor([[1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], ..., [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.]])
In [88]:
# Remove the element limit so the full tensor is printed
torch.set_printoptions(threshold=torch.inf)
print(z)
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 
1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
In [89]:
Copied!
# to recover the default print options
torch.set_printoptions(threshold=1000, precision=4)
# to recover the default print options
torch.set_printoptions(threshold=1000, precision=4)
50. How to find the closest value (to a given scalar) in a vector? (★★☆)¶
In [90]:
Copied!
z = torch.arange(100)
v = torch.randint(0, 100, (1,))
z, v
z = torch.arange(100)
v = torch.randint(0, 100, (1,))
z, v
Out[90]:
(tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]), tensor([57]))
In [91]:
Copied!
index = (z - v).abs().argmin()
index, z[index]
index = (z - v).abs().argmin()
index, z[index]
Out[91]:
(tensor(57), tensor(57))
🚫51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)¶
52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)¶
In [92]:
Copied!
def pairwise_distances(coords):
"""
Calculates pairwise distances between points in a tensor of coordinates.
Args:
coords: A PyTorch tensor of shape (N, 2) where N is the number of points
and each row represents the (x, y) coordinates of a point.
Returns:
A PyTorch tensor of shape (N, N) containing the pairwise distances.
"""
# Calculate pairwise differences along each dimension
x_diff = coords[:, 0].unsqueeze(1) - coords[:, 0]
y_diff = coords[:, 1].unsqueeze(1) - coords[:, 1]
# Compute squared distances
squared_dists = x_diff ** 2 + y_diff ** 2
# Compute Euclidean distances
distances = torch.sqrt(squared_dists)
return distances
# Example usage
coords = torch.randn(100, 2) # Generate 100 random 2D points
distances = pairwise_distances(coords)
print(distances.shape)
print(distances)
def pairwise_distances(coords):
"""
Calculates pairwise distances between points in a tensor of coordinates.
Args:
coords: A PyTorch tensor of shape (N, 2) where N is the number of points
and each row represents the (x, y) coordinates of a point.
Returns:
A PyTorch tensor of shape (N, N) containing the pairwise distances.
"""
# Calculate pairwise differences along each dimension
x_diff = coords[:, 0].unsqueeze(1) - coords[:, 0]
y_diff = coords[:, 1].unsqueeze(1) - coords[:, 1]
# Compute squared distances
squared_dists = x_diff ** 2 + y_diff ** 2
# Compute Euclidean distances
distances = torch.sqrt(squared_dists)
return distances
# Example usage
coords = torch.randn(100, 2) # Generate 100 random 2D points
distances = pairwise_distances(coords)
print(distances.shape)
print(distances)
torch.Size([100, 100]) tensor([[0.0000, 2.7868, 1.7565, ..., 1.1450, 0.5879, 0.9544], [2.7868, 0.0000, 1.1243, ..., 2.4681, 2.2167, 3.0109], [1.7565, 1.1243, 0.0000, ..., 1.8093, 1.1688, 2.2166], ..., [1.1450, 2.4681, 1.8093, ..., 0.0000, 1.1407, 0.6083], [0.5879, 2.2167, 1.1688, ..., 1.1407, 0.0000, 1.2571], [0.9544, 3.0109, 2.2166, ..., 0.6083, 1.2571, 0.0000]])
53. How to convert a float (32 bits) array into an integer (32 bits) in place?¶
In [93]:
Copied!
# Create a float tensor
float_tensor = torch.tensor([1.5, 2.7, 3.2], dtype=torch.float32)
# PyTorch has no true in-place dtype cast: .to() allocates a new tensor,
# so we swap it in via .data (values are truncated toward zero)
float_tensor.data = float_tensor.to(torch.int32)
print(float_tensor)
# Create a float tensor
float_tensor = torch.tensor([1.5, 2.7, 3.2], dtype=torch.float32)
# PyTorch has no true in-place dtype cast: .to() allocates a new tensor,
# so we swap it in via .data (values are truncated toward zero)
float_tensor.data = float_tensor.to(torch.int32)
print(float_tensor)
tensor([1, 2, 3], dtype=torch.int32)
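For contrast, a minimal sketch showing that `.to()` by itself is out-of-place: without reassigning `.data`, the original tensor keeps its float dtype.

```python
import torch

t = torch.tensor([1.5, 2.7, 3.2], dtype=torch.float32)
u = t.to(torch.int32)  # returns a NEW tensor; t itself is untouched

print(t.dtype, u.dtype)  # torch.float32 torch.int32
```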
54. How to read the following file? (★★☆)¶
1, 2, 3, 4, 5
6, , , 7, 8
, , 9,10,11
In [94]:
Copied!
import numpy as np
from io import StringIO
# Fake file
s = StringIO('''1, 2, 3, 4, 5
6, , , 7, 8
, , 9,10,11
''')
z = torch.from_numpy(np.genfromtxt(s, delimiter=",", dtype=np.int32))
z
import numpy as np
from io import StringIO
# Fake file
s = StringIO('''1, 2, 3, 4, 5
6, , , 7, 8
, , 9,10,11
''')
z = torch.from_numpy(np.genfromtxt(s, delimiter=",", dtype=np.int32))
z
Out[94]:
tensor([[ 1, 2, 3, 4, 5], [ 6, -1, -1, 7, 8], [-1, -1, 9, 10, 11]], dtype=torch.int32)
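The `-1` entries above are `np.genfromtxt`'s default fill for missing integer fields; the `filling_values` argument makes the placeholder explicit, a sketch:

```python
import numpy as np
import torch
from io import StringIO

s = StringIO("1, 2, 3, 4, 5\n6, , , 7, 8\n, , 9,10,11\n")
# fill missing fields with 0 instead of the int default -1
a = np.genfromtxt(s, delimiter=",", dtype=np.int32, filling_values=0)
z = torch.from_numpy(a)
print(z)
```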
55. What is the equivalent of enumerate for torch tensors? (★★☆)¶
In [95]:
Copied!
def iterate_tensor(tensor):
"""
Iterates over all values in a PyTorch tensor of any dimension.
Args:
tensor: The input PyTorch tensor.
Yields:
A tuple containing:
- The current value of the tensor.
- A tuple of indices representing the position of the value in the tensor.
"""
shape = tensor.shape
indices = torch.zeros(len(shape), dtype=torch.long) # Initialize indices
while True:
yield tensor[tuple(indices)], tuple(indices)
# Increment indices
dim = 0
while dim < len(shape):
indices[dim] += 1
if indices[dim] < shape[dim]:
break
else:
indices[dim] = 0
dim += 1
# Check if all indices have reached the end
if dim == len(shape):
break
def iterate_tensor(tensor):
"""
Iterates over all values in a PyTorch tensor of any dimension.
Args:
tensor: The input PyTorch tensor.
Yields:
A tuple containing:
- The current value of the tensor.
- A tuple of indices representing the position of the value in the tensor.
"""
shape = tensor.shape
indices = torch.zeros(len(shape), dtype=torch.long) # Initialize indices
while True:
yield tensor[tuple(indices)], tuple(indices)
# Increment indices
dim = 0
while dim < len(shape):
indices[dim] += 1
if indices[dim] < shape[dim]:
break
else:
indices[dim] = 0
dim += 1
# Check if all indices have reached the end
if dim == len(shape):
break
In [96]:
Copied!
z = torch.arange(9).reshape(3, 3)
z = torch.arange(9).reshape(3, 3)
In [97]:
Copied!
for value, indices in iterate_tensor(z):
print(f"Value: {value}, Indices: {indices}")
for value, indices in iterate_tensor(z):
print(f"Value: {value}, Indices: {indices}")
Value: 0, Indices: (tensor(0), tensor(0)) Value: 3, Indices: (tensor(1), tensor(0)) Value: 6, Indices: (tensor(2), tensor(0)) Value: 1, Indices: (tensor(0), tensor(1)) Value: 4, Indices: (tensor(1), tensor(1)) Value: 7, Indices: (tensor(2), tensor(1)) Value: 2, Indices: (tensor(0), tensor(2)) Value: 5, Indices: (tensor(1), tensor(2)) Value: 8, Indices: (tensor(2), tensor(2))
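A much shorter alternative sketch uses `itertools.product` over the index ranges; note it walks indices in C order (last dimension fastest), unlike the column-major order of the generator above:

```python
import itertools
import torch

t = torch.arange(9).reshape(3, 3)
# enumerate every (indices, value) pair in C order
for idx in itertools.product(*(range(s) for s in t.shape)):
    print(f"Value: {t[idx].item()}, Indices: {idx}")
```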
56. Generate a generic 2D Gaussian-like array (★★☆)¶
In [98]:
Copied!
x, y = torch.meshgrid(torch.linspace(-1, 1, 10), torch.linspace(-1, 1, 10), indexing="ij")
d = torch.sqrt(x * x + y * y)
sigma, mu = 1.0, 0.0
g = torch.exp(-((d - mu) ** 2 / (2.0 * sigma ** 2)))
g
x, y = torch.meshgrid(torch.linspace(-1, 1, 10), torch.linspace(-1, 1, 10), indexing="ij")
d = torch.sqrt(x * x + y * y)
sigma, mu = 1.0, 0.0
g = torch.exp(-((d - mu) ** 2 / (2.0 * sigma ** 2)))
g
Out[98]:
tensor([[0.3679, 0.4482, 0.5198, 0.5738, 0.6028, 0.6028, 0.5738, 0.5198, 0.4482, 0.3679], [0.4482, 0.5461, 0.6333, 0.6991, 0.7344, 0.7344, 0.6991, 0.6333, 0.5461, 0.4482], [0.5198, 0.6333, 0.7344, 0.8107, 0.8517, 0.8517, 0.8107, 0.7344, 0.6333, 0.5198], [0.5738, 0.6991, 0.8107, 0.8948, 0.9401, 0.9401, 0.8948, 0.8107, 0.6991, 0.5738], [0.6028, 0.7344, 0.8517, 0.9401, 0.9877, 0.9877, 0.9401, 0.8517, 0.7344, 0.6028], [0.6028, 0.7344, 0.8517, 0.9401, 0.9877, 0.9877, 0.9401, 0.8517, 0.7344, 0.6028], [0.5738, 0.6991, 0.8107, 0.8948, 0.9401, 0.9401, 0.8948, 0.8107, 0.6991, 0.5738], [0.5198, 0.6333, 0.7344, 0.8107, 0.8517, 0.8517, 0.8107, 0.7344, 0.6333, 0.5198], [0.4482, 0.5461, 0.6333, 0.6991, 0.7344, 0.7344, 0.6991, 0.6333, 0.5461, 0.4482], [0.3679, 0.4482, 0.5198, 0.5738, 0.6028, 0.6028, 0.5738, 0.5198, 0.4482, 0.3679]])
57. How to randomly place p elements in a 2D array? (★★☆)¶
In [99]:
Copied!
z = torch.zeros(5, 5)
z
z = torch.zeros(5, 5)
z
Out[99]:
tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]])
In [100]:
Copied!
p = 3
# randperm draws indices without repetition, so exactly p cells are set
z.put_(torch.randperm(z.numel())[:p], torch.ones(p))
p = 3
# randperm draws indices without repetition, so exactly p cells are set
z.put_(torch.randperm(z.numel())[:p], torch.ones(p))
Out[100]:
tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 1., 0., 1., 0.]])
58. Subtract the mean of each row of a matrix (★★☆)¶
In [101]:
Copied!
z = torch.randint(0, 10, (2, 2), dtype=torch.float)
z
z = torch.randint(0, 10, (2, 2), dtype=torch.float)
z
Out[101]:
tensor([[0., 7.], [0., 8.]])
In [102]:
Copied!
# solution 1
z - z.mean(dim=1).reshape(-1, 1)
# solution 1
z - z.mean(dim=1).reshape(-1, 1)
Out[102]:
tensor([[-3.5000, 3.5000], [-4.0000, 4.0000]])
In [103]:
Copied!
# solution 2
z - z.mean(dim=1, keepdim=True)
# solution 2
z - z.mean(dim=1, keepdim=True)
Out[103]:
tensor([[-3.5000, 3.5000], [-4.0000, 4.0000]])
59. How to sort an array/tensor by the nth column? (★★☆)¶
In [104]:
Copied!
z = torch.randint(0, 10, (3, 3))
z
z = torch.randint(0, 10, (3, 3))
z
Out[104]:
tensor([[6, 0, 1], [5, 4, 8], [5, 9, 9]])
In [105]:
Copied!
z[z[:, 1].argsort()]
z[z[:, 1].argsort()]
Out[105]:
tensor([[6, 0, 1], [5, 4, 8], [5, 9, 9]])
60. How to tell if a given 2D array/tensor has null columns? (★★☆)¶
In [106]:
Copied!
z = torch.randint(0, 10, (3, 3), dtype=torch.float)
z[0, 1] = torch.log(torch.tensor([-1.]))
z
z = torch.randint(0, 10, (3, 3), dtype=torch.float)
z[0, 1] = torch.log(torch.tensor([-1.]))
z
Out[106]:
tensor([[0., nan, 4.], [6., 2., 1.], [9., 2., 3.]])
In [107]:
Copied!
z.isnan().any(dim=0)
z.isnan().any(dim=0)
Out[107]:
tensor([False, True, False])
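In the original numpy-100, "null columns" usually means all-zero columns rather than NaN entries; that reading can be sketched as:

```python
import torch

z = torch.tensor([[1., 0., 2.],
                  [3., 0., 4.]])
null_cols = (z == 0).all(dim=0)  # True where a column is entirely zero
print(null_cols, null_cols.any())
```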
61. Find the nearest value from a given value in an array (★★☆)¶
In [108]:
Copied!
z = torch.rand(10)
z
z = torch.rand(10)
z
Out[108]:
tensor([0.4676, 0.1715, 0.8227, 0.4240, 0.6835, 0.4485, 0.8485, 0.7086, 0.9771, 0.2584])
In [109]:
Copied!
x = 0.5
nearest_value_index = torch.abs(z - x).argmin()
nearest_value = z[nearest_value_index]
nearest_value_index, nearest_value
x = 0.5
nearest_value_index = torch.abs(z - x).argmin()
nearest_value = z[nearest_value_index]
nearest_value_index, nearest_value
Out[109]:
(tensor(0), tensor(0.4676))
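The `argmin` scan is O(n); when the vector is already sorted, `torch.searchsorted` finds the insertion point in O(log n), and comparing its two neighbors gives the nearest value. A sketch of that approach:

```python
import torch

z, _ = torch.rand(10).sort()
x = torch.tensor([0.5])

# first index where z[i] >= x, clamped so both neighbors exist
i = torch.searchsorted(z, x).clamp(1, z.numel() - 1)
left, right = z[i - 1], z[i]
# pick whichever neighbor is closer to x
nearest = torch.where((x - left) <= (right - x), left, right)
print(nearest)
```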
🚫62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)¶
63. 🔧Create a named tensor (★★☆)¶
https://pytorch.org/docs/stable/named_tensor.html#creating-named-tensors
In [110]:
Copied!
imgs = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W'))
imgs.names
imgs = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W'))
imgs.names
/tmp/ipykernel_85296/263191427.py:1: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at ../c10/core/TensorImpl.h:1928.) imgs = torch.randn(1, 2, 2, 3 , names=('N', 'C', 'H', 'W'))
Out[110]:
('N', 'C', 'H', 'W')
64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)¶
In [111]:
Copied!
z1 = torch.zeros(10)
z2 = torch.randint(0, 10, (5,))
z1, z2
z1 = torch.zeros(10)
z2 = torch.randint(0, 10, (5,))
z1, z2
Out[111]:
(tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), tensor([3, 5, 8, 8, 1]))
In [112]:
Copied!
# z1[z2] += 1 writes each index only once, so use index_add_ to accumulate repeats
z1.index_add_(0, z2, torch.ones(z2.numel()))
z1
# z1[z2] += 1 writes each index only once, so use index_add_ to accumulate repeats
z1.index_add_(0, z2, torch.ones(z2.numel()))
z1
Out[112]:
tensor([0., 1., 0., 1., 0., 1., 0., 0., 2., 0.])
⚠️Note that z1[z2] += 1
doesn't work here: fancy-index assignment is buffered, so the repeated index 8 would receive only a single +1.
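An equivalent accumulation that counts each index as many times as it appears can be sketched with `torch.bincount`:

```python
import torch

z1 = torch.zeros(10)
z2 = torch.tensor([3, 5, 8, 8, 1])
# bincount counts occurrences per index, handling repeats correctly
z1 += torch.bincount(z2, minlength=z1.numel())
print(z1)
```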
65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)¶
In [113]:
Copied!