100 Numpy Exercises solved in PyTorch¶
PyTorch solutions for the exercises in numpy-100.
Symbols:
- 🔧 Refactor: Rewrite the problem to ensure compatibility and clarity.
- 🚫 Exclude: Omit the problem if it is not suitable for PyTorch implementation.
- 😢 Skip: Skip due to high internal complexity or impracticality.
1. Import the torch package (★☆☆)¶
In [1]:
import torch
2. Print the torch version and the configuration (★☆☆)¶
In [2]:
print(torch.__version__)
print(f"CUDA Availability: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"CUDA Version: {torch.version.cuda}")
    print(f"CUDA Device Count: {torch.cuda.device_count()}")
    for i in range(torch.cuda.device_count()):
        print(f"Device {i}: {torch.cuda.get_device_name(i)}")
2.5.1+cu121 CUDA Availability: True CUDA Version: 12.1 CUDA Device Count: 1 Device 0: NVIDIA GeForce RTX 4060 Ti
3. Create a null vector of size 10 (★☆☆)¶
In [3]:
z = torch.zeros(10)
z
Out[3]:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
In [4]:
z = torch.zeros((10,))
z
Out[4]:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
4. How to find the memory size of any array (★☆☆)¶
In [5]:
z = torch.zeros(10)
z
Out[5]:
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
In [6]:
z.dtype, z.element_size()
Out[6]:
(torch.float32, 4)
In [7]:
print("%d bytes" % (z.numel() * z.element_size()))
40 bytes
5. How to get the documentation of the torch add function from the command line? (★☆☆)¶
In [8]:
!python -c "import torch; help(torch.add)"
Help on built-in function add in module torch: add(...) add(input, other, *, alpha=1, out=None) -> Tensor Adds :attr:`other`, scaled by :attr:`alpha`, to :attr:`input`. .. math:: \text{{out}}_i = \text{{input}}_i + \text{{alpha}} \times \text{{other}}_i Supports :ref:`broadcasting to a common shape <broadcasting-semantics>`, :ref:`type promotion <type-promotion-doc>`, and integer, float, and complex inputs. Args: input (Tensor): the input tensor. other (Tensor or Number): the tensor or number to add to :attr:`input`. Keyword arguments: alpha (Number): the multiplier for :attr:`other`. out (Tensor, optional): the output tensor. Examples:: >>> a = torch.randn(4) >>> a tensor([ 0.0202, 1.0985, 1.3506, -0.6056]) >>> torch.add(a, 20) tensor([ 20.0202, 21.0985, 21.3506, 19.3944]) >>> b = torch.randn(4) >>> b tensor([-0.9732, -0.3497, 0.6245, 0.4022]) >>> c = torch.randn(4, 1) >>> c tensor([[ 0.3743], [-1.7724], [-0.5811], [-0.8017]]) >>> torch.add(b, c, alpha=10) tensor([[ 2.7695, 3.3930, 4.3672, 4.1450], [-18.6971, -18.0736, -17.0994, -17.3216], [ -6.7845, -6.1610, -5.1868, -5.4090], [ -8.9902, -8.3667, -7.3925, -7.6147]])
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)¶
In [9]:
z = torch.zeros(10)
z[4] = 1
z
Out[9]:
tensor([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.])
7. Create a vector with values ranging from 10 to 49 (★☆☆)¶
In [10]:
z = torch.arange(10, 50)
z
Out[10]:
tensor([10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49])
8. Reverse a vector (first element becomes last) (★☆☆)¶
In [11]:
z = torch.arange(50)
z.flip(dims=[0])
Out[11]:
tensor([49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0])
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)¶
In [12]:
z = torch.arange(0, 9).reshape(3, 3)
z
Out[12]:
tensor([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
10. Find indices of non-zero elements from [1,2,0,0,4,0] (★☆☆)¶
In [13]:
z = torch.tensor([1, 2, 0, 0, 4, 0])
torch.nonzero(z)
Out[13]:
tensor([[0], [1], [4]])
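`torch.nonzero` also accepts `as_tuple=True`, which returns one 1-D index tensor per dimension (closer to `np.nonzero`'s output). A minimal sketch:

```python
import torch

z = torch.tensor([1, 2, 0, 0, 4, 0])
# as_tuple=True returns a tuple with one index tensor per dimension;
# for a 1-D input that tuple has a single element
idx, = torch.nonzero(z, as_tuple=True)
print(idx)  # tensor([0, 1, 4])
```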
11. Create a 3x3 identity matrix (★☆☆)¶
In [14]:
z = torch.eye(3)
z
Out[14]:
tensor([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
12. Create a 3x3x3 array with random values (★☆☆)¶
In [15]:
z = torch.rand((3, 3, 3))
z
Out[15]:
tensor([[[0.0544, 0.7409, 0.3586], [0.4821, 0.6628, 0.1824], [0.6343, 0.1023, 0.3586]], [[0.4451, 0.0445, 0.0448], [0.3173, 0.4945, 0.6955], [0.3680, 0.5672, 0.6336]], [[0.8069, 0.8924, 0.4566], [0.3470, 0.9187, 0.1749], [0.1230, 0.6413, 0.3714]]])
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)¶
In [16]:
z = torch.rand((10, 10))
z
Out[16]:
tensor([[0.1149, 0.4582, 0.3877, 0.7770, 0.6502, 0.4463, 0.8894, 0.2684, 0.9787, 0.0042], [0.3663, 0.0221, 0.3212, 0.7482, 0.1575, 0.6710, 0.1775, 0.3190, 0.5801, 0.2634], [0.1509, 0.7520, 0.8496, 0.3584, 0.1530, 0.2575, 0.0639, 0.5072, 0.9011, 0.5436], [0.2638, 0.1881, 0.6395, 0.7895, 0.6149, 0.9446, 0.6417, 0.4836, 0.0602, 0.5661], [0.9850, 0.0575, 0.6128, 0.2509, 0.8271, 0.7064, 0.9278, 0.3506, 0.7337, 0.9946], [0.1780, 0.8824, 0.3741, 0.4165, 0.9171, 0.4368, 0.5185, 0.4635, 0.0759, 0.6047], [0.7319, 0.3339, 0.0714, 0.2986, 0.1479, 0.4290, 0.9089, 0.0661, 0.9228, 0.0198], [0.6623, 0.4880, 0.3415, 0.8989, 0.9928, 0.4645, 0.3125, 0.0810, 0.7916, 0.3466], [0.7760, 0.6280, 0.9847, 0.9007, 0.9535, 0.9762, 0.2596, 0.0592, 0.5514, 0.2857], [0.2687, 0.7569, 0.6609, 0.1533, 0.2881, 0.8472, 0.5495, 0.6888, 0.6515, 0.4354]])
In [17]:
z.min(), z.max()
Out[17]:
(tensor(0.0042), tensor(0.9946))
14. Create a random vector of size 30 and find the mean value (★☆☆)¶
In [18]:
z = torch.rand(30)
z.mean()
Out[18]:
tensor(0.5269)
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)¶
In [19]:
z = torch.ones(10, 10)
z[1:-1, 1:-1] = 0
z
Out[19]:
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 0., 0., 0., 0., 0., 0., 0., 0., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]])
16. How to add a border (filled with 0's) around an existing array? (★☆☆)¶
In [20]:
z = torch.ones(5, 5)
z
Out[20]:
tensor([[1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.], [1., 1., 1., 1., 1.]])
In [21]:
border_width = 1
torch.nn.functional.pad(
    z,
    pad=(border_width, border_width, border_width, border_width),
    mode='constant',
    value=0,
)
Out[21]:
tensor([[0., 0., 0., 0., 0., 0., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 1., 1., 1., 1., 1., 0.], [0., 0., 0., 0., 0., 0., 0.]])
17. What is the result of the following expression? (★☆☆)¶
0 * torch.nan
torch.nan == torch.nan
torch.inf > torch.nan
torch.nan - torch.nan
torch.nan in set([torch.nan])
0.3 == 3 * 0.1
In [22]:
0 * torch.nan
Out[22]:
nan
In [23]:
torch.nan == torch.nan
Out[23]:
False
In [24]:
torch.inf > torch.nan
Out[24]:
False
In [25]:
torch.nan - torch.nan
Out[25]:
nan
In [26]:
torch.nan in set([torch.nan])
Out[26]:
True
In [27]:
0.3 == 3 * 0.1
Out[27]:
False
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)¶
In [28]:
z = torch.tensor([1, 2, 3, 4])
torch.diag(z, diagonal=-1)
Out[28]:
tensor([[0, 0, 0, 0, 0], [1, 0, 0, 0, 0], [0, 2, 0, 0, 0], [0, 0, 3, 0, 0], [0, 0, 0, 4, 0]])
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)¶
In [29]:
z = torch.zeros((8, 8))
z[1::2, ::2] = 1  # rows: 2, 4, 6, 8
z[::2, 1::2] = 1  # rows: 1, 3, 5, 7
z
Out[29]:
tensor([[0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.], [0., 1., 0., 1., 0., 1., 0., 1.], [1., 0., 1., 0., 1., 0., 1., 0.]])
20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element? (★☆☆)¶
In [30]:
torch.unravel_index(torch.tensor(99), (6, 7, 8))
Out[30]:
(tensor(1), tensor(5), tensor(3))
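The same index can be checked by hand: in row-major order a (6, 7, 8) array has 7 × 8 = 56 elements per x-slice and 8 per y-row, so the flat index 99 decomposes as:

```python
shape = (6, 7, 8)
flat = 99  # the 100th element, 0-indexed

x = flat // (shape[1] * shape[2])               # 99 // 56 = 1
y = (flat % (shape[1] * shape[2])) // shape[2]  # 43 // 8  = 5
z = flat % shape[2]                             # 99 % 8   = 3
print((x, y, z))  # (1, 5, 3)
```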
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)¶
In [31]:
z = torch.tile(torch.tensor([[0, 1], [1, 0]]), (4, 4))
z
Out[31]:
tensor([[0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0], [0, 1, 0, 1, 0, 1, 0, 1], [1, 0, 1, 0, 1, 0, 1, 0]])
22. Normalize a 5x5 random matrix (★☆☆)¶
In [32]:
z = torch.rand((5, 5))
z = (z - torch.mean(z)) / torch.std(z)
z
Out[32]:
tensor([[ 0.1877, 0.3092, 0.6385, -1.4535, 0.2774], [ 0.8624, 0.0979, 0.9551, -2.0317, 0.0086], [-1.8338, -0.1372, -0.6845, -1.4898, 0.7762], [ 1.4238, -0.4690, 0.3789, 0.7241, -1.0796], [ 1.2606, -0.0356, 1.2659, 0.8993, -0.8509]])
23. 🚫Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)¶
We cannot do this in PyTorch.
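PyTorch has no structured dtypes, but a common stand-in (an assumption here, not an official equivalent) is a `uint8` tensor whose last dimension holds the four RGBA channels:

```python
import torch

# One byte per channel: a (..., 4) uint8 tensor plays the role of the RGBA record
image = torch.zeros((2, 2, 4), dtype=torch.uint8)
image[..., 3] = 255  # alpha channel: fully opaque
image[0, 0, :3] = torch.tensor([255, 0, 0], dtype=torch.uint8)  # one red pixel
r, g, b, a = image.unbind(dim=-1)  # per-channel views
print(a)
```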
24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)¶
In [33]:
z = torch.matmul(torch.ones((5, 3)), torch.ones((3, 2)))
z
Out[33]:
tensor([[3., 3.], [3., 3.], [3., 3.], [3., 3.], [3., 3.]])
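Python's `@` operator calls the same matrix product and is the more idiomatic spelling:

```python
import torch

z = torch.ones((5, 3)) @ torch.ones((3, 2))
print(z.shape)  # torch.Size([5, 2])
```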
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)¶
In [34]:
z = torch.arange(11)
z[(3 < z) & (z < 8)] *= -1
z
Out[34]:
tensor([ 0, 1, 2, 3, -4, -5, -6, -7, 8, 9, 10])
26. 🚫What is the output of the following script? (★☆☆)¶
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
We cannot do this in PyTorch.
27. Consider an integer vector z, which of these expressions are legal? (★☆☆)¶
z**z
2 << z >> 2
z <- z
1j*z
z/1/1
z<z>z
In [35]:
z = torch.arange(10)
z
Out[35]:
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [36]:
# 1
z ** z
Out[36]:
tensor([ 1, 1, 4, 27, 256, 3125, 46656, 823543, 16777216, 387420489])
In [37]:
# 2
print(2 << z >> 2)
print((2 << z) >> 2)
tensor([ 0, 1, 2, 4, 8, 16, 32, 64, 128, 256]) tensor([ 0, 1, 2, 4, 8, 16, 32, 64, 128, 256])
In [38]:
Copied!
# 3
print(z <- z)
print(z < (-z))
# 3
print(z <- z)
print(z < (-z))
tensor([False, False, False, False, False, False, False, False, False, False]) tensor([False, False, False, False, False, False, False, False, False, False])
In [39]:
# 3
import dis
dis.dis('z <- z')
0 RESUME 0 1 LOAD_NAME 0 (z) LOAD_NAME 0 (z) UNARY_NEGATIVE COMPARE_OP 2 (<) RETURN_VALUE
In [40]:
# 4
1j*z
Out[40]:
tensor([0.+0.j, 0.+1.j, 0.+2.j, 0.+3.j, 0.+4.j, 0.+5.j, 0.+6.j, 0.+7.j, 0.+8.j, 0.+9.j])
In [41]:
# 5
print(z/1/1)
print((z/1)/1)
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
In [42]:
# 6
try:
    print(z<z>z)
except Exception as e:
    print(e)
Boolean value of Tensor with more than one value is ambiguous
28. What is the result of the following expressions? (★☆☆)¶
torch.tensor(0) / torch.tensor(0)
torch.tensor(0) // torch.tensor(0)
torch.tensor([torch.nan]).to(torch.int).to(torch.float)
In [43]:
torch.tensor(0) / torch.tensor(0)
Out[43]:
tensor(nan)
In [44]:
try:
    torch.tensor(0) // torch.tensor(0)
except Exception as e:
    print(e)
ZeroDivisionError
In [45]:
torch.tensor([torch.nan]).to(torch.int).to(torch.float)
Out[45]:
tensor([-2.1475e+09])
29. How to round away from zero a float array ? (★☆☆)¶
In [46]:
z = torch.randn((10))
z
Out[46]:
tensor([-0.3417, 1.2910, 0.2666, -2.0166, 1.6535, -0.4432, 1.0893, 0.7234, -0.0371, 1.5461])
In [47]:
torch.copysign(torch.ceil(torch.abs(z)), z)
Out[47]:
tensor([-1., 2., 1., -3., 2., -1., 2., 1., -1., 2.])
In [48]:
torch.where(z > 0, torch.ceil(z), torch.floor(z))
Out[48]:
tensor([-1., 2., 1., -3., 2., -1., 2., 1., -1., 2.])
30. How to find common values between two arrays? (★☆☆)¶
In [49]:
z1 = torch.randint(0, 10, (10, ))
z2 = torch.randint(0, 10, (10, ))
print(f"{z1 = }\n{z2 = }")
z1 = tensor([3, 3, 6, 5, 8, 9, 5, 5, 9, 8]) z2 = tensor([4, 3, 4, 2, 3, 3, 4, 1, 7, 0])
In [50]:
set(z1.tolist()) & set(z2.tolist())
Out[50]:
{3}
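A tensor-native alternative (assuming a PyTorch version with `torch.isin`, added in 1.10) avoids the round trip through Python sets:

```python
import torch

z1 = torch.tensor([3, 3, 6, 5, 8, 9, 5, 5, 9, 8])
z2 = torch.tensor([4, 3, 4, 2, 3, 3, 4, 1, 7, 0])
# mask of z1 elements that also appear in z2, then deduplicate
common = z1[torch.isin(z1, z2)].unique()
print(common)  # tensor([3])
```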
31. How to ignore all torch warnings (not recommended)? (★☆☆)¶
In [51]:
torch.autograd.detect_anomaly()
/tmp/ipykernel_85296/675420015.py:1: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging. torch.autograd.detect_anomaly()
Out[51]:
<torch.autograd.anomaly_mode.detect_anomaly at 0x7fe2fa8941a0>
In [52]:
import warnings

class IgnoreWarnings:
    def __enter__(self):
        warnings.filterwarnings("ignore")

    def __exit__(self, exc_type, exc_val, exc_tb):
        warnings.resetwarnings()

with IgnoreWarnings():
    torch.autograd.detect_anomaly()
In [53]:
torch.autograd.detect_anomaly()
/tmp/ipykernel_85296/675420015.py:1: UserWarning: Anomaly Detection has been enabled. This mode will increase the runtime and should only be enabled for debugging. torch.autograd.detect_anomaly()
Out[53]:
<torch.autograd.anomaly_mode.detect_anomaly at 0x7fe2f980b9d0>
32. 🔧How to get the square root of a complex value in torch (★☆☆)¶
In [54]:
real = torch.tensor(-1, dtype=torch.float32)
imag = torch.tensor(0, dtype=torch.float32)
x = torch.complex(real, imag)
x
Out[54]:
tensor(-1.+0.j)
In [55]:
torch.sqrt(x)
Out[55]:
tensor(0.+1.j)
33. 🚫How to get the dates of yesterday, today and tomorrow? (★☆☆)¶
34. 🚫How to get all the dates corresponding to the month of July 2016? (★★☆)¶
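Both 33 and 34 are calendar questions rather than tensor questions; the standard-library `datetime` module covers them directly:

```python
import datetime

today = datetime.date.today()
yesterday = today - datetime.timedelta(days=1)
tomorrow = today + datetime.timedelta(days=1)

# every date in July 2016
july_2016 = [datetime.date(2016, 7, 1) + datetime.timedelta(days=d) for d in range(31)]
print(july_2016[0], july_2016[-1])  # 2016-07-01 2016-07-31
```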
35. How to compute ((A+B)*(-A/2)) in place (without copy)? (★★☆)¶
In [56]:
A = torch.ones(3) * 1
B = torch.ones(3) * 2
A, B
Out[56]:
(tensor([1., 1., 1.]), tensor([2., 2., 2.]))
In [57]:
torch.add(A, B, out=B)
B
Out[57]:
tensor([3., 3., 3.])
In [58]:
torch.divide(A, 2, out=A)
torch.neg(A, out=A)
A
Out[58]:
tensor([-0.5000, -0.5000, -0.5000])
In [59]:
torch.multiply(B, A, out=A)
Out[59]:
tensor([-1.5000, -1.5000, -1.5000])
36. Extract the integer part of a random array of positive numbers using 4 different methods (★★☆)¶
In [60]:
z = 10 * torch.rand(10)
z
Out[60]:
tensor([0.5108, 7.3979, 8.2650, 5.4840, 8.8788, 0.9747, 4.1578, 7.0270, 7.9722, 0.5862])
In [61]:
# solution1
z - z % 1
Out[61]:
tensor([0., 7., 8., 5., 8., 0., 4., 7., 7., 0.])
In [62]:
# solution2
z // 1
Out[62]:
tensor([0., 7., 8., 5., 8., 0., 4., 7., 7., 0.])
In [63]:
# solution3
z.int()
Out[63]:
tensor([0, 7, 8, 5, 8, 0, 4, 7, 7, 0], dtype=torch.int32)
In [64]:
# solution4
torch.trunc(z)
Out[64]:
tensor([0., 7., 8., 5., 8., 0., 4., 7., 7., 0.])
37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)¶
In [65]:
# solution1
torch.zeros(5, 5) + torch.arange(5)
Out[65]:
tensor([[0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.], [0., 1., 2., 3., 4.]])
In [66]:
# solution2
torch.tile(torch.arange(5), dims=(5, 1))
Out[66]:
tensor([[0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4], [0, 1, 2, 3, 4]])
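A third option (a sketch, not from the original solutions) is `expand`, which broadcasts the row to the target shape without materializing copies:

```python
import torch

row = torch.arange(5)
z = row.expand(5, 5)  # a broadcast view: all five rows share row's storage
print(z)
```

Because `expand` returns a view, writes through `z` would alias `row`; clone it first if you need an independent matrix.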
38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)¶
https://stackoverflow.com/questions/55307368/creating-a-torch-tensor-from-a-generator
In [67]:
import numpy as np

def generate():
    for x in range(10):
        yield x

z = torch.from_numpy(np.fromiter(generate(), dtype=float, count=-1))
z
Out[67]:
tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], dtype=torch.float64)
39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)¶
In [68]:
torch.linspace(start=0, end=1, steps=12)[1:-1]
Out[68]:
tensor([0.0909, 0.1818, 0.2727, 0.3636, 0.4545, 0.5455, 0.6364, 0.7273, 0.8182, 0.9091])
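Equivalently, the ten interior points are k/11 for k = 1..10, so `arange` works too:

```python
import torch

v = torch.arange(1, 11, dtype=torch.float32) / 11
print(v)
```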
40. Create a random vector of size 10 and sort it (★★☆)¶
In [69]:
z = torch.rand(10)
z
Out[69]:
tensor([0.3381, 0.1853, 0.7896, 0.6117, 0.1559, 0.6938, 0.4223, 0.4506, 0.4406, 0.0759])
In [70]:
z = z.sort()
z
Out[70]:
torch.return_types.sort( values=tensor([0.0759, 0.1559, 0.1853, 0.3381, 0.4223, 0.4406, 0.4506, 0.6117, 0.6938, 0.7896]), indices=tensor([9, 4, 1, 0, 6, 8, 7, 3, 5, 2]))
41. 🔧How to sum a small array faster? (★★☆)¶
Compare np.sum, torch.sum, np.add.reduce, Python's built-in sum, and a plain for loop with + in Python.
In [71]:
import numpy as np
z_torch = torch.arange(10)
z_numpy = np.arange(10)
z_python = [i for i in range(10)]
z_torch, z_numpy, z_python
Out[71]:
(tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), [0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [72]:
# method 1
%timeit np.sum(z_numpy)
1.62 μs ± 14.3 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [73]:
# method 2
%timeit torch.sum(z_torch)
1.35 μs ± 52.4 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [74]:
# method 3
%timeit np.add.reduce(z_numpy)
851 ns ± 4.84 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [75]:
# method 4
%timeit sum(z_python)
70 ns ± 1.2 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
In [76]:
# method 5
def sum_with_for_loop():
    n = 0
    for i in z_python:
        n += i
    return n

# note the (): without it, %timeit measures only the name lookup, not the loop
%timeit sum_with_for_loop()
42. Consider two random arrays/tensors A and B, check if they are equal (★★☆)¶
In [77]:
t1 = torch.randint(0, 2, (5, ))
t2 = torch.randint(0, 2, (5, ))
t1, t2
Out[77]:
(tensor([0, 1, 1, 0, 0]), tensor([0, 0, 0, 0, 0]))
In [78]:
# The behaviour of this function is analogous to `numpy.allclose`
torch.allclose(t1, t2)
Out[78]:
False
In [79]:
# Computes element-wise equality
torch.eq(t1, t2)
Out[79]:
tensor([ True, False, False, True, True])
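For a single exact yes/no answer (same shape and identical values, no tolerance), `torch.equal` is the direct tool:

```python
import torch

t1 = torch.tensor([0, 1, 1, 0, 0])
t2 = torch.tensor([0, 0, 0, 0, 0])
print(torch.equal(t1, t2))          # False
print(torch.equal(t1, t1.clone()))  # True
```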
43. 🚫Make an array/tensor immutable (read-only) (★★☆)¶
44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)¶
In [80]:
z = torch.rand((10, 2))
x, y = z[:, 0], z[:, 1]
r = torch.sqrt(x**2 + y**2)
t = torch.arctan2(y, x)
r, t
Out[80]:
(tensor([0.3682, 0.8140, 0.7976, 0.9053, 0.5343, 0.6765, 0.7569, 0.5681, 0.9009, 0.2592]), tensor([0.5282, 1.3840, 0.2411, 1.0299, 0.2051, 1.0906, 0.8905, 1.1787, 0.3060, 0.2000]))
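A quick sanity check on the conversion is to map the polar coordinates back to cartesian and compare:

```python
import torch

z = torch.rand((10, 2))
x, y = z[:, 0], z[:, 1]
r = torch.sqrt(x**2 + y**2)
t = torch.arctan2(y, x)

# invert: x = r*cos(t), y = r*sin(t)
x2, y2 = r * torch.cos(t), r * torch.sin(t)
print(torch.allclose(x, x2, atol=1e-6), torch.allclose(y, y2, atol=1e-6))  # True True
```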
45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)¶
In [81]:
z = torch.rand(10)
print(f"Before: {z}")
z[z.argmax()] = 0
print(f"After: {z}")
Before: tensor([0.8554, 0.7286, 0.8169, 0.6331, 0.1021, 0.6234, 0.4370, 0.5352, 0.8075, 0.0101]) After: tensor([0.0000, 0.7286, 0.8169, 0.6331, 0.1021, 0.6234, 0.4370, 0.5352, 0.8075, 0.0101])
In [82]:
z = torch.rand(10)
print(f"Before: {z}")
z[z == z.max()] = 0
print(f"After: {z}")
Before: tensor([0.1990, 0.0827, 0.4351, 0.8479, 0.8309, 0.8883, 0.1792, 0.2564, 0.6933, 0.5706]) After: tensor([0.1990, 0.0827, 0.4351, 0.8479, 0.8309, 0.0000, 0.1792, 0.2564, 0.6933, 0.5706])
46. 🚫Create a structured array with x and y coordinates covering the [0,1]x[0,1] area (★★☆)¶
47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj)) (★★☆)¶
$$ C_{ij} = \frac{1}{x_i - y_j} $$
In [83]:
x = torch.arange(8)
y = x + 0.5
x, y
Out[83]:
(tensor([0, 1, 2, 3, 4, 5, 6, 7]), tensor([0.5000, 1.5000, 2.5000, 3.5000, 4.5000, 5.5000, 6.5000, 7.5000]))
In [84]:
# c = 1 / (x.unsqueeze(1) - y.unsqueeze(0))
c = 1 / (x.reshape(8, 1) - y.reshape(1, 8))
c
Out[84]:
tensor([[-2.0000, -0.6667, -0.4000, -0.2857, -0.2222, -0.1818, -0.1538, -0.1333], [ 2.0000, -2.0000, -0.6667, -0.4000, -0.2857, -0.2222, -0.1818, -0.1538], [ 0.6667, 2.0000, -2.0000, -0.6667, -0.4000, -0.2857, -0.2222, -0.1818], [ 0.4000, 0.6667, 2.0000, -2.0000, -0.6667, -0.4000, -0.2857, -0.2222], [ 0.2857, 0.4000, 0.6667, 2.0000, -2.0000, -0.6667, -0.4000, -0.2857], [ 0.2222, 0.2857, 0.4000, 0.6667, 2.0000, -2.0000, -0.6667, -0.4000], [ 0.1818, 0.2222, 0.2857, 0.4000, 0.6667, 2.0000, -2.0000, -0.6667], [ 0.1538, 0.1818, 0.2222, 0.2857, 0.4000, 0.6667, 2.0000, -2.0000]])
In [85]:
np.linalg.det(c)
Out[85]:
np.float32(3638.1638)
48. Print the minimum and maximum representable value for each torch scalar type (★★☆)¶
In [86]:
for dtype in [torch.int8, torch.int16, torch.int32, torch.int64]:
    print(f"{dtype}.min: {torch.iinfo(dtype).min}")
    print(f"{dtype}.max: {torch.iinfo(dtype).max}")
    print("=" * 42)
for dtype in [torch.float32, torch.float64]:
    print(f"{dtype}.min: {torch.finfo(dtype).min}")
    print(f"{dtype}.max: {torch.finfo(dtype).max}")
    print(f"{dtype}.eps: {torch.finfo(dtype).eps}")
    print("=" * 42)
torch.int8.min: -128 torch.int8.max: 127 ========================================== torch.int16.min: -32768 torch.int16.max: 32767 ========================================== torch.int32.min: -2147483648 torch.int32.max: 2147483647 ========================================== torch.int64.min: -9223372036854775808 torch.int64.max: 9223372036854775807 ========================================== torch.float32.min: -3.4028234663852886e+38 torch.float32.max: 3.4028234663852886e+38 torch.float32.eps: 1.1920928955078125e-07 ========================================== torch.float64.min: -1.7976931348623157e+308 torch.float64.max: 1.7976931348623157e+308 torch.float64.eps: 2.220446049250313e-16 ==========================================
49. How to print all the values (without ellipses: ...) of an array/tensor? (★★☆)¶
In [87]:
z = torch.ones((40, 40))
print(z)
tensor([[1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], ..., [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.], [1., 1., 1., ..., 1., 1., 1.]])
In [88]:
# Show every element (disable truncation)
torch.set_printoptions(threshold=torch.inf)
print(z)
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], ..., [1., 1., 1., ..., 1., 1., 1.]]) (remaining rows of the all-ones matrix, printed in full because truncation was disabled, elided here)
In [89]:
# to recover the default print options
torch.set_printoptions(threshold=1000, precision=4)
50. How to find the closest value (to a given scalar) in a vector? (★★☆)¶
In [90]:
z = torch.arange(100)
v = torch.randint(0, 100, (1,))
z, v
Out[90]:
(tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]), tensor([57]))
In [91]:
index = (z - v).abs().argmin()
index, z[index]
Out[91]:
(tensor(57), tensor(57))
🚫51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)¶
52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)¶
In [92]:
def pairwise_distances(coords):
    """
    Calculates pairwise distances between points in a tensor of coordinates.

    Args:
        coords: A PyTorch tensor of shape (N, 2) where N is the number of points
            and each row represents the (x, y) coordinates of a point.

    Returns:
        A PyTorch tensor of shape (N, N) containing the pairwise distances.
    """
    # Calculate pairwise differences along each dimension
    x_diff = coords[:, 0].unsqueeze(1) - coords[:, 0]
    y_diff = coords[:, 1].unsqueeze(1) - coords[:, 1]
    # Compute squared distances
    squared_dists = x_diff ** 2 + y_diff ** 2
    # Compute Euclidean distances
    distances = torch.sqrt(squared_dists)
    return distances

# Example usage
coords = torch.randn(100, 2)  # Generate 100 random 2D points
distances = pairwise_distances(coords)
print(distances.shape)
print(distances)
torch.Size([100, 100]) tensor([[0.0000, 2.7868, 1.7565, ..., 1.1450, 0.5879, 0.9544], [2.7868, 0.0000, 1.1243, ..., 2.4681, 2.2167, 3.0109], [1.7565, 1.1243, 0.0000, ..., 1.8093, 1.1688, 2.2166], ..., [1.1450, 2.4681, 1.8093, ..., 0.0000, 1.1407, 0.6083], [0.5879, 2.2167, 1.1688, ..., 1.1407, 0.0000, 1.2571], [0.9544, 3.0109, 2.2166, ..., 0.6083, 1.2571, 0.0000]])
53. How to convert a float (32 bits) array into an integer (32 bits) in place?¶
In [93]:
# Create a float tensor
float_tensor = torch.tensor([1.5, 2.7, 3.2], dtype=torch.float32)
# A tensor's dtype cannot truly change in place; reassigning .data
# swaps in the converted int32 tensor under the same Python name
float_tensor.data = float_tensor.to(torch.int32)
print(float_tensor)
tensor([1, 2, 3], dtype=torch.int32)
54. How to read the following file? (★★☆)¶
1, 2, 3, 4, 5
6, , , 7, 8
, , 9,10,11
In [94]:
import numpy as np
from io import StringIO

# Fake file
s = StringIO('''1, 2, 3, 4, 5
6, , , 7, 8
, , 9,10,11
''')
z = torch.from_numpy(np.genfromtxt(s, delimiter=",", dtype=np.int32))
z
Out[94]:
tensor([[ 1, 2, 3, 4, 5], [ 6, -1, -1, 7, 8], [-1, -1, 9, 10, 11]], dtype=torch.int32)
55. What is the equivalent of enumerate for torch tensors? (★★☆)¶
In [95]:
def iterate_tensor(tensor):
    """
    Iterates over all values in a PyTorch tensor of any dimension.

    Args:
        tensor: The input PyTorch tensor.

    Yields:
        A tuple containing:
        - The current value of the tensor.
        - A tuple of indices representing the position of the value in the tensor.
    """
    shape = tensor.shape
    indices = torch.zeros(len(shape), dtype=torch.long)  # Initialize indices
    while True:
        yield tensor[tuple(indices)], tuple(indices)
        # Increment indices, lowest dimension first: the traversal is
        # column-major, unlike np.ndenumerate (which is row-major)
        dim = 0
        while dim < len(shape):
            indices[dim] += 1
            if indices[dim] < shape[dim]:
                break
            else:
                indices[dim] = 0
                dim += 1
        # Check if all indices have reached the end
        if dim == len(shape):
            break
In [96]:
z = np.arange(9).reshape(3,3)
In [97]:
for value, indices in iterate_tensor(z):
    print(f"Value: {value}, Indices: {indices}")
Value: 0, Indices: (tensor(0), tensor(0)) Value: 3, Indices: (tensor(1), tensor(0)) Value: 6, Indices: (tensor(2), tensor(0)) Value: 1, Indices: (tensor(0), tensor(1)) Value: 4, Indices: (tensor(1), tensor(1)) Value: 7, Indices: (tensor(2), tensor(1)) Value: 2, Indices: (tensor(0), tensor(2)) Value: 5, Indices: (tensor(1), tensor(2)) Value: 8, Indices: (tensor(2), tensor(2))
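For a row-major traversal matching np.ndenumerate, a much shorter sketch uses itertools.product over the shape:

```python
import itertools

import torch

t = torch.arange(9).reshape(3, 3)
# row-major traversal, same visiting order as np.ndenumerate
for idx in itertools.product(*map(range, t.shape)):
    print(f"Value: {t[idx].item()}, Indices: {idx}")
```

Here the indices come back as plain Python ints rather than 0-d tensors.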
56. Generate a generic 2D Gaussian-like array (★★☆)¶
In [98]:
# indexing="ij" gives matrix-style indexing and silences the deprecation warning
x, y = torch.meshgrid(torch.linspace(-1, 1, 10), torch.linspace(-1, 1, 10), indexing="ij")
d = torch.sqrt(x * x + y * y)
sigma, mu = 1.0, 0.0
g = torch.exp(-((d - mu) ** 2 / (2.0 * sigma ** 2)))
g
Out[98]:
tensor([[0.3679, 0.4482, 0.5198, 0.5738, 0.6028, 0.6028, 0.5738, 0.5198, 0.4482, 0.3679], [0.4482, 0.5461, 0.6333, 0.6991, 0.7344, 0.7344, 0.6991, 0.6333, 0.5461, 0.4482], [0.5198, 0.6333, 0.7344, 0.8107, 0.8517, 0.8517, 0.8107, 0.7344, 0.6333, 0.5198], [0.5738, 0.6991, 0.8107, 0.8948, 0.9401, 0.9401, 0.8948, 0.8107, 0.6991, 0.5738], [0.6028, 0.7344, 0.8517, 0.9401, 0.9877, 0.9877, 0.9401, 0.8517, 0.7344, 0.6028], [0.6028, 0.7344, 0.8517, 0.9401, 0.9877, 0.9877, 0.9401, 0.8517, 0.7344, 0.6028], [0.5738, 0.6991, 0.8107, 0.8948, 0.9401, 0.9401, 0.8948, 0.8107, 0.6991, 0.5738], [0.5198, 0.6333, 0.7344, 0.8107, 0.8517, 0.8517, 0.8107, 0.7344, 0.6333, 0.5198], [0.4482, 0.5461, 0.6333, 0.6991, 0.7344, 0.7344, 0.6991, 0.6333, 0.5461, 0.4482], [0.3679, 0.4482, 0.5198, 0.5738, 0.6028, 0.6028, 0.5738, 0.5198, 0.4482, 0.3679]])
57. How to randomly place p elements in a 2D array? (★★☆)¶
In [99]:
z = torch.zeros(5, 5)
z
Out[99]:
tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.]])
In [100]:
p = 3
# note: randint may draw the same flat index twice, so fewer than p cells can be set;
# torch.randperm(z.numel())[:p] guarantees p distinct positions
z.put_(torch.randint(0, z.numel(), (p, )), torch.ones(p))
Out[100]:
tensor([[0., 0., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 1., 0., 0., 0.], [0., 0., 0., 0., 0.], [0., 1., 0., 1., 0.]])
58. Subtract the mean of each row of a matrix (★★☆)¶
In [101]:
z = torch.randint(0, 10, (2, 2), dtype=torch.float)
z
Out[101]:
tensor([[0., 7.], [0., 8.]])
In [102]:
# solution 1
z - z.mean(dim=1).reshape(-1, 1)
Out[102]:
tensor([[-3.5000, 3.5000], [-4.0000, 4.0000]])
In [103]:
Copied!
# # solution 2
z - z.mean(dim=1, keepdims=True)
# # solution 2
z - z.mean(dim=1, keepdims=True)
Out[103]:
tensor([[-3.5000, 3.5000], [-4.0000, 4.0000]])
59. How to sort an array/tensor by the nth column? (★★☆)¶
In [104]:
z = torch.randint(0, 10, (3, 3))
z
Out[104]:
tensor([[6, 0, 1], [5, 4, 8], [5, 9, 9]])
In [105]:
z[z[:, 1].argsort()]
Out[105]:
tensor([[6, 0, 1], [5, 4, 8], [5, 9, 9]])
60. How to tell if a given 2D array/tensor has null columns? (★★☆)¶
In [106]:
z = torch.randint(0, 10, (3, 3), dtype=torch.float)
z[0, 1] = torch.log(torch.tensor([-1.]))
z
Out[106]:
tensor([[0., nan, 4.], [6., 2., 1.], [9., 2., 3.]])
In [107]:
z.isnan().any(dim=0)
Out[107]:
tensor([False, True, False])
61. Find the nearest value from a given value in an array (★★☆)¶
In [108]:
z = torch.rand(10)
z
Out[108]:
tensor([0.4676, 0.1715, 0.8227, 0.4240, 0.6835, 0.4485, 0.8485, 0.7086, 0.9771, 0.2584])
In [109]:
x = 0.5
nearest_value_index = torch.flatten(torch.abs(z - x).argmin())
nearest_value = z[nearest_value_index]
nearest_value_index, nearest_value
Out[109]:
(tensor([0]), tensor([0.4676]))
🚫62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)¶
63. 🔧Create a named tensor (★★☆)¶
https://pytorch.org/docs/stable/named_tensor.html#creating-named-tensors
In [110]:
imgs = torch.randn(1, 2, 2, 3, names=('N', 'C', 'H', 'W'))
imgs.names
(PyTorch warns that named tensors and their associated APIs are an experimental feature and subject to change.)
Out[110]:
('N', 'C', 'H', 'W')
64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)¶
In [111]:
z1 = torch.zeros(10)
z2 = torch.randint(0, 10, (5,))
z1, z2
Out[111]:
(tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]), tensor([3, 5, 8, 8, 1]))
In [112]:
# index_add_ adds 1 for every occurrence, so repeated indices accumulate
z1.index_add_(0, z2, torch.ones(z2.numel()))
z1
Out[112]:
tensor([0., 1., 0., 1., 0., 1., 0., 0., 2., 0.])
⚠️Note that neither z1 = z1[z2] + 1 nor z1[z2] += 1 works here: with repeated indices, each position is incremented only once.
65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)¶
In [113]:
x = torch.tensor([1,2,3,4,5,6])
index = torch.tensor([1,3,9,3,4,1])
f = torch.bincount(index, x)
f
Out[113]:
tensor([0., 7., 0., 6., 5., 0., 0., 0., 0., 3.], dtype=torch.float64)
66. Considering a (w,h,3) image of (dtype=uint8), compute the number of unique colors (★★☆)¶
In [114]:
w, h = 256, 256
image = torch.randint(0, 4, (w, h, 3))
colors = torch.unique(image.reshape(-1, 3), dim=0)
len(colors)
Out[114]:
64
67. Considering a four dimensions array/tensor, how to get sum over the last two axis at once? (★★★)¶
In [115]:
z = torch.randint(0, 10, (2, 3, 4, 5))
z.sum(dim=(-2, -1))
Out[115]:
tensor([[76, 91, 90], [77, 96, 98]])
68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)¶
In [116]:
D = torch.rand(100)
S = torch.randint(0,10,(100,))
D, S
Out[116]:
(tensor([0.5690, 0.6294, 0.1967, 0.6663, 0.7349, 0.8090, 0.6465, 0.4757, 0.7544, 0.2880, 0.4359, 0.8469, 0.2878, 0.9058, 0.0692, 0.4114, 0.4194, 0.7091, 0.8311, 0.7559, 0.2742, 0.7287, 0.6091, 0.3165, 0.8048, 0.0957, 0.1402, 0.0854, 0.5251, 0.3859, 0.0299, 0.3823, 0.4522, 0.1774, 0.9133, 0.9405, 0.5548, 0.6002, 0.9097, 0.5888, 0.6377, 0.1302, 0.5465, 0.1004, 0.2581, 0.8762, 0.5438, 0.2888, 0.7940, 0.8110, 0.8090, 0.4681, 0.0791, 0.6083, 0.8325, 0.3287, 0.9695, 0.1953, 0.0160, 0.1868, 0.4152, 0.8241, 0.7723, 0.6150, 0.2700, 0.7227, 0.9099, 0.7400, 0.7662, 0.0664, 0.6784, 0.8831, 0.8740, 0.9224, 0.5443, 0.5881, 0.4542, 0.6867, 0.2959, 0.0187, 0.2366, 0.0708, 0.6050, 0.8888, 0.3366, 0.1237, 0.6413, 0.4217, 0.2097, 0.5530, 0.5272, 0.4623, 0.8588, 0.0237, 0.4579, 0.9789, 0.1385, 0.9111, 0.3959, 0.9429]), tensor([7, 8, 2, 8, 7, 0, 1, 1, 1, 0, 0, 0, 2, 4, 7, 3, 3, 0, 7, 6, 3, 6, 0, 9, 1, 2, 8, 2, 2, 3, 4, 2, 3, 0, 9, 2, 3, 8, 1, 9, 2, 0, 2, 5, 8, 3, 9, 1, 3, 7, 5, 1, 2, 3, 7, 9, 3, 5, 1, 0, 8, 8, 7, 3, 5, 1, 5, 2, 0, 8, 5, 8, 8, 8, 3, 5, 6, 8, 4, 4, 4, 1, 2, 5, 1, 9, 0, 8, 5, 4, 4, 6, 6, 2, 9, 6, 0, 5, 1, 9]))
In [117]:
D_sums = torch.bincount(S, D)
D_counts = torch.bincount(S)
D_means = D_sums / D_counts
D_means
Out[117]:
tensor([0.4782, 0.4908, 0.3958, 0.5754, 0.3667, 0.5561, 0.7065, 0.6600, 0.5683, 0.5269])
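The two bincounts can also be fused into one call with Tensor.scatter_reduce_ (available since PyTorch 1.12) using reduce="mean"; a sketch:

```python
import torch

D = torch.rand(100)
S = torch.randint(0, 10, (100,))
# reduce="mean" averages all D values scattered to the same index;
# include_self=False keeps the initial zeros out of the average
D_means = torch.zeros(10).scatter_reduce_(0, S, D, reduce="mean", include_self=False)
print(D_means)
```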
69. How to get the diagonal of a dot product? (★★★)¶
In [118]:
z1 = torch.rand((5, 5))
z2 = torch.rand((5, 5))
torch.diag(torch.matmul(z1, z2))
Out[118]:
tensor([1.5501, 2.0054, 1.3297, 0.6452, 0.9413])
In [119]:
torch.einsum("ij,ji->i", z1, z2)
Out[119]:
tensor([1.5501, 2.0054, 1.3297, 0.6452, 0.9413])
70. Consider the vector [1, 2, 3, 4, 5], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★)¶
In [120]:
x = torch.tensor([1, 2, 3, 4, 5])
n_zeros = 3
z = torch.zeros(x.shape[0] + n_zeros * (x.shape[0] - 1))
z[::(n_zeros + 1)] = x
z
Out[120]:
tensor([1., 0., 0., 0., 2., 0., 0., 0., 3., 0., 0., 0., 4., 0., 0., 0., 5.])
71. Consider an array of dimension (5,5,3), how to multiply it by an array with dimensions (5,5)? (★★★)¶
In [121]:
z1 = torch.ones((5,5,3))
z2 = 2 * torch.ones((5,5))
z1 * z2[:,:,None]
Out[121]:
tensor([[[2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]], [[2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]], [[2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]], [[2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]], [[2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.], [2., 2., 2.]]])
72. How to swap two rows of an array/tensor? (★★★)¶
In [122]:
z = torch.randint(0, 25, (3, 3))
z
Out[122]:
tensor([[21, 20, 4], [16, 6, 13], [ 3, 16, 3]])
In [123]:
z[[0, 1]] = z[[1, 0]]
z
Out[123]:
tensor([[16, 6, 13], [21, 20, 4], [ 3, 16, 3]])
😢73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★)¶
Skipped: the problem itself is too complex to make a good exercise for this practice set.
😢74. Given a sorted array C that corresponds to a bincount, how to produce an array A such that np.bincount(A) == C? (★★★)¶
Skipped: the problem itself is too complex to make a good exercise for this practice set.
75. How to compute averages using a sliding window over an array? (★★★)¶
In [124]:
z = torch.arange(20)
z
Out[124]:
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
In [125]:
z.unfold(0, 3, 1)
Out[125]:
tensor([[ 0, 1, 2], [ 1, 2, 3], [ 2, 3, 4], [ 3, 4, 5], [ 4, 5, 6], [ 5, 6, 7], [ 6, 7, 8], [ 7, 8, 9], [ 8, 9, 10], [ 9, 10, 11], [10, 11, 12], [11, 12, 13], [12, 13, 14], [13, 14, 15], [14, 15, 16], [15, 16, 17], [16, 17, 18], [17, 18, 19]])
In [126]:
z.unfold(0, 3, 1).mean(dim=1, dtype=torch.float32)
Out[126]:
tensor([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18.])
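The same sliding-window mean can be expressed with a pooling op: F.avg_pool1d with kernel_size=3 and stride=1 averages each window of three, after reshaping the vector to the (N, C, L) layout the op expects.

```python
import torch
import torch.nn.functional as F

z = torch.arange(20, dtype=torch.float32)
# avg_pool1d expects (N, C, L); kernel_size=3, stride=1 gives the sliding mean
means = F.avg_pool1d(z.view(1, 1, -1), kernel_size=3, stride=1).flatten()
print(means)
```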
76. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z[0],Z[1],Z[2]) and each subsequent row is shifted by 1 (last row should be (Z[-3],Z[-2],Z[-1]) (★★★)¶
In [127]:
z = torch.arange(10)
z
Out[127]:
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
In [128]:
z.unfold(0, 3, 1)
Out[128]:
tensor([[0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 6], [5, 6, 7], [6, 7, 8], [7, 8, 9]])
77. How to negate a boolean, or to change the sign of a float inplace? (★★★)¶
In [129]:
z = torch.randint(0, 2, (100, ), dtype=torch.bool)
z
Out[129]:
tensor([False, False, False, False, False, True, False, False, True, True, True, False, False, False, True, True, False, True, False, False, True, False, True, False, True, True, False, True, False, False, True, True, False, False, False, False, True, False, True, False, True, True, False, False, True, True, True, True, False, True, False, True, False, True, False, True, True, True, False, True, True, False, True, True, False, True, False, False, True, True, True, True, False, True, False, True, True, False, True, False, True, True, False, False, True, False, False, True, True, False, True, False, True, False, False, False, True, True, True, True])
In [130]:
torch.logical_not(z, out=z)
z
Out[130]:
tensor([ True, True, True, True, True, False, True, True, False, False, False, True, True, True, False, False, True, False, True, True, False, True, False, True, False, False, True, False, True, True, False, False, True, True, True, True, False, True, False, True, False, False, True, True, False, False, False, False, True, False, True, False, True, False, True, False, False, False, True, False, False, True, False, False, True, False, True, True, False, False, False, False, True, False, True, False, False, True, False, True, False, False, True, True, False, True, True, False, False, True, False, True, False, True, True, True, False, False, False, False])
In [131]:
z = torch.rand(100) - 0.5
z
Out[131]:
tensor([ 0.2737, -0.3257, 0.1720, 0.1693, -0.0274, 0.0055, -0.2016, -0.4919, -0.2478, 0.4483, 0.0925, 0.2205, -0.1876, -0.3535, 0.4640, -0.0445, 0.1670, 0.3736, 0.3298, -0.3814, -0.4645, -0.0972, -0.0102, 0.0484, 0.3509, 0.1071, -0.2917, -0.0077, -0.0901, -0.0977, 0.1859, 0.2581, 0.1736, -0.4868, 0.1925, 0.3998, 0.4349, -0.1751, 0.0054, 0.3102, -0.1915, -0.1606, 0.2868, 0.1804, -0.4233, 0.3218, -0.2049, -0.2130, 0.0044, -0.4835, -0.1977, 0.4475, -0.1030, 0.0801, 0.4823, 0.4635, -0.2809, -0.2447, -0.4450, -0.0363, 0.1554, -0.1333, 0.1371, 0.3181, 0.0750, -0.4773, -0.1347, -0.0891, 0.0058, -0.2867, 0.3646, 0.3762, -0.3235, 0.3318, 0.2681, 0.4350, 0.3443, 0.4270, -0.2350, -0.1299, 0.3006, -0.3941, 0.1738, 0.4157, -0.3499, 0.4663, -0.1574, -0.2523, 0.4422, 0.3075, 0.1568, 0.1863, 0.2309, 0.0562, -0.4726, -0.0473, -0.2834, -0.1430, 0.0554, -0.3195])
In [132]:
torch.negative(z, out=z)
z
Out[132]:
tensor([-0.2737, 0.3257, -0.1720, -0.1693, 0.0274, -0.0055, 0.2016, 0.4919, 0.2478, -0.4483, -0.0925, -0.2205, 0.1876, 0.3535, -0.4640, 0.0445, -0.1670, -0.3736, -0.3298, 0.3814, 0.4645, 0.0972, 0.0102, -0.0484, -0.3509, -0.1071, 0.2917, 0.0077, 0.0901, 0.0977, -0.1859, -0.2581, -0.1736, 0.4868, -0.1925, -0.3998, -0.4349, 0.1751, -0.0054, -0.3102, 0.1915, 0.1606, -0.2868, -0.1804, 0.4233, -0.3218, 0.2049, 0.2130, -0.0044, 0.4835, 0.1977, -0.4475, 0.1030, -0.0801, -0.4823, -0.4635, 0.2809, 0.2447, 0.4450, 0.0363, -0.1554, 0.1333, -0.1371, -0.3181, -0.0750, 0.4773, 0.1347, 0.0891, -0.0058, 0.2867, -0.3646, -0.3762, 0.3235, -0.3318, -0.2681, -0.4350, -0.3443, -0.4270, 0.2350, 0.1299, -0.3006, 0.3941, -0.1738, -0.4157, 0.3499, -0.4663, 0.1574, 0.2523, -0.4422, -0.3075, -0.1568, -0.1863, -0.2309, -0.0562, 0.4726, 0.0473, 0.2834, 0.1430, -0.0554, 0.3195])
😢78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0[i],P1[i])? (★★★)¶
😢79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P[j]) to each line i (P0[i],P1[i])? (★★★)¶
😢80. Consider an arbitrary array, write a function that extracts a subpart with a fixed shape and centered on a given element (pad with a fill value when necessary) (★★★)¶
81. Consider an array Z = [1,2,3,4,5,6,7,8,9,10,11,12,13,14], how to generate an array R = [[1,2,3,4], [2,3,4,5], [3,4,5,6], ..., [11,12,13,14]]? (★★★)¶
In [133]:
z = torch.arange(1, 15)
z
Out[133]:
tensor([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])
In [134]:
z.unfold(0, 4, 1)
Out[134]:
tensor([[ 1, 2, 3, 4], [ 2, 3, 4, 5], [ 3, 4, 5, 6], [ 4, 5, 6, 7], [ 5, 6, 7, 8], [ 6, 7, 8, 9], [ 7, 8, 9, 10], [ 8, 9, 10, 11], [ 9, 10, 11, 12], [10, 11, 12, 13], [11, 12, 13, 14]])
82. Compute a matrix rank (★★★)¶
In [135]:
z = torch.rand((10, 10))
torch.linalg.matrix_rank(z)
Out[135]:
tensor(10)
In [136]:
# by svd (torch.linalg.svd returns U, S, Vh — the conjugate transpose of V)
U, S, Vh = torch.linalg.svd(z)
torch.sum(S > 1e-10)
Out[136]:
tensor(10)
83. How to find the most frequent value in an array?¶
In [137]:
z = torch.randint(0, 10, (10, ))
z
Out[137]:
tensor([9, 2, 0, 2, 3, 5, 1, 0, 1, 9])
In [138]:
torch.bincount(z).argmax()
Out[138]:
tensor(0)
84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★)¶
In [139]:
matrix = torch.randint(0, 5, (10, 10))
matrix
Out[139]:
tensor([[1, 1, 1, 2, 4, 1, 2, 4, 2, 4], [1, 3, 3, 4, 4, 4, 4, 0, 1, 3], [3, 0, 3, 0, 1, 0, 3, 4, 0, 1], [4, 3, 0, 2, 2, 3, 3, 4, 1, 2], [4, 3, 0, 4, 4, 2, 3, 2, 1, 1], [3, 4, 4, 4, 1, 0, 4, 3, 3, 4], [0, 4, 4, 2, 2, 3, 0, 4, 4, 3], [4, 0, 0, 0, 4, 4, 0, 1, 1, 1], [2, 2, 1, 3, 2, 1, 4, 4, 4, 4], [3, 3, 0, 3, 0, 3, 0, 2, 4, 1]])
In [140]:
blocks = matrix.unfold(0, 3, 1).unfold(1, 3, 1)
blocks.shape
Out[140]:
torch.Size([8, 8, 3, 3])
In [141]:
blocks[0, :3, :, :]
Out[141]:
tensor([[[1, 1, 1], [1, 3, 3], [3, 0, 3]], [[1, 1, 2], [3, 3, 4], [0, 3, 0]], [[1, 2, 4], [3, 4, 4], [3, 0, 1]]])
🚫85. Create a 2D array subclass such that Z[i,j] == Z[j,i] (★★★)¶
86. Consider a set of p matrices with shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of the p matrix products at once? (result has shape (n,1)) (★★★)¶
In [142]:
p, n = 10, 20
matrices = torch.ones((p, n, n))
vectors = torch.ones((p, n, 1))
In [143]:
torch.matmul(matrices, vectors).sum(dim=0)
Out[143]:
tensor([[200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.], [200.]])
87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★)¶
In [144]:
z = torch.ones(16, 16)
z.shape
Out[144]:
torch.Size([16, 16])
In [145]:
z.unfold(0, 4, 4).unfold(1, 4, 4).sum(dim=(2, 3))
Out[145]:
tensor([[16., 16., 16., 16.], [16., 16., 16., 16.], [16., 16., 16., 16.], [16., 16., 16., 16.]])
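Because the 4x4 blocks tile the matrix exactly, a plain reshape does the same job: split each axis into (block index, offset within block) and sum over the two offset axes.

```python
import torch

z = torch.ones(16, 16)
# view as (block-row, row-in-block, block-col, col-in-block), then
# sum over the within-block axes
block_sums = z.reshape(4, 4, 4, 4).sum(dim=(1, 3))
print(block_sums)
```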
😢88. How to implement the Game of Life using numpy arrays? (★★★)¶
89. How to get the n largest values of an array (★★★)¶
In [146]:
z = torch.randint(0, 10, (10,))
z
Out[146]:
tensor([7, 8, 5, 5, 5, 9, 3, 7, 6, 6])
In [147]:
n = 5
z[torch.argsort(z)][-n:]
Out[147]:
tensor([6, 7, 7, 8, 9])
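PyTorch also has a dedicated op for this: torch.topk returns the n largest values (in descending order) together with their indices, avoiding the full sort.

```python
import torch

z = torch.randint(0, 10, (10,))
n = 5
# topk returns the n largest values (descending) and their indices
values, indices = torch.topk(z, n)
print(values)
```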
90. Given an arbitrary number of vectors, build the cartesian product (every combinations of every item) (★★★)¶
In [148]:
torch.cartesian_prod(torch.tensor([1, 2, 3]), torch.tensor([4, 5]))
Out[148]:
tensor([[1, 4], [1, 5], [2, 4], [2, 5], [3, 4], [3, 5]])
In [149]:
torch.cartesian_prod(torch.tensor([1, 2, 3]), torch.tensor([4, 5]), torch.tensor([6]))
Out[149]:
tensor([[1, 4, 6], [1, 5, 6], [2, 4, 6], [2, 5, 6], [3, 4, 6], [3, 5, 6]])
🚫91. How to create a record array from a regular array? (★★★)¶
92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)¶
In [150]:
z = torch.rand(50000000)
In [151]:
%timeit z.pow(3)
In [152]:
%timeit z * z * z
In [153]:
%timeit torch.einsum('i,i,i->i', z, z, z)
93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)¶
In [154]:
A = torch.randint(0, 5, (8, 3))
B = torch.randint(0, 5, (2, 2))
A, B
Out[154]:
(tensor([[2, 4, 0], [3, 4, 4], [1, 1, 0], [1, 0, 3], [3, 4, 4], [2, 0, 1], [3, 3, 0], [4, 0, 2]]), tensor([[3, 1], [4, 0]]))
In [155]:
# Step 1: Check if each element of B exists in A's rows
element_present = (A.unsqueeze(1).unsqueeze(3) == B.unsqueeze(0).unsqueeze(2)).any(dim=2)
# element_present.shape = (8, 2, 2)
# Step 2: Check if all elements of a B row exist in an A row
row_matches = element_present.all(dim=2)
# row_matches.shape = (8, 2)
# Step 3: Check if any B row matches the A row
final_mask = row_matches.any(dim=1)
# final_mask.shape = (8,)
# Get indices of matching rows
matching_indices = final_mask.nonzero().squeeze()
print(matching_indices)
print(A[matching_indices])
tensor([0, 3, 7]) tensor([[2, 4, 0], [1, 0, 3], [4, 0, 2]])
94. Considering a 10x3 matrix, extract rows with unequal values (e.g. [2,2,3]) (★★★)¶
In [156]:
z = torch.randint(0, 2, (10, 3))
z
Out[156]:
tensor([[0, 0, 0], [0, 0, 1], [0, 0, 1], [0, 1, 0], [1, 1, 1], [1, 1, 0], [1, 0, 1], [1, 1, 1], [0, 1, 1], [0, 0, 0]])
In [157]:
z[~(z == z[:, 0].reshape(-1, 1)).all(dim=1)]
Out[157]:
tensor([[0, 0, 1], [0, 0, 1], [0, 1, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1]])
95. Convert a vector of ints into a matrix binary representation (★★★)¶
In [158]:
z = torch.tensor([0, 1, 2, 3, 15, 16, 32, 64, 128], dtype=torch.uint8)
torch.from_numpy(np.unpackbits(z.numpy()[:, np.newaxis], axis=1))
Out[158]:
tensor([[0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 1, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0, 0]], dtype=torch.uint8)
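The NumPy round-trip can be avoided with bitwise shifts (tensor-tensor `>>` requires a reasonably recent PyTorch, 1.10+): shift each value right by 7..0 and mask the low bit.

```python
import torch

z = torch.tensor([0, 1, 2, 3, 15, 16, 32, 64, 128])
# shift each value right by 7, 6, ..., 0 and keep the low bit
bits = (z.unsqueeze(1) >> torch.arange(7, -1, -1)) & 1
print(bits)
```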
96. Given a two dimensional array, how to extract unique rows? (★★★)¶
In [159]:
z = torch.randint(0, 2, (6, 3))
z
Out[159]:
tensor([[0, 1, 0], [1, 0, 0], [0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 0, 0]])
In [160]:
torch.unique(z, dim=0)
Out[160]:
tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]])
97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★)¶
In [161]:
A = torch.rand(10)
B = torch.rand(10)
A, B
Out[161]:
(tensor([0.1024, 0.3653, 0.7698, 0.8873, 0.1167, 0.9317, 0.8050, 0.4174, 0.1395, 0.1321]), tensor([0.6029, 0.4783, 0.3630, 0.7097, 0.3746, 0.1730, 0.8515, 0.5386, 0.4599, 0.6880]))
In [162]:
torch.einsum('i->', A) # np.sum(A)
Out[162]:
tensor(4.6673)
In [163]:
torch.einsum('i,i->i', A, B) # A * B
Out[163]:
tensor([0.0617, 0.1747, 0.2794, 0.6298, 0.0437, 0.1612, 0.6855, 0.2248, 0.0642, 0.0909])
In [164]:
torch.einsum('i,i', A, B) # np.inner(A, B)
Out[164]:
tensor(2.4160)
In [165]:
torch.einsum('i,j->ij', A, B) # np.outer(A, B)
Out[165]:
tensor([[0.0617, 0.0490, 0.0372, 0.0726, 0.0383, 0.0177, 0.0872, 0.0551, 0.0471, 0.0704], [0.2202, 0.1747, 0.1326, 0.2592, 0.1368, 0.0632, 0.3110, 0.1967, 0.1680, 0.2513], [0.4641, 0.3682, 0.2794, 0.5464, 0.2883, 0.1332, 0.6555, 0.4147, 0.3540, 0.5296], [0.5350, 0.4244, 0.3221, 0.6298, 0.3324, 0.1535, 0.7556, 0.4780, 0.4081, 0.6105], [0.0704, 0.0558, 0.0424, 0.0828, 0.0437, 0.0202, 0.0994, 0.0629, 0.0537, 0.0803], [0.5617, 0.4456, 0.3382, 0.6612, 0.3490, 0.1612, 0.7934, 0.5018, 0.4285, 0.6410], [0.4854, 0.3850, 0.2922, 0.5714, 0.3015, 0.1393, 0.6855, 0.4336, 0.3702, 0.5539], [0.2517, 0.1996, 0.1515, 0.2962, 0.1563, 0.0722, 0.3554, 0.2248, 0.1920, 0.2872], [0.0841, 0.0667, 0.0506, 0.0990, 0.0523, 0.0241, 0.1188, 0.0751, 0.0642, 0.0960], [0.0797, 0.0632, 0.0480, 0.0938, 0.0495, 0.0229, 0.1125, 0.0712, 0.0608, 0.0909]])
😢98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)?¶
😢99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★)¶
😢100. Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★)¶