# torch_study_3
"/home/yossef/notes/personal/ml/torch_study/torch_study_3.md"
path: personal/ml/torch_study/torch_study_3.md
- **fileName**: torch_study_3
- **Created on**: 2026-04-02 19:00:36
Let's dive into tensors: what PyTorch is and why we use it.
Many of the problems people run into when coding a model come down to tensor shapes and how to handle them, so the best solution is to understand tensor shapes well.
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
# dynamic type inference => PyTorch picks the dtype without human intervention
data = torch.tensor([1,2,3])
print(data)
data.dtype
# convert a numpy array to tensor values
data = np.array([1,2,123,1,41])
print(data)
data = torch.from_numpy(data)
print(data)
print(data.dtype)
print(len(data))
print(data.shape)
tensor([1, 2, 3])
[  1   2 123   1  41]
tensor([  1,   2, 123,   1,  41])
torch.int64
5
torch.Size([5])
print(data[1])
tensor(2)
for i in range(5):
    print(data[i].item())
1
2
123
1
41
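One thing worth knowing about `torch.from_numpy()`: it shares memory with the source array, while `torch.tensor()` makes a copy. A small sketch (variable names are illustrative):

```python
import numpy as np
import torch

# torch.from_numpy() shares memory with the source array:
# changing the array changes the tensor too.
arr = np.array([1, 2, 123, 1, 41])
t = torch.from_numpy(arr)
arr[0] = 99
print(t[0].item())       # 99 -- the tensor sees the change

# torch.tensor() copies, so the result stays independent.
t_copy = torch.tensor(arr)
arr[0] = 1
print(t_copy[0].item())  # still 99
print(t[0].item())       # 1 -- the shared view follows the array
```

This matters when you preprocess with NumPy and then train with PyTorch: an accidental in-place edit of the array silently changes your tensor.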
Now, with pandas, we're going to use a CSV file and create a DataFrame.
So what is a DataFrame?
Plain data looks like this: 1 2 123 1 41
A DataFrame arranges data in tables, like SQL, so the data is more organized and you can run many operations on it with nice, easy functions, much like SQL.
data = pd.read_csv("assets/scripts/data.csv")
data
|   | distance_miles | delivery_time_minutes |
|---|---|---|
| 0 | 1.60 | 7.22 |
| 1 | 13.09 | 32.41 |
| 2 | 6.97 | 17.47 |
data.head()
|   | distance_miles | delivery_time_minutes |
|---|---|---|
| 0 | 1.60 | 7.22 |
| 1 | 13.09 | 32.41 |
| 2 | 6.97 | 17.47 |
data.tail()
|   | distance_miles | delivery_time_minutes |
|---|---|---|
| 0 | 1.60 | 7.22 |
| 1 | 13.09 | 32.41 |
| 2 | 6.97 | 17.47 |
data.shape
(3, 2)
data.describe()
|   | distance_miles | delivery_time_minutes |
|---|---|---|
| count | 3.000000 | 3.000000 |
| mean | 7.220000 | 19.033333 |
| std | 5.749078 | 12.667558 |
| min | 1.600000 | 7.220000 |
| 25% | 4.285000 | 12.345000 |
| 50% | 6.970000 | 17.470000 |
| 75% | 10.030000 | 24.940000 |
| max | 13.090000 | 32.410000 |
data.all()
distance_miles           True
delivery_time_minutes    True
dtype: bool
data
|   | distance_miles | delivery_time_minutes |
|---|---|---|
| 0 | 1.60 | 7.22 |
| 1 | 13.09 | 32.41 |
| 2 | 6.97 | 17.47 |
data.query("distance_miles > 6")
|   | distance_miles | delivery_time_minutes |
|---|---|---|
| 1 | 13.09 | 32.41 |
| 2 | 6.97 | 17.47 |
data
|   | distance_miles | delivery_time_minutes |
|---|---|---|
| 0 | 1.60 | 7.22 |
| 1 | 13.09 | 32.41 |
| 2 | 6.97 | 17.47 |
data = data.values
data.dtype
dtype('float64')
data.shape
(3, 2)
isinstance(data, np.ndarray)
True
data = torch.tensor(data)
data
tensor([[ 1.6000,  7.2200],
        [13.0900, 32.4100],
        [ 6.9700, 17.4700]], dtype=torch.float64)
data.dtype
torch.float64
zeros = torch.zeros(3,2)
zeros
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])
ones = torch.ones(3,3)
ones
tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])
rand_values = torch.rand(3,3)
rand_values
tensor([[0.1618, 0.2334, 0.5655], [0.8433, 0.8989, 0.7817], [0.9761,
0.2819, 0.5522]])
## For sequence values you can do this
data = torch.arange(0, 10, step=1)
data
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
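A related helper worth knowing alongside `torch.arange()` is `torch.linspace()`. A small sketch of the difference:

```python
import torch

# torch.arange(start, end, step) excludes the end value,
# while torch.linspace(start, end, steps) includes it and
# takes the *count* of points rather than a step size.
a = torch.arange(0, 10, step=2)     # tensor([0, 2, 4, 6, 8])
b = torch.linspace(0, 10, steps=5)  # 5 evenly spaced points from 0 to 10
print(a)
print(b)
```

`arange` is handy for integer indices; `linspace` for evenly spaced float grids.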
rand_values = torch.rand(3,3)
rand_values.shape
torch.Size([3, 3])
zeros = torch.zeros(3,4)
zeros
# squeeze() removes only size-1 dimensions; (3, 4) has none, so this is a no-op
zeros = zeros.squeeze()
zeros
tensor([[0., 0., 0., 0.],
        [0., 0., 0., 0.],
        [0., 0., 0., 0.]])
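Since the (3, 4) tensor above has no size-1 dimensions, `squeeze()` leaves it unchanged. A small sketch where it actually does something:

```python
import torch

# squeeze() removes dimensions of size 1.
x = torch.zeros(1, 3, 1, 4)
print(x.shape)             # torch.Size([1, 3, 1, 4])
print(x.squeeze().shape)   # torch.Size([3, 4])   -- all size-1 dims removed
print(x.squeeze(0).shape)  # torch.Size([3, 1, 4]) -- only dim 0 removed
```

Passing an explicit dimension is safer in pipelines: `squeeze()` with no argument can silently drop a batch dimension when the batch happens to have size 1.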
ones = torch.ones(3,4)
ones
ones = ones.unsqueeze(2)
ones
tensor([[[1.],
         [1.],
         [1.],
         [1.]],

        [[1.],
         [1.],
         [1.],
         [1.]],

        [[1.],
         [1.],
         [1.],
         [1.]]])
So what is reshape? It changes the tensor's shape, but the new shape must account for all the elements in the tensor.
For example, if tensor.shape is (3, 3), valid new shapes include (1, 9) or (9, 1), since both keep 9 elements.
ones = torch.ones(3,3)
ones.shape
torch.Size([3, 3])
ones.reshape(1,9)
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1.]])
ones.shape
torch.Size([3, 3])
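Note that `reshape()` returns a new tensor (which is why `ones.shape` is still (3, 3) above). A small sketch, including the handy `-1` placeholder:

```python
import torch

# reshape() must preserve the total element count (3*3 = 9);
# -1 asks PyTorch to infer that dimension for you.
ones = torch.ones(3, 3)
print(ones.reshape(9, 1).shape)   # torch.Size([9, 1])
print(ones.reshape(-1).shape)     # torch.Size([9])  -- flatten
print(ones.reshape(1, -1).shape)  # torch.Size([1, 9])
```

`-1` is what you reach for when one dimension (often the batch size) is unknown at write time.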
transpose swaps the specified dimensions of a tensor.
ones = torch.ones(3,1)
ones
tensor([[1.],
        [1.],
        [1.]])
transposed = ones.transpose(0, 1)
transposed
tensor([[1., 1., 1.]])
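The swap is easier to see on a non-square tensor, where the shape visibly changes. A small sketch:

```python
import torch

# Swapping dims 0 and 1 turns shape (2, 3) into (3, 2).
m = torch.tensor([[1, 2, 3],
                  [4, 5, 6]])
print(m.shape)                  # torch.Size([2, 3])
print(m.transpose(0, 1))
print(m.transpose(0, 1).shape)  # torch.Size([3, 2])

# m.T is a common shorthand for the 2-D case.
print(torch.equal(m.T, m.transpose(0, 1)))  # True
```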
Combining Tensors
In the data preparation stage, you might need to combine data from
different sources or merge separate batches into one larger dataset.
torch.cat(): Joins a sequence of tensors along an existing dimension. Note: All tensors must have the same shape in dimensions other than the one being concatenated.
ones = torch.ones(3,3)
zeros = torch.zeros(3,3)
full_tensor = torch.cat((ones, zeros), dim=1)
full_tensor
full_tensor.dtype
torch.float32
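Alongside `torch.cat()` there is `torch.stack()`, which adds a new dimension instead of extending an existing one. A small sketch of the contrast:

```python
import torch

# cat() joins along an *existing* dimension;
# stack() creates a *new* one. Both inputs are (3, 3).
ones = torch.ones(3, 3)
zeros = torch.zeros(3, 3)

print(torch.cat((ones, zeros), dim=0).shape)    # torch.Size([6, 3])
print(torch.cat((ones, zeros), dim=1).shape)    # torch.Size([3, 6])
print(torch.stack((ones, zeros), dim=0).shape)  # torch.Size([2, 3, 3])
```

`stack()` is what you typically use to merge individual samples into a batch.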
x = torch.tensor([
[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]
])
x
tensor([[ 1,  2,  3,  4],
        [ 5,  6,  7,  8],
        [ 9, 10, 11, 12]])
x[0][0] # first element
tensor(1)
x[-1] # last row
tensor([ 9, 10, 11, 12])
x[1] # second row
tensor([5, 6, 7, 8])
x[-1][-1] # last element
tensor(12)
x[:, -1] # last column
tensor([ 4, 8, 12])
newx = x[0:2, 2:]
newx.shape
torch.Size([2, 2])
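One caveat about slices like `x[0:2, 2:]`: basic slicing returns a view that shares memory with the original tensor. A small sketch (values are illustrative):

```python
import torch

# A slice is a *view*: in-place writes show up in both tensors.
x = torch.tensor([[1,  2,  3,  4],
                  [5,  6,  7,  8],
                  [9, 10, 11, 12]])
view = x[0:2, 2:]
view[0, 0] = 99
print(x[0, 2].item())   # 99 -- the original changed too

# Use .clone() when you need an independent copy.
safe = x[0:2, 2:].clone()
safe[0, 0] = 3
print(x[0, 2].item())   # still 99 -- the clone is independent
```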
## 4 - Mathematical & Logical Operations
At their core, neural networks are performing mathematical
computations. A single neuron, for example, calculates a weighted sum
of its inputs and adds a bias. PyTorch is optimized to perform these
operations efficiently across entire tensors at once, which is what
makes training so fast.
### 4.1 - Arithmetic
These operations are the foundation of how a neural network processes
data. You'll see how PyTorch handles element-wise calculations and
uses a powerful feature called broadcasting to simplify your code.
- Element-wise Operations: standard math operators (`+`, `*`) that apply to each element independently.
a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])
print("TENSOR A:", a)
print("TENSOR B", b)
print("-" * 60)
# Element-wise addition
element_add = a + b
print("\nAFTER PERFORMING ELEMENT-WISE ADDITION:", element_add, "\n")
TENSOR A: tensor([1, 2, 3])
TENSOR B tensor([4, 5, 6])
AFTER PERFORMING ELEMENT-WISE ADDITION: tensor([5, 7, 9])
print("TENSOR A:", a)
print("TENSOR B", b)
print("-" * 65)
# Element-wise multiplication
element_mul = a * b
print("\nAFTER PERFORMING ELEMENT-WISE MULTIPLICATION:", element_mul, "\n")
TENSOR A: tensor([1, 2, 3])
TENSOR B tensor([4, 5, 6])
AFTER PERFORMING ELEMENT-WISE MULTIPLICATION: tensor([ 4, 10, 18])
- Dot Product (`torch.matmul()`): calculates the dot product of two vectors or the matrix product of two matrices.
print("TENSOR A:", a)
print("TENSOR B", b)
print("-" * 65)
# Dot product
dot_product = torch.matmul(a, b)
print(dot_product, "\n")
TENSOR A: tensor([1, 2, 3])
TENSOR B tensor([4, 5, 6])
tensor(32)
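For 1-D tensors `matmul` is the dot product (1*4 + 2*5 + 3*6 = 32); for 2-D tensors it is matrix multiplication, where the inner dimensions must agree. A small sketch:

```python
import torch

# (2, 3) @ (3, 2) -> (2, 2): inner dims (3 and 3) must match.
A = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])
B = torch.tensor([[1., 0.],
                  [0., 1.],
                  [1., 1.]])
C = torch.matmul(A, B)  # the @ operator does the same: A @ B
print(C)                # tensor([[ 4.,  5.], [10., 11.]])
print(C.shape)          # torch.Size([2, 2])
```

This is exactly the weighted-sum computation a linear layer performs over a batch of inputs.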
- Broadcasting: The automatic expansion of smaller tensors to match the shape of larger tensors during arithmetic operations.
- Broadcasting allows operations between tensors with compatible
shapes, even if they don't have the exact same dimensions.
a = torch.tensor([1, 2, 3])
b = torch.tensor([[1],
[2],
[3]])
print("TENSOR A:", a)
print("SHAPE:", a.shape)
print("\nTENSOR B\n\n", b)
print("\nSHAPE:", b.shape)
print("-" * 65)
# Apply broadcasting
c = a + b
print("\nTENSOR C:\n\n", c)
print("\nSHAPE:", c.shape, "\n")
TENSOR A: tensor([1, 2, 3])
SHAPE: torch.Size([3])

TENSOR B
tensor([[1],
        [2],
        [3]])

SHAPE: torch.Size([3, 1])

TENSOR C:
tensor([[2, 3, 4],
        [3, 4, 5],
        [4, 5, 6]])

SHAPE: torch.Size([3, 3])
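A practical use of broadcasting is subtracting a per-column mean from a whole matrix without writing any loops. A small sketch (the numbers reuse the delivery dataset above for illustration):

```python
import torch

# The (2,) vector of column means broadcasts across all 3 rows.
m = torch.tensor([[ 1.60,  7.22],
                  [13.09, 32.41],
                  [ 6.97, 17.47]])
col_means = m.mean(dim=0)    # shape (2,)
centered = m - col_means     # (3, 2) - (2,) -> (3, 2)
print(centered.mean(dim=0))  # approximately tensor([0., 0.])
```

Mean-centering like this is a standard first step when preparing features for a model.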
### 4.2 - Logic & Comparisons
Logical operations are powerful tools for data preparation and
analysis. They allow you to create boolean masks to filter, select, or
modify your data based on specific conditions you define.
- Comparison Operators: element-wise comparisons (`>`, `==`, `<`) that produce a boolean tensor.
temperatures = torch.tensor([20, 35, 19, 35, 42])
print("TEMPERATURES:", temperatures)
print("-" * 50)
### Comparison Operators (>, <, ==)
# Use '>' (greater than) to find temperatures above 30
is_hot = temperatures > 30
# Use '<=' (less than or equal to) to find temperatures 20 or below
is_cool = temperatures <= 20
# Use '==' (equal to) to find temperatures exactly equal to 35
is_35_degrees = temperatures == 35
print("\nHOT (> 30 DEGREES):", is_hot)
print("COOL (<= 20 DEGREES):", is_cool)
print("EXACTLY 35 DEGREES:", is_35_degrees, "\n")
TEMPERATURES: tensor([20, 35, 19, 35, 42])
HOT (> 30 DEGREES): tensor([False,  True, False,  True,  True])
COOL (<= 20 DEGREES): tensor([ True, False,  True, False, False])
EXACTLY 35 DEGREES: tensor([False,  True, False,  True, False])
is_morning = torch.tensor([True, False, False, True])
is_raining = torch.tensor([False, False, True, True])
print("IS MORNING:", is_morning)
print("IS RAINING:", is_raining)
print("-" * 50)
### Logical Operators (&, |)
# Use '&' (AND) to find when it's both morning and raining
morning_and_raining = (is_morning & is_raining)
# Use '|' (OR) to find when it's either morning or raining
morning_or_raining = is_morning | is_raining
print("\nMORNING & (AND) RAINING:", morning_and_raining)
print("MORNING | (OR) RAINING:", morning_or_raining)
IS MORNING: tensor([ True, False, False,  True])
IS RAINING: tensor([False, False,  True,  True])

MORNING & (AND) RAINING: tensor([False, False, False,  True])
MORNING | (OR) RAINING: tensor([ True, False,  True,  True])
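The boolean tensors produced above are not just for display: they can index a tensor directly, keeping only the elements where the mask is True. A small sketch:

```python
import torch

# Boolean masks filter data without loops.
temperatures = torch.tensor([20, 35, 19, 35, 42])
is_hot = temperatures > 30

print(temperatures[is_hot])  # tensor([35, 35, 42])
print(is_hot.sum().item())   # 3 -- True counts as 1, so sum() counts matches

# torch.where picks element-wise between two values.
capped = torch.where(is_hot, torch.tensor(30), temperatures)
print(capped)                # tensor([20, 30, 19, 30, 30])
```

This mask-and-filter pattern is the usual way to clean outliers or select a subset of samples.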
data = torch.tensor([10.0, 20.0, 30.0, 40.0, 50.0])
print("DATA:", data)
print("-" * 45)
# Calculate the mean
data_mean = data.mean()
print("\nCALCULATED MEAN:", data_mean, "\n")
DATA: tensor([10., 20., 30., 40., 50.])
CALCULATED MEAN: tensor(30.)
print("DATA:", data)
print("-" * 45)
# Calculate the standard deviation
data_std = data.std()
print("\nCALCULATED STD:", data_std, "\n")
DATA: tensor([10., 20., 30., 40., 50.])
CALCULATED STD: tensor(15.8114)
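`mean()` and `std()` combine into z-score standardization, a very common preprocessing step before training. A small sketch:

```python
import torch

# Standardize: subtract the mean, divide by the standard deviation.
vals = torch.tensor([10., 20., 30., 40., 50.])
standardized = (vals - vals.mean()) / vals.std()

print(standardized.mean())  # approximately tensor(0.)
print(standardized.std())   # tensor(1.)
```

After this transform the data has zero mean and unit spread, which usually helps gradient-based training converge.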
print("DATA:", data)
print("DATA TYPE:", data.dtype)
print("-" * 45)
# Cast the tensor to an int type
int_tensor = data.int()
print("\nCASTED DATA:", int_tensor)
print("CASTED DATA TYPE", int_tensor.dtype)
DATA: tensor([10., 20., 30., 40., 50.])
DATA TYPE: torch.float32

CASTED DATA: tensor([10, 20, 30, 40, 50], dtype=torch.int32)
CASTED DATA TYPE torch.int32
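`.int()` and `.float()` are shorthands; `.to()` is the general casting form. One caveat worth a sketch: casting float to int truncates toward zero rather than rounding.

```python
import torch

# float -> int casting truncates toward zero, it does not round.
vals = torch.tensor([10.9, -2.7])
print(vals.to(torch.int32))          # tensor([10, -2]) -- truncated
print(vals.round().to(torch.int32))  # tensor([11, -3]) -- round first if needed
```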
x = torch.tensor([25.0])  # shape (1,)
x = x.unsqueeze(0)        # shape (1, 1)
x = x.squeeze()           # all size-1 dims removed -> shape ()
print(x.shape)
torch.Size([])
data = torch.tensor([10., 20., 30., 40., 50.], dtype=float)
data
tensor([10., 20., 30., 40., 50.], dtype=torch.float64)
data.unsqueeze(1)
tensor([[10.],
        [20.],
        [30.],
        [40.],
        [50.]], dtype=torch.float64)
before:torch_study_2
continue:C1M1_Assignment