# Parametric and Non-Parametric Machine Learning Algorithms

Welcome to my blog. Today’s topic covers parametric and non-parametric models.
Linear regression is a parametric model, as opposed to a non-parametric one. In a parametric model, the number of parameters is fixed with respect to the sample size. In a non-parametric model, the effective number of parameters can grow with the sample size.

In an OLS model, the number of parameters is always the length of B (beta) plus one, for the intercept. More generally, machine learning can be summarized as learning a function (f) that maps input variables (x) to an output:
So Y = f(x)
An algorithm learns the target mapping function from training data. The function itself is unknown, so machine learning practitioners evaluate different ML algorithms to see which best approximates the underlying function.
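As a minimal sketch of this idea (with made-up, randomly generated data), the snippet below pretends the underlying function is quadratic, fits two candidate forms for f, and compares how well each approximates it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this underlying function is unknown: f(x) = x^2, observed with noise.
x = rng.uniform(-3, 3, 200)
y = x**2 + rng.normal(0, 0.1, 200)

# Two candidate forms for f: a straight line vs. a quadratic curve.
linear = np.poly1d(np.polyfit(x, y, deg=1))
quadratic = np.poly1d(np.polyfit(x, y, deg=2))

# Compare how well each candidate approximates the underlying function.
mse_linear = np.mean((linear(x) - y) ** 2)
mse_quadratic = np.mean((quadratic(x) - y) ** 2)
```

On this data the quadratic fit has a far lower error, which is exactly the kind of comparison practitioners run when the true f is unknown.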

Different algorithms make different assumptions (or introduce different biases) about the form of the function and how it can be learned.

## Parametric Model

A parametric model summarizes data with a set of parameters of fixed size (independent of the number of training examples). No matter how much data you have, a parametric model’s number of parameters does not change; only the values of those parameters are learned.

Imagine you’re trying to learn a secret recipe for the perfect pancake. Here’s how a parametric model, like a simple recipe, would work:

• Fixed Ingredients (Parameters): The recipe has a set number of ingredients (like flour, milk, and eggs), no matter how many pancakes you want to make. These ingredients are like the parameters in a parametric model. They are fixed in number, regardless of how much data you have.
• Focus on Quantities (Coefficients): The key to the perfect pancake is the amount of each ingredient. These amounts are like the coefficients in a parametric model. The model learns the best coefficients (amounts) for each ingredient (variable) from the data (your past attempts at pancakes!).

This kind of algorithm requires two steps.

1. Select a form for the function
2. Learn the coefficients for the function from the training data.

For example: f(x) = Y = b0 + b1*x1 + b2*x2 + b3*x3
Note: b denotes a beta coefficient.

### Two Steps to Success:

1. Choose a Recipe (Function Form): Just like you choose a basic pancake recipe, the model needs a starting point (function form).
2. Perfect the Amounts (Learn Coefficients): By trying different amounts of ingredients (training data), the model learns the ideal amounts (coefficients) for the best pancakes (predictions).

For example: imagine your recipe considers flour (x1), milk (x2), and eggs (x3). The model would look like this:
PerfectPancakes(deliciousness) = b0 + b1 * flour(x1) + b2 * milk(x2) + b3 * eggs(x3)

• b0, b1, b2, and b3 are the coefficients (amounts to learn)
• x1, x2, and x3 are the independent variables (flour, milk, and eggs)

By learning the best values for b0, b1, b2, and b3, the model becomes a “pancake predictor,” telling you how delicious a pancake made with a given amount of each ingredient will be (a prediction)!
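Here is a minimal sketch of step 2 (learning the coefficients) using ordinary least squares; the pancake measurements and scores below are entirely made up for illustration:

```python
import numpy as np

# Hypothetical past attempts: [flour, milk, eggs] and a deliciousness score.
X = np.array([
    [1.0, 0.8, 2],
    [1.2, 1.0, 2],
    [0.8, 0.6, 1],
    [1.5, 1.2, 3],
    [1.0, 1.0, 2],
], dtype=float)
y = np.array([7.0, 8.5, 5.0, 9.0, 7.5])

# Prepend a column of ones so b0 (the intercept) is learned alongside b1..b3.
A = np.hstack([np.ones((len(X), 1)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
b0, b1, b2, b3 = coeffs

# Predict deliciousness for a new batch: 1.1 cups flour, 0.9 cups milk, 2 eggs.
prediction = b0 + b1 * 1.1 + b2 * 0.9 + b3 * 2
```

With the learned b0–b3 in hand, predicting a new pancake’s deliciousness is just a weighted sum, which is what makes parametric models so fast at prediction time.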

### Why is it called “linear”?

Because this example uses a simple equation (like a straight line), it’s called a linear model. It focuses on finding the best straight line through the data points to make predictions.
Assuming the function has the form of a line simplifies the learning process: all we need to find are the coefficient values of the line equation, and we have a predictive model for the problem.
Because the model is a linear combination of the input variables, we call this a linear machine-learning algorithm.

Note: the underlying function is unknown, and it may not be the linear function we assumed.
In that case, the assumption is wrong and the approach will produce poor results.

Examples of parametric models include linear regression, logistic regression, linear discriminant analysis, the perceptron, and naive Bayes.

### Benefits of Parametric Models (Think: Easy-to-follow Recipe)

• Easier to Understand: Imagine following a recipe with a set number of ingredients (parameters). You know exactly what’s involved and how each ingredient affects the final dish (prediction). This is similar to a parametric model. It’s clear how the model arrives at its predictions based on the chosen function and coefficients.
• Faster Learning: Just like following a pre-defined recipe is faster than experimenting with random ingredients, parametric models learn quickly from data. They don’t need to explore many possibilities because they have a fixed structure.
• Less Data Required: Even if the model’s predictions aren’t a perfect fit for the data (like a slightly undercooked pancake), it can still be useful. Parametric models can work well with smaller datasets compared to some other algorithms.

### Limitations of Parametric Models (Think: Limited Recipe Options)

• Constrained: You are stuck with a specific recipe that might not be the absolute best. Parametric models are limited by the chosen function form. They might not capture the full complexity of real-world data.
• Simpler Problems Only: These models are well-suited for problems with relatively simple relationships between variables. For very complex problems, a parametric model might not be flexible enough to capture all the nuances.
• Imperfect Match: In real life, things are rarely perfect. Similarly, a parametric model’s predictions might not perfectly match the underlying relationships in the data. This is because the model is constrained by its chosen form.

## Non-Parametric Machine Learning – No Assumptions Needed!

Imagine you’re trying to predict how much someone will tip at a restaurant based on the size of their bill. Nonparametric algorithms are like taking a flexible approach:

• No Strict Recipe: Unlike parametric models that follow a set form (like a recipe), nonparametric methods don’t make strong assumptions about how the tip amount changes with the bill size. They are open to learning any pattern from the data.
• Learn from Examples: These algorithms are like looking at past tips (training data) to understand the relationship with bill sizes. They can capture complex patterns that might not fit a simple equation.

### Examples of Nonparametric Models:

1. k-Nearest Neighbors: Imagine you ask a friend, “How much should I tip for a \$50 bill?” They might say, “Look at what others tipped for similar bills (past data).” This is similar to k-nearest neighbors, which predicts a tip based on the most similar past tips (neighbors) for a given bill size.
2. Decision Trees: Think of a decision tree like a choose-your-own-adventure story. Based on the bill size, you might ask, “Is it a fancy restaurant?” If yes, you might predict a higher tip. These models ask a series of questions about the data to make predictions.
3. Support Vector Machines (SVMs): with a nonlinear kernel, an SVM’s prediction depends on a set of support vectors chosen from the training data, so the model’s effective complexity can grow with the sample size.
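To make the k-nearest-neighbors idea concrete, here’s a minimal hand-rolled sketch for the tipping example (the bills and tips are made up, and this is a toy 1-D k-NN, not a library implementation):

```python
import numpy as np

# Hypothetical past bills and the tips left for them (training data).
bills = np.array([10, 20, 30, 40, 50, 60, 80, 100], dtype=float)
tips = np.array([1.5, 3.0, 4.5, 6.5, 8.0, 9.0, 13.0, 16.0])

def knn_predict(bill, k=3):
    """Predict a tip as the average tip of the k most similar past bills."""
    distances = np.abs(bills - bill)     # similarity = closeness in bill size
    nearest = np.argsort(distances)[:k]  # indices of the k nearest neighbors
    return tips[nearest].mean()

tip_for_50 = knn_predict(50.0)  # averages the tips for the bills nearest $50
```

Notice there is no fixed set of coefficients here: the “model” is the training data itself, which is why a nonparametric model’s effective size grows with the data.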

### Benefits (Pros) of Nonparametric Models:

• Flexibility: They can handle complex relationships between variables, unlike some parametric models that are more rigid.
• Power: Since they make fewer assumptions, they can potentially capture more nuanced patterns in the data, leading to better predictions.

### Limitations (Cons) of Nonparametric Models:

• Data Hungry: They often need more data compared to parametric models to learn effectively.
• Slower Training: Because they’re more flexible, training can take longer.
• Less Explainability: It can be harder to understand exactly why a nonparametric model makes a specific prediction.

## Summary

Parametric:

• Simple & Fast: Easy to understand and learn from data quickly.
• Fewer Data Needed: Can work well even with smaller datasets.
• Limited Flexibility: Constrained by chosen function, may not capture complex relationships.

Non-Parametric:

• Flexible & Powerful: Can learn any pattern from data, potentially more accurate for complex problems.
• Data Hungry: Requires more data to learn effectively.
• Slower & Less Explainable: Training can take longer and understanding predictions might be harder.