- 487
- 11,889,610
ritvikmath
Joined Jun 14, 2012
Data science for all.
The Most Important Integral in Data Science
Calculus and Data Science. Best Friends.
ROC Curve Video : ua-cam.com/video/SHM_GgNI4fY/v-deo.html
0:00 Intro
3:46 AUC and Tradeoffs
7:56 Integral Form of AUC
11:36 Comparing AUCs
Views: 4,997
Videos
Why You Shouldn't Trust Your ML Models (...too much)
4.7K views · 1 month ago
Whether you call it feedback loops, selection bias, etc., this pesky problem rears its head in almost every problem out there. 0:00 The Problem 9:35 The Solution
I Traded $1000 with Every Tree-Based Machine Learning Model
3.7K views · 2 months ago
Trading $1000 using every tree-based machine learning model! Decision Trees : ua-cam.com/video/kakLu2is3ds/v-deo.html Random Forest : ua-cam.com/video/w-eWTxbRQcU/v-deo.html Gradient Boosted Decision Trees : ua-cam.com/video/en2bmeB4QUo/v-deo.html 0:00 Why Trees? 2:06 The Method 11:37 The Results
Cohen's Kappa : Data Science Basics
2.5K views · 2 months ago
All about Cohen's Kappa in Data Science!
When Higher Volatility is Better in the Stock Market : Call Options
2.2K views · 2 months ago
What affects the premium of call options and how higher volatility can actually be valuable in the stock market. Visuals Created with Excalidraw : excalidraw.com/ 0:00 Strike Price & Expiration Date 7:12 What about Volatility? 11:26 Next Up
Call Options : The Intuition and Math You Need
3K views · 2 months ago
All the intuition and math you need to know about call options! 0:00 Intro to Call Options 3:10 Visual Profit Analysis 9:16 Call Options vs Stocks 14:49 Pricing Call Options
I Day Traded $1000 with the Hidden Markov Model
12K views · 2 months ago
Method and results of day trading $1K using the Hidden Markov Model in Data Science 0:00 Method 6:57 Results
The Best Data Visualization of All Time
6K views · 3 months ago
The power of Sankey Diagrams! Visuals Created with Excalidraw excalidraw.com/ Code : github.com/ritvikfood/UA-camVideoCode/blob/main/Sankey Diagram.ipynb
I Day Traded $1000 : Autoregressive (AR) vs. Recurrent Neural Network (RNN)
29K views · 3 months ago
Comparing the Autoregressive (AR) model vs. the Recurrent Neural Network (RNN) model for stock return prediction! 0:00 Intro 1:30 AR Model 5:17 RNN Model 10:46 Results Visuals Created with Excalidraw excalidraw.com/
I Used Data Science to Buy the Dip
7K views · 3 months ago
Training a machine learning model to predict the bottom of the market and buy the dip!
How the Heck do Bending Genetics work in Avatar the Last Airbender?
1.5K views · 3 months ago
Your dad's an airbender, your mom's a waterbender. What kind of bender are you? Inspiration thread : www.reddit.com/r/FanTheories/comments/7yt1zm/genetics_of_bending_avatar_the_last_airbender/
The Dirichlet Distribution : Data Science Basics
4.4K views · 4 months ago
Beta Distribution Video : ua-cam.com/video/1k8lF3BriXM/v-deo.html 0:00 Recap of Beta Distribution 2:43 Intro to Dirichlet Distribution 6:01 PDF of Dirichlet Distribution 15:07 Statistics and Convergence
Super Bowl Prediction by a Data Scientist
3K views · 4 months ago
Link to Data : www.kaggle.com/datasets/tobycrabtree/nfl-scores-and-betting-data?resource=download&select=spreadspoke_scores.csv Logistic Regression Video : ua-cam.com/video/9zw76PT3tzs/v-deo.html RNN Video : ua-cam.com/video/DFZ1UA7-fxY/v-deo.html
Kernel Density Estimation : Data Science Concepts
15K views · 4 months ago
All about Kernel Density Estimation (KDE) in data science. Fish Icon: www.freepik.com/search?format=search&icon_color=red&last_filter=icon_color&last_value=red&query=fish&type=icon 0:00 Why do KDE? 2:30 Good vs. Bad KDE 5:35 Intuition and Math 15:09 Bandwidth Selection Theory 19:45 Bandwidth Selection in Practice
A Data Scientist's Prediction for the 2024 Election
9K views · 4 months ago
KL Divergence Video : ua-cam.com/video/q0AkK8aYbLY/v-deo.html Monte Carlo Simulations Video : ua-cam.com/video/EaR3C4e600k/v-deo.html Lightbulb Icon : www.freepik.com/icon/light-bulb_2988036#position=3&page=1&term=lightbulb&fromView=search *Note: at times I refer to the Iowa "caucus". A primary and caucus are both ways of selecting a party's nominee. They are quite different in nature and one o...
The S&P 500 Isn't As Diversified as You Think. Here's Why.
2.3K views · 5 months ago
The Planet Fitness Problem : Improved Markov Chains
2.7K views · 5 months ago
The Easy Trick to Understand any Data Science Formula
5K views · 6 months ago
3 Psychological Tips to Hack the Data Science Interview
3.4K views · 6 months ago
The Unexpected Pure Math You Have to Know as a Data Scientist : Pythagorean Means
5K views · 7 months ago
The Title and Thumbnail Change if You Watch this Video | Reinforcement Learning
2.4K views · 7 months ago
This is the Math You Need to Master Reinforcement Learning
9K views · 7 months ago
I Day Traded $1000 Using Reinforcement Learning and Bayesian Statistics
8K views · 8 months ago
Can You Solve the Two Radio Problem?
2K views · 8 months ago
Detecting Phrases with Data Science : Natural Language Processing
1.9K views · 8 months ago
BM25 : The Most Important Text Metric in Data Science
7K views · 8 months ago
Spearman Correlation - Simply Explained
9K views · 9 months ago
Why is the Formula for F1-Score Unnecessarily Complicated?
4.4K views · 10 months ago
Incredible and amazing explanation! Thanks so much for such great content!
12:54 Wow, I didn't know about moments - except for "moment-generating functions".
11:29 What about the median? Which is better - mean or median? Could you do a video on this? Thanks!
I first learned about kurtosis in my high school research class - it was a stat we looked at for our project, but I really didn't know what it was aside from being a weird word... Thank you for the explanation. This is a great, well-needed video!
9:16 But you said that the standard deviations of both distributions are the same, correct? How is it possible that the number of outliers differs between the two distributions, yet they still have the same standard deviation? Thanks!
Thank you for a good explanation of a seemingly weird looking formula. It would be hard to forget this formula now that I got it from here.
Hey @ritvikmath, I tried using ADF and KPSS on 3 sample datasets, similar to the ones in your video. One dataset violates the constant mean, another the constant variance, and the last one has seasonality. However, it seems that both ADF and KPSS report the non-constant variance and the seasonality datasets as stationary. They accurately flag the non-constant mean dataset. Any thoughts as to why that would happen?
great video. thanks !
Is the change of coordinates from cartesian to polar related to the so called 'kernel trick'?
So help me understand this, anyone. Let's suppose I want the perceptron to classify spam and non-spam mail. Then, by a point (on the plot), do we mean a single email? Also, does the update of parameters happen after each mail is classified, or after the entire training set of emails is classified? From the video, I get the impression that the update happens only after all emails are classified. How do we update the parameters if there are multiple wrongly classified points?
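For what it's worth, in the standard perceptron algorithm the parameters update immediately after each misclassified point, not once per full pass. A minimal sketch of that rule, with a made-up toy dataset (not from the video):

```python
# Minimal perceptron sketch: each "point" is one example (e.g., one email's
# feature vector with label +1 = spam, -1 = not spam). The weights update
# right after each misclassified point, inside the loop over examples.
def perceptron(points, labels, epochs=10, lr=1.0):
    w = [0.0] * len(points[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(points, labels):  # y is +1 or -1
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: update immediately
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: label is +1 when x[0] > x[1]
pts = [(2.0, 1.0), (3.0, 0.5), (1.0, 2.0), (0.5, 3.0)]
ys = [1, 1, -1, -1]
w, b = perceptron(pts, ys)
```

With multiple wrongly classified points, each one triggers its own update as it is encountered, so no special aggregation step is needed.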
Hey Ritvik, thanks for this video! I was wondering how will the parameters be updated if we have multiple points that are wrongly classified?
thanks a lot
How do you do it for panel data? I have prices for 1000 parts in a dataframe, and for each part I have 20 years of price information along with other variables. I have to predict the prices of each component for the next 5 years. It is impossible to look at trends for each part individually. How does one work with panel-format data for forecasting? Can I use this model?
nicely articulated. ty!
I guess using a probability algorithm and a key fact you can sometimes come up with a pretty accurate guesstimate... Cool!
8:47 but what about when you have more than 1 mistake and some should be in the upper and some in the lower?
great explanation!!!
The fact that the nazis were OCD was their downfall
Your videos stand out because you truly put yourself in the shoes of the student, understanding their concerns. This approach makes your content more than just plain talk about a topic; it's actual teaching.
This is really great!
Cool video, I think bending might go beyond Mendelian inheritance though. I remember an episode that had identical twins, one was an earth bender, the other one wasn't.
As always Ritvik never disappoints when it comes to breaking down a concept without relying on mathematical equations, and still giving the best overview of a concept in the most generalized way possible. Thank you!
you saved my life, I will watch all your videos before my exam on machine learning
Epic explanation
Is there a markov chain model for this area calculation that converges quicker. Something that actively samples the inside of the circle preferentially.
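For context, the baseline this comment wants to improve on is plain uniform sampling over the bounding square; a minimal sketch of that estimator (function name and parameters are my own):

```python
import random

# Uniform Monte Carlo estimate of the unit circle's area: sample uniformly
# in the [-1, 1] x [-1, 1] square and scale the hit fraction by the
# square's area (4). The estimate converges to pi at the usual 1/sqrt(n) rate.
def circle_area_mc(n, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n

est = circle_area_mc(100_000)  # close to pi for large n
```

Variance-reduction schemes (stratified sampling, importance sampling that concentrates draws near the boundary) can indeed beat this uniform baseline, though for a region as simple as a circle the gains are modest.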
The sheer quantity of -1's in that formula makes me sad, especially when the numbers you need to input are all +1 of the quantity you'd expect. Like this could easily be (alpha + k - 1)! / product(alpha_i!) * p_i ^ alpha_i, where alpha is the total population, alpha_i is each population, and k is the number of groups.
what if the similarities are negative??
My man- u r my savior and a legend.❤
Many thanks 🎉❤
Takeaway for myself: ARIMA is the model applied to time series data, where there is time dependence. It has one more step: transforming from the correlation of x with time to the correlation of x with x(t-1) (its predecessor). And from the formula of linear regression, the difference of x and x(t-1) is constant (the slope), so it doesn't depend on time. The 3 criteria for a series that ARMA can be applied to (stationarity): constant mean, constant variance, no seasonality.
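The differencing idea in this takeaway can be sketched in a few lines (a toy illustration with a linear trend, my own example, not code from the video):

```python
# A series with a linear trend x_t = a + b*t has a mean that grows with t
# (non-stationary), but its first difference x_t - x_{t-1} = b is constant,
# removing the dependence on time.
def first_difference(series):
    return [x1 - x0 for x0, x1 in zip(series, series[1:])]

trend = [2.0 + 0.5 * t for t in range(10)]  # mean depends on t
diffed = first_difference(trend)            # every entry equals the slope 0.5
```

This is exactly the "I" (integrated) step of ARIMA: difference until the stationarity criteria hold, then fit ARMA.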
not bad
I can't thank you enough. Although I understood the concept in my class, still I wasn't able to visualize how this is working until I saw your video. You are helping thousands of us students. Thank you sooo much.
Thank you, this video really helps to look at probabilities in a way that makes it more understandable.
Awesome video
Thank you so much 😊
Will there be a continuation of this?
very helpful. Thanks for explaining this with simplicity.
I make better percentage gains using simple technical analysis... this mumbo jumbo only complicates things.
I started this playlist out of curiosity, to get an introduction to time series analysis, since it's a term I have been hearing for a while now. Now I feel like I will finish this playlist in no time. You explain things really well, and you have just shown me, awesomely, how to code time series analysis, which I understood very well. Great job, man!! We very much appreciate it!
Thorough and great explanation of this subject matter. Thanks so much
Didn't understand a thing. Please, someone tell me what I might have missed before this video, although I have watched the previous 4 videos.
How is it that the coefficients b are 1 and 1? I'm a little confused🥹
I think it's worth pointing out that this assumption that you only ever depend on the previous state is a "weak" assumption. You can get around it by expanding your state space to include the previous value. The bigger limitation of Markov chains is that the transition probabilities are fixed over time, that is, the process can be written as a linear expression x_t = A * x_{t-1} with the same A at every step, which is the heart of the simplification here.
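The state-expansion trick mentioned in this comment can be sketched as follows (the second-order update rule is a made-up example of my own):

```python
# A process whose next value depends on the last TWO values is not
# first-order Markov in the raw values, but it becomes first-order Markov
# if the "state" is redefined as the pair (previous, current).
def second_order_step(prev, curr):
    # hypothetical rule depending on two past values
    return (prev + curr) % 3

def expanded_step(state):
    prev, curr = state
    nxt = second_order_step(prev, curr)
    return (curr, nxt)  # new state is again a pair: a first-order transition

state = (1, 2)
path = [state]
for _ in range(5):
    state = expanded_step(state)
    path.append(state)
```

Each transition now depends only on the current (expanded) state, so the Markov property holds exactly, at the cost of a larger state space.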
What an amazing tutorial! You managed to give a detailed yet simple explanation. Gained a subscriber.
Thank you so much. This is amazing!!
11:35 useful to think about the same "ratio", thank you boss
Great explanations: very easy to understand!
great video
just love this simplicity!
why this accent though?😑