Self-Test Questions
Does my website help you? If you would like to support this project, you are welcome to buy me a coffee at paypal.me/markusbilz. Thank you. ❤️
The following questions are intended for exam preparation using Active Recall and serve as a complement to flashcards.
On the origin of the questions:
- Questions from past exams are marked with a star (⭐).
- Another part of the questions comes from the lecture Maschinelles Lernen (Grundverfahren). These questions are marked with a brain (🧠).
- Questions from the University of Toronto are marked with a camping symbol (🏕️).
- Questions from the University of Berkeley are marked with a firefighter (🧑‍🚒).
- The remaining questions are my own questions or interview questions.

Overview of the algorithms covered in the lecture
- F: What are the characteristics of big data? ⭐
- volume
- variety
- velocity
- veracity
- value
- F: Explain three characteristics of big data. ⭐
- Volume refers to the sheer amount of data that is generated.
- Variety refers to the diversity of types of data. Data can come in structured, semi-structured, or even unstructured types.
- Velocity refers to the sheer speed at which data is generated (and processed).
- Veracity refers to the quality of data or accuracy of the collected data. To resolve data quality issues one has to apply sophisticated pre-processing.
- F: What is the difference between veracity and variety?
- Veracity refers to the quality of data (e. g. noise in data), while variety refers to the types of data (e. g. unstructured data) in which data can come. As data is often collected from different sources, both their types and their quality can differ.
- F: Compare ML to Statistics. What are the most significant differences?
- Statistics is:
- based on hypotheses, followed by the collection of data and analysis
- model-oriented with an emphasis on parametric models
- focused on understanding and hypothesis testing
- Whereas in Machine Learning:
- there is seldom an a priori hypothesis
- data is collected in advance
- analysis is data-driven, not hypothesis-driven
- analysis is algorithm-oriented rather than model-oriented
- the focus lies on prediction
- F: Compare ML to Econometrics. In which way do both differ?
- Econometrics is:
- concerned with causal inference and counterfactuals
- mostly centred around linear regressions and complex structural models
- standard errors are often reported after one run
- Machine Learning is:
- concerned with prediction
- using all sorts of data-driven models, e. g. trees, NN, etc.
- F: What are the characteristics of unstructured data? Explain them. ⭐
- Unstructured data is:
- Nonnumeric: There is no predefined numeric representation for the constructs of interest. It requires manual or automatic coding prior to analysis.
- Multifaceted: A single unit of unstructured data possesses multiple facets. Each aspect of the data provides unique information for studying different types of research goals. E. g. voice data presents information about the speaker such as pitch and speech rate; it can be used both in psychology and in communication research.
- Concurrent representation: The simultaneous presence of a single data unit's multiple facets, each providing unique information, allows representing different phenomena at the same time. One can study different research questions with a single unit of unstructured data.
- F: What is 'structured data'?
- Structured data is data that adheres to a pre-defined data model and is therefore straightforward to analyze.
- Structured data conforms to a tabular format with a relationship between the different rows and columns.
- F: What is unsupervised learning? ⭐
- Observe data and construct a low complexity description of the data.
- That means in unsupervised learning, the dataset that the data is transformed into is not previously known or understood. The data is not labeled. (Grokking p. 13)
- We observe only the features. We are not interested in prediction, because we do not have an associated response variable.
- F: What is supervised learning? ⭐
- We observe both a set of features $X_1, X_2, \ldots, X_p$ for each object, as well as a response or outcome variable $Y$. The goal is then to predict $Y$ using $X_1, X_2, \ldots, X_p$.
- Examples include regression and classification.
- F: What are advantages / disadvantages of unsupervised learning techniques?
- No labeled data required, which is often expensive and laborious. (+)
- Adding labels to the data after clustering is often easier (+) (own)
- Unsupervised techniques such as clustering help with data understanding of the raw data. (+) (own)
- Unsupervised learning is more subjective, as there is no simple objective (-)
- F: Name practical applications of unsupervised learning.
- Subgroups of breast cancer patients grouped by their gene expression measurements
- Groups of shoppers characterized by their browsing and purchase histories
- Movies grouped by ratings assigned by movie raters
- F: What is the goal of unsupervised learning?
- The goal of unsupervised learning is to discover interesting things about the measurements, such as informative ways to visualize the data and finding subgroups among the variables or observations.
- F: Give two examples for unsupervised learning techniques.
- Clustering algorithms such as $k$-means
- Dimensionality reduction techniques such as PCA
- F: Give examples for structured/unstructured data.
- Unstructured (low degree of organization):
- Video data, as videos come in different formats, compression ratios, and sizes, and the video has to be transformed first to extract information from every single frame
- Image data, just like videos.
- Structured (high degree of organization):
- Numeric secondary data, e. g. sales figures, as they come in a standardized and easy-to-process format, e. g. floats with a fixed number of decimal places
- Categorical data, e. g. gender, as there are predefined formats
- F: Give a brief explanation of categorical, binary, ordinal, and numeric variables.
- categorical/nominal: Names of things or symbols.
- binary: A nominal variable with two categories or states: 0 or 1.
- ordinal: Ordinal variables have a meaningful order or ranking among them, but the magnitude between successive values is not known.
- numeric: A quantitative variable. Numeric variables could be interval-scaled or ratio-scaled.
- F: Which steps are part of the CRISP-DM model? Explain them in-depth.
- 1. Business understanding, i. e. developing an understanding of the business objectives and requirements of the data mining project
- 2. Data understanding, i. e. identify and collect the data set needed to fulfill the business goals
- 3. Data preparation, i. e. prepare data for modeling
- 4. Modeling, i. e. build several models and assess them on a technical level
- 5. Evaluation, i. e. evaluate whether the models are able to help achieve the business goals; plan the next steps
- 6. Deployment, i. e. deploy the model to production and make it accessible to customers
- F: Explain common techniques for data gathering.
- Bulk downloads: Downloading large amounts of data. Often done using sophisticated software.
- APIs: Accessing data through machine-readable interfaces. Examples include Google Maps API.
- Web Scraping: Extraction of data from websites. Often done using bots and web crawlers or manually.
- F: Why is it desirable to work on normalized data?
- Some algorithms require normalized data, such as $k$-means clustering, which is 'isotropic' in all directions of space and therefore tends to produce more or less round clusters. Not standardizing the data would give more relative weight to variables with a larger variance. (See here.)
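A minimal scikit-learn sketch (own example, not from the lecture) of why scaling matters for $k$-means; the synthetic income/age features are purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Two features on very different scales: income (EUR) and age (years).
X = np.column_stack([rng.normal(50_000, 15_000, 300), rng.normal(40, 12, 300)])

# Without scaling, the Euclidean distance is dominated by the high-variance income feature.
labels_raw = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# StandardScaler first gives every feature zero mean and unit variance.
X_scaled = StandardScaler().fit_transform(X)
labels_scaled = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
```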
- F: Explain common techniques to analyze the relationship between variables.
- A scatter plot (or scatter diagram) is used to show the relationship between variables
- Bar plot for high dimensional data
- Mean graph for categorical data
- Correlation analysis
- F: How can missing data be replaced? Explain.
- mean-based imputation: i. e. mean is calculated from all observations
- median-based imputation: Same as above but with median.
- stratified imputation: i. e. categories / structure of the data are considered for replacements. E. g. a missing height is imputed separately for male and female observations.
- regressed imputation: i. e. replacing missing values by predictions of a regression model
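A small pandas sketch (own example; the columns gender, weight, and height are hypothetical) of mean-based, stratified, and regressed imputation:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "gender": ["m", "f", "m", "f", "m", "f"],
    "weight": [80, 62, 75, 58, 90, 65],
    "height": [1.82, 1.65, None, 1.60, 1.95, None],
})

# mean-based imputation: one global mean for all observations (use .median() for median-based)
mean_imputed = df["height"].fillna(df["height"].mean())

# stratified imputation: separate means per gender
strat_imputed = df["height"].fillna(df.groupby("gender")["height"].transform("mean"))

# regressed imputation: predict missing heights from weight
known = df.dropna(subset=["height"])
reg = LinearRegression().fit(known[["weight"]], known["height"])
missing = df["height"].isna()
reg_imputed = df["height"].copy()
reg_imputed[missing] = reg.predict(df.loc[missing, ["weight"]])
```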
- F: Explain 3 patterns in which missing data can occur.
- Completely random / MCAR: Missing values have no pattern and cannot be predicted.
- Missing at random / MAR: Missing values can be predicted using other data available for the observation, e. g. by assigning a categorical value.
- Latent, yet unknown variable: Missing value depends on a latent and highly correlated variable.
- F: What is a training, test, and validation set for?
- Training set is used to fit all potential models
- Validation set is used to select hyperparameters of a model
- Test set is used to estimate the predictive power of a model on unseen data
- F: What is the risk with tuning hyperparameters using a test dataset? 🧑‍🚒
- Tuning model hyperparameters to a test set means that the hyperparameters may overfit that test set. If the same test set is used to estimate performance, it will produce an overestimate. Using a separate validation set for tuning and a test set for measuring performance provides an unbiased, realistic measurement of performance. (Berkeley p. 14)
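A minimal sketch (own example using scikit-learn and the built-in diabetes data, not the exam setting) of tuning on a validation set and touching the test set only once:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)

# 60 % training, 20 % validation (hyperparameter tuning), 20 % test (final performance estimate)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

best_alpha, best_mse = None, float("inf")
for alpha in [0.01, 0.1, 1.0, 10.0]:
    mse = mean_squared_error(y_val, Ridge(alpha=alpha).fit(X_train, y_train).predict(X_val))
    if mse < best_mse:
        best_alpha, best_mse = alpha, mse

# The test set is used exactly once, after tuning, to avoid an optimistic estimate.
final_model = Ridge(alpha=best_alpha).fit(X_train, y_train)
test_mse = mean_squared_error(y_test, final_model.predict(X_test))
```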
- F: Explain how best subset selection works in 3 steps.
- 1. Let $M_0$ denote the null model, which contains no predictors. This model simply predicts the sample mean for each observation.
- 2. For $k = 1, 2, \ldots, p$:
- a. Fit all $\binom{p}{k}$ models that contain exactly $k$ predictors.
- b. Pick the best among these $\binom{p}{k}$ models, and call it $M_k$. Here best is defined as having the smallest RSS, or equivalently the largest $R^2$.
- 3. Select a single best model from among $M_0, \ldots, M_p$ using cross-validated prediction error, $C_p$ (AIC), BIC, or adjusted $R^2$.
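A brute-force sketch of steps 1–2 (own code, assuming scikit-learn and the small diabetes dataset, where enumerating all subsets is still feasible):

```python
from itertools import combinations

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)
n, p = X.shape

best_per_size = {}  # k -> (RSS, feature indices) of the best model M_k
for k in range(1, p + 1):
    for subset in combinations(range(p), k):  # all (p choose k) models with exactly k predictors
        cols = list(subset)
        pred = LinearRegression().fit(X[:, cols], y).predict(X[:, cols])
        rss = np.sum((y - pred) ** 2)
        if k not in best_per_size or rss < best_per_size[k][0]:
            best_per_size[k] = (rss, cols)
# Step 3 (choosing among M_1, ..., M_p) would use cross-validation, AIC, BIC, or adjusted R^2.
```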
- F: Explain how forward stepwise selection works in 3 steps.
- Intuition: Instead of searching through all possible subsets, we can seek a good path through them. Forward stepwise selection starts with the intercept, and then sequentially adds into the model the predictor that most improves the fit. Like best subset regression, forward stepwise produces a sequence of models indexed by $k$, the subset size, which must be determined. (Hastie p. 59)
- More formal description: 1. Let $M_0$ denote the null model, which contains no predictors. 2. For $k = 0, 1, \ldots, p - 1$: a. Consider all $p - k$ models that augment the predictors in $M_k$ with one additional predictor. b. Choose the best among these $p - k$ models, and call it $M_{k+1}$. Here best is defined as having the smallest RSS or highest $R^2$. 3. Select a single best model from among $M_0, \ldots, M_p$ using cross-validated prediction error, $C_p$, AIC, BIC, or adjusted $R^2$.
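A corresponding greedy sketch of forward stepwise selection (own code, same assumptions as above):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)
p = X.shape[1]

selected = []                 # start with the null model (no predictors)
remaining = list(range(p))
path = []                     # sequence of models M_1, ..., M_p

while remaining:
    # add the single predictor that reduces the RSS the most
    rss_per_candidate = {}
    for j in remaining:
        cols = selected + [j]
        pred = LinearRegression().fit(X[:, cols], y).predict(X[:, cols])
        rss_per_candidate[j] = np.sum((y - pred) ** 2)
    best_j = min(rss_per_candidate, key=rss_per_candidate.get)
    selected.append(best_j)
    remaining.remove(best_j)
    path.append(list(selected))
# The final subset size is again chosen via cross-validation, AIC, BIC, or adjusted R^2.
```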
- F: Explain how backward stepwise selection works in 3 steps.
- Backward stepwise selection starts with the full model and sequentially deletes the predictor that has the least impact on the fit. The candidate for dropping is the variable with the smallest Z-score. (Hastie p. 60)
- More formal definition: Let $M_p$ denote the full model, which contains all $p$ predictors.
- 1. For $k = p, p - 1, \ldots, 1$: a. Consider all $k$ models that contain all but one of the predictors in $M_k$, for a total of $k - 1$ predictors. b. Choose the best among these $k$ models, and call it $M_{k-1}$. Here best is defined as having the smallest RSS or highest $R^2$.
- 2. Select a single best model from among $M_0, \ldots, M_p$ using cross-validated prediction error, $C_p$, AIC, BIC, or adjusted $R^2$.
- F: Compare the best subset selection to forward selection.
- Forward stepwise selection begins with a model containing no predictors, and then adds predictors to the model, one-at-a-time, until all of the predictors are in the model.
- Best subset selection does not add predictors one-at-a-time but chooses from all models containing exactly $k$ variables. The selected predictors might differ for different $k$.
- Best subset selection becomes infeasible for a large number of variables (roughly $p > 40$). (Hastie p. 59)
- F: Compare the subset selection methods forward and backward stepwise selection.
- Backward stepwise selection is pretty much the inverse of forward stepwise selection.
- F: When is it desirable to use backward stepwise selection and when is it desirable to use forward stepwise selection or best subset selection?
- Computationally best subset selection is most demanding and becomes infeasible for a large number of features.
- All three deliver similar results (See the comparison in Hastie p. 59)
- F: What is an alternative to subset selection methods presented in the lecture?
- Forward-stagewise regression
- F: What are the reasons why shrinkage methods such as LASSO are preferred over subset selection methods such as best subset selection?
- If the best subset can be found, it is indeed better than the LASSO in terms of selecting the variables that actually contribute to the fit.
- In practice LASSO is still preferred as it is computationally much easier to estimate, e. g. through the calculation of regularization paths using pathwise coordinate descent, whereas best subset selection is an NP-hard problem. (see here.)
- TODO: The LASSO part is hard to follow.
- F: Explain how Linear Regression works.
- Linear regression is a linear model, i. e. a model that assumes a linear relationship between the input variables $x_1, \ldots, x_k$ and the single output variable $y$. $y$ is a linear combination of the input variables $x_1$ to $x_k$. To best describe the relationship between the input variables and the output variable, a line or hyperplane is fitted to the point cloud. (own)
- The multiple linear regression model for the population is defined as: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k + \epsilon$, where $\beta_0$ is the intercept, $\beta_1, \ldots, \beta_k$ are the regression coefficients of the $k$ independent variables, and $\epsilon$ is the error. $x_1, \ldots, x_k$ are the input variables and $y$ is the dependent variable. In vector notation: $y = X\beta + \epsilon$.
- F: What is the purpose of $\beta$ in a Multiple Linear Regression Model?
- $\beta$ is a $(k+1)$-dimensional vector, where $\beta_0$ is the intercept and $\beta_1, \ldots, \beta_k$ are the regression coefficients of the $k$ independent variables.
- F: Explain how an optimal estimate for $\beta$ can be derived.
- A linear regression model has the best fit when the error term $\epsilon$ is minimal. To achieve this, the regression coefficients $\beta$ have to be estimated such that the error term is minimized. It's common to use squared error terms for the minimization.
- This leads to the following least-squares problem: $\hat{\beta} = \arg\min_{\beta} \|y - X\beta\|_2^2$
- Which can be reformulated to the closed-form solution (normal equations): $\hat{\beta} = (X^{\top}X)^{-1}X^{\top}y$
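A NumPy sketch (own example with synthetic data) of the closed-form estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=(100, 3))])  # column of ones for the intercept
beta_true = np.array([2.0, 1.0, -0.5, 3.0])
y = X @ beta_true + rng.normal(scale=0.1, size=100)

# Normal equations: beta_hat = (X'X)^-1 X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically more stable alternative via least squares
beta_hat_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
```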
- F: Does standard Linear Regression require scaling?
- No, as multiplying $X_j$ by a constant $c$ simply leads to a scaling of the least squares coefficient estimate by a factor of $1/c$.
- F: Why do we optimize for the SSE when estimating $\beta$?
- It is fully differentiable
- Easy to optimize
- It also makes sense, as minimizing the SSE corresponds to the maximum likelihood estimate under normally distributed errors.
- F: Give the definition for the SSE.
- $SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$, where $\hat{y}_i$ is the prediction for $y_i$.
- F: Give the definition for the SSR.
- $SSR = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2$, where $\hat{y}_i$ is the prediction for $y_i$ and $\bar{y}$ is the observed mean.
- F: Give the definition for the SST.
- $SST = \sum_{i=1}^{n} (y_i - \bar{y})^2 = SSR + SSE$, where $\bar{y}$ is the observed mean of $y$.
- F: Give a graphical intuition for the SSE, SSR and SST.
- (Figure omitted.) For each observation, the total deviation from the mean, $y_i - \bar{y}$ (SST), splits into the part explained by the regression, $\hat{y}_i - \bar{y}$ (SSR), and the residual $y_i - \hat{y}_i$ (SSE).
- F: Name two measures to test the goodness of fit of a Linear Regression model.
- Total Sum of Squares (SST)
- Coefficient of determination $R^2$
- F: Write the definitions of $R^2$, Adj. $R^2$, MAE, and RMSE.
- $R^2 = \frac{SSR}{SST} = 1 - \frac{SSE}{SST}$
- Adj. $R^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - k - 1}$
- $MAE = \frac{1}{n}\sum_{i=1}^{n} |y_i - \hat{y}_i|$
- $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}$
- where $n$ is the number of observations and $k$ the number of independent variables.
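A short helper (own code) that computes these four measures exactly as defined above:

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray, k: int) -> dict:
    """R^2, adjusted R^2, MAE and RMSE; k is the number of independent variables."""
    n = len(y_true)
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1 - sse / sst
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    mae = np.mean(np.abs(y_true - y_pred))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return {"R2": r2, "Adj. R2": adj_r2, "MAE": mae, "RMSE": rmse}
```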
- F: Give an intuition for $R^2$, MAE and RMSE.
- $R^2$ / adj. $R^2$: How well can my model explain the variance?
- MAE: How does the model perform on average?
- RMSE: How many large prediction deviations does the model have? (lecture BDA p. 43)
- F: Compare $R^2$ and Adj. $R^2$ to MSE, MAE and RMSE. Name advantages and drawbacks.
- MSE:
- MSE is differentiable, which is important for finding optima. (+)
- MAE:
- The scale of MAE and RMSE depends on the scale of the dependent variable. (-)
- MAE is not differentiable. (-)
- MAE is more robust to outliers. (+)
- $R^2$:
- The measure always increases when new independent variables are added, which can lead to the addition of redundant variables in the model. (-)
- F: Compare MAE to RMSE.
- RMSE penalizes large errors more than MAE. This can be useful if being off by ten is more than twice as bad as being off by five. If, however, being off by ten is just twice as bad as being off by five, MAE should be preferred. (See here.)
- F: Compare $R^2$ and Adj. $R^2$ to MAE and RMSE. Which of these are normed?
- $R^2$ and Adj. $R^2$ lie between $0$ and $1$. (On unseen data $R^2$ can even become negative if the prediction is worse than simply predicting the mean, i. e. the SSE is larger than the SST.)
- MAE and RMSE lie between $0$ and $\infty$. Whether an RMSE value is acceptable depends on the scale of the variables (see here). RMSE and MAE are $0$ for models with a perfect fit.
- F: In which way does the adjusted $R^2$ improve upon the standard $R^2$?
- A model might have a good in-sample fit but a poor out-of-sample fit if too many regressors are used.
- Adj. $R^2$ is an $R^2$ that has been corrected by a penalty function and takes into account the number $k$ of regressors in the model.
- F: Explain the three steps in fitting a regression model.
- Specification:
- Determine dependent and explanatory variables.
- Exclude explanatory variables without predictive power.
- Collect data for dependent and explanatory variables.
- Fitting / Estimating:
- Estimating regression coefficients.
- Diagnosis:
- Determine the quality of the regression model with e. g. $R^2$, adj. $R^2$, MSE and MAE.
- Determine the model's significance and the significance of the regression coefficients.
- Analyze the standard deviation of the regression errors.
- F: Explain how one can test for the significance of a regression model. Give the $H_0$ and $H_1$ hypotheses for regression models.
- $H_0$ states that all regression coefficients are equal to zero, which means none of the explanatory variables play any role. $H_1$ states that at least one coefficient is different from zero.
- F: Give an intuition for the Analysis of Variance (ANOVA) test.
- The ANOVA-Test compares whether the means of two separate sets are equal.
- The observation $y_{ij}$, which is the $j$-th observation of the $i$-th group, can be decomposed into the overall mean $\bar{y}$, the between-groups deviation $\bar{y}_i - \bar{y}$, and the within-group deviation $y_{ij} - \bar{y}_i$. One gets:
- $y_{ij} = \bar{y} + (\bar{y}_i - \bar{y}) + (y_{ij} - \bar{y}_i)$
- F: How is the ANOVA test / $F$-test defined?
- $F = \frac{SSR / k}{SSE / (n - p)}$, where $n$ is the sample size, $p$ the number of parameters in the model, and $k$ the number of slope parameters (i. e. $p = k + 1$).
- F: Explain how one can interpret the $F$-test.
- If the $p$-value of the $F$-test is less than a significance level $\alpha$, the model does explain some variation of the dependent variable.
- One needs an $F$-table to look up the critical value for the corresponding significance level $\alpha$ and degrees of freedom.
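A small statsmodels sketch (own example with synthetic data) of the overall $F$-test:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] + rng.normal(size=200)   # only the first variable matters

model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.fvalue, model.f_pvalue)  # F statistic and its p-value for H0: all slopes are zero
# If the p-value is below the chosen significance level (e.g. 0.05), H0 is rejected.
```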
- F: Explain what multicollinearity is.
- Multicollinearity refers to the situation in which two or more explanatory variables in a multiple regression model are highly correlated.
- Tests for multicollinearity are necessary after the model's significance and the significance of the independent variables have been determined, as in the presence of strong multicollinearity a change in one explanatory variable also leads to a change in another explanatory variable.
- F: Name three possible indicators for multicollinearity.
- Sensitivity of regression coefficients to the inclusion of additional explanatory variables
- A change from significance to insignificance after more explanatory variables have been added
- An increase in the model's standard error of the regression
- F: How can one test for multicollinearity?
- One can use the variance inflation factor (VIF)
- F: Give the definition for the variance inflation factor.
- To check the $j$-th variable for multicollinearity, one can calculate the VIF as follows:
- The $j$-th variable is regressed on the remaining $k - 1$ variables. The resulting regression would look like: $x_j = \alpha_0 + \alpha_1 x_1 + \cdots + \alpha_{j-1} x_{j-1} + \alpha_{j+1} x_{j+1} + \cdots + \alpha_k x_k + \epsilon$. From this regression we obtain its coefficient of determination $R_j^2$, and $VIF_j = \frac{1}{1 - R_j^2}$.
- F: What is the intuition of the Variance Inflation Factor?
- The $j$-th variable is regressed on the remaining $k - 1$ variables / features.
- If $R_j^2$ is large, that means the remaining variables can explain the $j$-th variable, and so the resulting VIF will be large.
- $VIF_j = \frac{1}{1 - R_j^2}$
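A small sketch (own code) that computes the VIF directly from this definition; the near-collinear features are synthetic:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def vif(X: np.ndarray, j: int) -> float:
    """VIF of the j-th column: regress x_j on the remaining columns and use 1 / (1 - R_j^2)."""
    others = np.delete(X, j, axis=1)
    r2_j = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
    return 1.0 / (1.0 - r2_j)

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.1, size=500)   # nearly collinear with x1
x3 = rng.normal(size=500)
X = np.column_stack([x1, x2, x3])
print([round(vif(X, j), 1) for j in range(X.shape[1])])  # large VIFs for the first two columns
```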
- F: How can the VIF be interpreted?
- A VIF of 10 indicates a severe impact due to multicollinearity.
- F: How can one test for linearity?
- Plot the regression residuals on the vertical axis and the values of the explanatory variables on the horizontal axis. Repeat for every explanatory variable. If the errors are randomly scattered around zero, the model assumption is correct.
- F: Why is it not desirable to use Linear Regression for default prediction?
- In default prediction one searches for the probability of default, which ranges between $0$ and $1$.
- Fitting a line to a binary response variable (1 = default / 0 = non-default) could lead to estimates outside the $[0, 1]$ interval, making them hard to interpret as probabilities, e. g. if probabilities are negative.
- Nevertheless, the predictions provide an ordering and can be interpreted as crude probability estimates.
- F: Explain scenarios, where Ridge Regression would be preferred over LASSO.
- Ridge only performs parameter shrinkage and no variable selection.
- Ridge regression is preferred if one wants to insert some prior knowledge into the approach. With ridge, one has the ability to say that all features have at least some weight, even if it is very little. (See here.)
- F: Explain scenarios where LASSO are preferred over Ridge Regression.
- As with ridge regression, the LASSO shrinks the coefficient estimates towards zero.
- However, in the case of LASSO some coefficient estimates are forced to be exactly equal to zero (zeroed out) when the tuning parameter $\lambda$ is sufficiently large.
- Therefore, LASSO does variable selection automatically and shrinkage of parameters.
- F: Name two approaches for shrinking regression coefficients towards zero.
- ridge regression
- LASSO
- F: Explain what regularization is and why it is useful.
- Regularization adds a penalty (shrinkage) term on the size of the coefficients to the loss function. This discourages overly complex models, reduces variance and overfitting (at the cost of a small bias), and therefore often improves out-of-sample predictions.
- F: Explain the ridge regression. ⭐
- Ridge regression is a regularization approach that extends linear regression. Regularization is used to prevent coefficients from fitting the training data so perfectly that they overfit. This is done by adding a penalty term to the loss, sometimes referred to as a regularization term or shrinkage penalty. In the case of ridge regression, it is the sum of the squared weights.
- Taking this into account, one gets the following formula for ridge regression:
- $\hat{\beta}^{ridge} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} \beta_j^2 \right\}$
- where $\lambda \geq 0$ is a tuning parameter, to be determined separately. The tuning parameter $\lambda$ serves as control of the relative impact of these two terms on the regression coefficients. It should be selected using cross-validation.
- Still, ridge regression seeks coefficient estimates that fit the data well through minimizing the RSS.
- The shrinkage penalty is small when $\beta_1, \ldots, \beta_p$ are close to zero, and so it has the effect of shrinking the estimates of $\beta_j$ towards zero. However, $\beta_0$ is left out of the penalty term, as penalizing the intercept would just shift the predictions by some amount. (Hastie p. 64)
- As such, ridge regression:
- yields non-sparse outputs, as coefficients are shrunk towards zero but never actually become exactly 0.
- doesn't allow for feature selection, for the same reason as above.
- typically yields better results than LASSO.
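A minimal scikit-learn sketch (own example; RidgeCV and the diabetes data are just stand-ins) of ridge regression with $\lambda$ (called alpha in scikit-learn) chosen by cross-validation:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X = StandardScaler().fit_transform(X)   # shrinkage penalties are sensitive to feature scale

ridge = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X, y)
print(ridge.alpha_)    # selected tuning parameter
print(ridge.coef_)     # coefficients are shrunk towards zero, but none are exactly zero
```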
- F: Why is the intercept $\beta_0$ not part of the regularization term?
- The intercept $\beta_0$ has been left out of the penalty term. Penalization of the intercept would make the procedure depend on the origin chosen for $Y$; that is, adding a constant $c$ to each of the targets $y_i$ would not simply result in a shift of the predictions by the same amount. (Hastie p. 64)
- Indeed, in the presence of an unpenalized intercept term, adding $c$ to all $y_i$ will simply lead to $\beta_0$ increasing by $c$ as well, and correspondingly all predicted values $\hat{y}_i$ will also increase by $c$. This is not true if the intercept is penalized: $\beta_0$ will have to increase by less than $c$. (see here.)
- F: Match the $L_1$ norm, the $L_2$ norm, ridge regression and LASSO to their counterparts.
- ridge: $L_2$ norm
- LASSO: $L_1$ norm
- F: How is the $L_1$ norm defined?
- $\|\beta\|_1 = \sum_{j=1}^{p} |\beta_j|$
- F: How is the $L_2$ norm defined?
- $\|\beta\|_2 = \sqrt{\sum_{j=1}^{p} \beta_j^2}$
- F: Explain LASSO.
- Lasso regression is a regularization approach. Regularization is used to prevent coefficients from fitting the training data too perfectly. This is done by adding a penalty term to the loss, referred to as a regularization term or shrinkage penalty. In the case of LASSO regression, it is the sum of the absolute weights.
- Taking this into account, one gets the following formula for the LASSO: $\hat{\beta}^{lasso} = \arg\min_{\beta} \left\{ \sum_{i=1}^{n} \left( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \right)^2 + \lambda \sum_{j=1}^{p} |\beta_j| \right\}$, where $\lambda \geq 0$ is a tuning parameter, to be determined separately. The tuning parameter $\lambda$ serves as control of the relative impact of these two terms on the regression coefficients. It should be selected using cross-validation.
- Still, the LASSO seeks coefficient estimates that fit the data well through minimizing the RSS.
- The shrinkage penalty is small when $\beta_1, \ldots, \beta_p$ are close to zero, and so it has the effect of shrinking the estimates of $\beta_j$ towards zero. However, $\beta_0$ is left out of the penalty term. Some coefficient estimates are even forced to be exactly zero if $\lambda$ is sufficiently large.
- As such:
- Lasso regression yields sparse models. That is, models that involve only a subset of the variables.
- Can be used for feature selection.
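The analogous scikit-learn sketch for the LASSO (own example) illustrates the sparsity of the solution:

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

X, y = load_diabetes(return_X_y=True)
X = StandardScaler().fit_transform(X)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)
print(lasso.alpha_)                    # tuning parameter chosen by cross-validation
print(np.sum(lasso.coef_ == 0.0))      # some coefficients are exactly zero -> variable selection
```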
- F: Explain the difference between LASSO and ridge regression? ⭐
- Both are regularization approaches that prevent overfitting of an ordinary linear regression model and introduce smoothness into the model. This is done by adding a penalty on the weight vector that keeps the coefficients from fitting the training data so perfectly that they overfit.
- Both are shrinkage methods that shrink the regression coefficients towards zero.
- The difference between LASSO and ridge regression is the penalty term added to the MSE (or another loss function): ridge uses the sum of the squared weights, while the LASSO uses the sum of the absolute weights.
- Equivalently, ridge penalizes the $L_2$ norm of the coefficients, while the LASSO penalizes the $L_1$ norm.
- LASSO: